Covering 12 major languages, including English, Chinese, French, Hindi, Spanish, Arabic, Russian, Japanese, Korean, German, Italian, and Portuguese, and 38 minor languages so far.
📃 Paper • 🌐 Demo • 🤗 ApolloMoEDataset • 🤗 ApolloMoEBench • 🤗 Models • 🌐 Apollo
- [2024.10.15] ApolloMoE repo is published!🎉
12 Major Languages and 38 Minor Languages
🤗 Apollo2-0.5B • 🤗 Apollo2-1.5B • 🤗 Apollo2-2B • 🤗 Apollo2-3.8B • 🤗 Apollo2-7B • 🤗 Apollo2-9B
🤗 Apollo-MoE-0.5B • 🤗 Apollo-MoE-1.5B • 🤗 Apollo-MoE-7B
- 0.5B, 1.5B, 7B: User:{query}\nAssistant:{response}<|endoftext|>
- 2B, 9B: User:{query}\nAssistant:{response}<eos>
- 3.8B: <|user|>\n{query}<|end|><|assistant|>\n{response}<|end|>
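For reference, a minimal sketch (not part of the repo) of how these templates can be assembled at inference time; the model is expected to append `{response}` followed by the stop token listed above (`<|endoftext|>`, `<eos>`, or `<|end|>`):

```python
# Illustrative helper only: build the prompt prefix for each Apollo2 / Apollo-MoE family.
def build_prompt(query: str, model_size: str) -> str:
    if model_size in {"0.5B", "1.5B", "7B"}:
        return f"User:{query}\nAssistant:"            # response ends with <|endoftext|>
    if model_size in {"2B", "9B"}:
        return f"User:{query}\nAssistant:"            # response ends with <eos>
    if model_size == "3.8B":
        return f"<|user|>\n{query}<|end|><|assistant|>\n"  # response ends with <|end|>
    raise ValueError(f"Unknown model size: {model_size}")

print(build_prompt("What are the symptoms of anemia?", "7B"))
```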
- Dataset: 🤗 ApolloMoEDataset
  - The complete data is stored in ApolloMoEDataset.json, and a sample is shown in ApolloMoEDataset_sample.json.
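  A minimal loading sketch, assuming the dataset is hosted under the repo id `FreedomIntelligence/ApolloMoEDataset` (the badge above) and that the JSON files load directly with the `datasets` library:

  ```python
  from datasets import load_dataset

  # Assumption: the Hugging Face repo id matches the ApolloMoEDataset badge above.
  data = load_dataset("FreedomIntelligence/ApolloMoEDataset")
  print(data)

  # Alternatively, load a locally downloaded copy of the sample JSON file.
  sample = load_dataset("json", data_files="ApolloMoEDataset_sample.json")
  print(sample["train"][0])
  ```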
- Evaluation: 🤗 ApolloMoEBench
  - EN:
    - MedQA-USMLE
    - MedMCQA
    - PubMedQA: not used in the paper because its results fluctuated too much.
    - MMLU-Medical
      - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - ZH:
    - MedQA-MCMLE
    - CMB-single: not used in the paper
      - Randomly sampled 2,000 single-answer multiple-choice questions.
    - CMMLU-Medical
      - Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
    - CExam: not used in the paper
      - Randomly sampled 2,000 multiple-choice questions.
  - ES: Head_qa
  - FR:
    - Frenchmedmcqa
    - MMLU_FR
      - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - HI: MMLU_HI
    - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - AR: MMLU_AR
    - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - JA: IgakuQA
  - KO: KorMedMCQA
  - IT:
    - MedExpQA
    - MMLU_IT
      - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
  - DE: BioInstructQA (German part)
  - PT: BioInstructQA (Portuguese part)
  - RU: RuMedBench
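  All of the benchmarks above are multiple-choice, and scoring is handled by `4.eval.sh` in the pipeline below. As a rough illustration of the idea only, a minimal accuracy sketch; the field names `question`, `options`, and `answer` are hypothetical and not taken from ApolloMoEBench:

  ```python
  import re

  def extract_choice(generation):
      """Return the first standalone option letter (A-E) found in the model output."""
      match = re.search(r"\b([A-E])\b", generation)
      return match.group(1) if match else None

  def accuracy(examples, generate):
      """examples: list of dicts with hypothetical 'question'/'options'/'answer' keys;
      generate: callable mapping a prompt string to the model's generated text."""
      correct = 0
      for ex in examples:
          prompt = ex["question"] + "\n" + "\n".join(ex["options"])
          correct += extract_choice(generate(prompt)) == ex["answer"]
      return correct / len(examples)
  ```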
We take Apollo-MoE-0.5B as an example:
- Log in to Hugging Face
  ```bash
  huggingface-cli login --token $HUGGINGFACE_TOKEN
  ```
- Download the model to a local directory
  ```python
  import os
  from huggingface_hub import snapshot_download

  local_model_dir = os.path.join('/path/to/models/dir', 'Apollo-MoE-0.5B')
  snapshot_download(repo_id="FreedomIntelligence/Apollo-MoE-0.5B", local_dir=local_model_dir)
  ```
- Inference Example
  ```python
  import os
  from transformers import AutoTokenizer, AutoModelForCausalLM, GenerationConfig

  local_model_dir = os.path.join('/path/to/models/dir', 'Apollo-MoE-0.5B')

  model = AutoModelForCausalLM.from_pretrained(local_model_dir, trust_remote_code=True)
  tokenizer = AutoTokenizer.from_pretrained(local_model_dir, trust_remote_code=True)

  generation_config = GenerationConfig.from_pretrained(
      local_model_dir,
      pad_token_id=tokenizer.pad_token_id,
      num_return_sequences=1,
      max_new_tokens=7,
      min_new_tokens=2,
      do_sample=False,
      temperature=1.0,
      top_k=50,
      top_p=1.0,
  )

  inputs = tokenizer(
      'Answer directly.\nThe capital of Mongolia is Ulaanbaatar.\nThe capital of Iceland is Reykjavik.\nThe capital of Australia is',
      return_tensors='pt',
  )
  inputs = inputs.to(model.device)
  pred = model.generate(**inputs, generation_config=generation_config)
  print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
  ```
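  A short follow-up sketch (not from the repo) that reuses the `model` and `tokenizer` above and wraps a question in the 0.5B/1.5B/7B prompt template listed earlier; the query text and generation settings are illustrative only:

  ```python
  query = "What are the common symptoms of iron-deficiency anemia?"
  prompt = f"User:{query}\nAssistant:"  # template for the 0.5B/1.5B/7B models

  inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
  pred = model.generate(**inputs, max_new_tokens=256, do_sample=False,
                        pad_token_id=tokenizer.pad_token_id)
  # Decode only the newly generated tokens, skipping the prompt.
  print(tokenizer.decode(pred[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True))
  ```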
(Optional) Custom Model as Base
```bash
cp /path/to/your/configuration_upcycling_qwen2_moe.py /path/to/src/variants/moe_initilization/configuration_upcycling_qwen2_moe.py
cp /path/to/your/modeling_upcycling_qwen2_moe.py /path/to/src/variants/moe_initilization/modeling_upcycling_qwen2_moe.py
cd /path/to/src/variants/moe_initilization
bash convert.sh
```
Full Fine-tuning on the Base Model
We take Apollo2-7B or Apollo-MoE-0.5B as examples:
- Download and extract data:
  - Download the Dataset and Benchmark first.
  - Extract the major- or minor-language data according to your needs:
  ```bash
  bash 0.extract_data.sh
  ```
- Prepare test and dev data for the specific model:
  - Create test data with the model-specific special tokens.
  ```bash
  bash "1.data_process_test&dev.sh"
  ```
- Prepare train data for the specific model (create tokenized data in advance):
  - You can adjust the data training order and the number of training epochs in this step.
  ```bash
  bash 2.data_process_train.sh
  ```
- Train the model:
  - If you want to train on multiple nodes, please refer to ./src/sft/training_config/zero_multi.yaml.
  ```bash
  bash 3.single_node_train.sh
  ```
- Evaluate your model: generate scores on the benchmark.
  ```bash
  bash 4.eval.sh
  ```
Please use the following citation if you intend to use our dataset for training or evaluation:
```bibtex
@misc{zheng2024efficientlydemocratizingmedicalllms,
      title={Efficiently Democratizing Medical LLMs for 50 Languages via a Mixture of Language Family Experts},
      author={Guorui Zheng and Xidong Wang and Juhao Liang and Nuo Chen and Yuping Zheng and Benyou Wang},
      year={2024},
      eprint={2410.10626},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2410.10626},
}
```