Llama3-70B LoRA multi GPU #802
recipes/configs/llama3/70B_lora.yaml
@@ -0,0 +1,100 @@
# Config for multi-device LoRA in lora_finetune_distributed.py
# using a Llama3 70B model
#
# This config assumes that you've run the following command before launching
# this run:
#   tune download meta-llama/Meta-Llama-3-70b --hf-token <TOKEN> --output-dir /tmp/Meta-Llama-3-70b --ignore-patterns "original/consolidated*"
#
# This config needs 8 GPUs to run:
#   tune run --nproc_per_node 8 lora_finetune_distributed --config recipes/configs/llama3/70B_lora.yaml
#

# Model Arguments
model:
  _component_: torchtune.models.llama3.lora_llama3_70b
  lora_attn_modules: ['q_proj', 'k_proj', 'v_proj']
  apply_lora_to_mlp: False
  apply_lora_to_output: False
  lora_rank: 16
  lora_alpha: 32
Comment on lines +16 to +19 (the LoRA settings above):

How are these defaults set? Are rank and alpha higher than our 7/8B defaults because the embedding dim is larger?

Reply: I mostly copied these from https://github.com/pytorch/torchtune/blob/main/recipes/configs/llama2/70B_lora.yaml. I'm not sure whether rank and alpha being higher is because of the embedding dim being larger - what do those have to do with each other?
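For context on that question (an editorial note, not part of the PR discussion): LoRA adds an update of the form (alpha / rank) * (B @ A) to a frozen weight, so alpha only sets the scale of the update relative to the rank, while the number of trainable parameters per adapted projection grows with both the rank and the layer dimensions. A minimal sketch, assuming hidden sizes of 4096 for Llama3-8B and 8192 for Llama3-70B and a rank-8 / alpha-16 baseline for the smaller configs:

```python
# Editorial sketch, not code from this PR: rough trainable-parameter count for
# a LoRA adapter on one linear layer (e.g. q_proj), to show how a larger rank
# tracks the wider layers of the 70B model.
def lora_params(d_in: int, d_out: int, rank: int) -> int:
    # LoRA adds A: (rank, d_in) and B: (d_out, rank); the frozen base weight is untouched.
    return rank * d_in + d_out * rank

print(lora_params(4096, 4096, rank=8))    # ~65K trainable params per q_proj, 8B-class model
print(lora_params(8192, 8192, rank=16))   # ~262K trainable params per q_proj, 70B-class model
```

Note that alpha / rank is 2 in both cases (16/8 vs. 32/16 here), so the effective scaling of the update is unchanged; the larger rank mainly buys extra adapter capacity for the wider layers.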

tokenizer:
  _component_: torchtune.models.llama3.llama3_tokenizer
  path: /tmp/Meta-Llama-3-70b/original/tokenizer.model

checkpointer:
  _component_: torchtune.utils.FullModelHFCheckpointer
  checkpoint_dir: /tmp/Meta-Llama-3-70b
  checkpoint_files: [
    model-00001-of-00030.safetensors,
    model-00002-of-00030.safetensors,
    model-00003-of-00030.safetensors,
    model-00004-of-00030.safetensors,
    model-00005-of-00030.safetensors,
    model-00006-of-00030.safetensors,
    model-00007-of-00030.safetensors,
    model-00008-of-00030.safetensors,
    model-00009-of-00030.safetensors,
    model-00010-of-00030.safetensors,
    model-00011-of-00030.safetensors,
    model-00012-of-00030.safetensors,
    model-00013-of-00030.safetensors,
    model-00014-of-00030.safetensors,
    model-00015-of-00030.safetensors,
    model-00016-of-00030.safetensors,
    model-00017-of-00030.safetensors,
    model-00018-of-00030.safetensors,
    model-00019-of-00030.safetensors,
    model-00020-of-00030.safetensors,
    model-00021-of-00030.safetensors,
    model-00022-of-00030.safetensors,
    model-00023-of-00030.safetensors,
    model-00024-of-00030.safetensors,
    model-00025-of-00030.safetensors,
    model-00026-of-00030.safetensors,
    model-00027-of-00030.safetensors,
    model-00028-of-00030.safetensors,
    model-00029-of-00030.safetensors,
    model-00030-of-00030.safetensors,
  ]
Comment on lines +28 to +59 (the checkpoint_files list):

Lol, we really need to have a way to generate these programmatically.
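As an aside on that comment (an editorial sketch, not something this PR adds): the shard names follow the fixed Hugging Face model-XXXXX-of-YYYYY.safetensors pattern, so a list like the one above could in principle be built in a few lines of Python. The helper below is hypothetical and not part of torchtune:

```python
# Hypothetical helper, not part of torchtune: generate the sharded safetensors
# file names used by Hugging Face checkpoints, e.g. model-00001-of-00030.safetensors.
def shard_filenames(num_shards: int, stem: str = "model", ext: str = "safetensors") -> list[str]:
    return [f"{stem}-{i:05d}-of-{num_shards:05d}.{ext}" for i in range(1, num_shards + 1)]

print(shard_filenames(30)[0])    # model-00001-of-00030.safetensors
print(shard_filenames(30)[-1])   # model-00030-of-00030.safetensors
```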
  recipe_checkpoint: null
  output_dir: /tmp/Meta-Llama-3-70b
  model_type: LLAMA3
resume_from_checkpoint: False

# Dataset and Sampler
dataset:
  _component_: torchtune.datasets.alpaca_dataset
  train_on_input: True
seed: null
shuffle: True
batch_size: 2

# Optimizer and Scheduler
optimizer:
  _component_: torch.optim.AdamW
  weight_decay: 0.01
  lr: 3e-4
lr_scheduler:
  _component_: torchtune.modules.get_cosine_schedule_with_warmup
  num_warmup_steps: 100

loss:
  _component_: torch.nn.CrossEntropyLoss

# Training
epochs: 1
max_steps_per_epoch: null
gradient_accumulation_steps: 1

# Logging
output_dir: /tmp/lora_finetune_output
metric_logger:
  _component_: torchtune.utils.metric_logging.DiskLogger
  log_dir: ${output_dir}
log_every_n_steps: null

# Environment
device: cuda
dtype: bf16
enable_activation_checkpointing: True
Comment on the config file:

Not sure what the right place to do this is, but do we want to mention memory requirements to run 70B somewhere?

Reply: Yeah, maybe we can add it to the table?
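On that last point, a rough back-of-envelope estimate (an editorial sketch with assumed numbers, not figures from the PR): with the base weights kept in bf16 and fully sharded across the 8 GPUs the recipe requires, the frozen weights alone work out to roughly 16 GiB per GPU, before activations, LoRA parameters, gradients, and optimizer state.

```python
# Editorial back-of-envelope estimate, not a measurement from this PR.
# Assumes ~70B frozen base parameters in bf16, fully sharded across 8 GPUs
# (FSDP-style); activations, LoRA params, grads, and optimizer state add more.
params = 70e9              # approximate base parameter count
bytes_per_param = 2        # bf16
num_gpus = 8

weights_gib_per_gpu = params * bytes_per_param / num_gpus / 1024**3
print(f"~{weights_gib_per_gpu:.1f} GiB of frozen weights per GPU")  # ~16.3 GiB
```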