llm-foundry
How to use composer to train mpt-7b on a single gpu?
My server has 8 GPUs, but I want to test on a single GPU. I set `num_workers=1` in the yaml file, but every time I run the command `composer train.py yamls/mpt/finetune/try.yaml model.loss_fn=torch_crossentropy`, 8 ranks are still launched.
Hello @LisaWang0306, thanks for the question! `num_workers` actually controls the number of CPU workers used for the dataloader. To use just one GPU, run something like:
composer -n 1 train.py yamls/mpt/finetune/try.yaml model.loss_fn=torch_crossentropy
The `-n 1` flag tells composer to use only 1 rank. By default, composer will use all available GPUs. See `composer --help` for more information.
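For reference, here is a minimal sketch of where `num_workers` typically lives in a finetuning YAML. The field names below follow the usual llm-foundry finetune configs and the dataset is just an example; your `try.yaml` may differ. The point is that `num_workers` sits under the dataloader section and only affects CPU data-loading processes, not the number of GPU ranks.

```yaml
# Hypothetical excerpt of yamls/mpt/finetune/try.yaml -- field names assumed
# to match the usual llm-foundry finetune configs; adjust to your own file.
train_loader:
  name: finetuning
  dataset:
    hf_name: tatsu-lab/alpaca   # example dataset, replace with your own
    max_seq_len: 2048
    shuffle: true
  num_workers: 1   # CPU workers for the dataloader -- does NOT limit GPU ranks
```

The number of GPU ranks is controlled only by the launcher, e.g. `composer -n 1 ...` as shown above.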
Thanks very much for your reply! I will try soon.
Hi @LisaWang0306, closing this for now. Please re-open if you run into any issues!