How to specify visible GPUs in full SFT / LoRA SFT?
Reminder
- [X] I have read the README and searched the existing issues.
Reproduction
Hi~
I have 8 GPUs on a single node. I tried to specify which GPUs to use, but it failed when using examples/full_multi_gpu/single_node.sh and examples/lora_multi_gpu/single_node.sh to fine-tune llama2.
- Full SFT: examples/full_multi_gpu/single_node.sh
Command: CUDA_VISIBLE_DEVICES=4,5,6,7 bash single_node.sh
Result: GPUs 0-3 were used
- LoRA SFT: examples/lora_multi_gpu/single_node.sh
I noticed there is a CUDA_VISIBLE_DEVICES setting in the script, so I changed its first line to: CUDA_VISIBLE_DEVICES=4,5,6,7 accelerate launch \
Command: bash single_node.sh
Result: GPUs 0-3 were used
Thanks for your help.
Expected behavior
No response
System Info
No response
Others
No response
Use deepspeed --include "localhost:0,1,2,3" in full_multi_gpu/single_node.sh.
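For reference, below is a minimal sketch of how the launcher line in full_multi_gpu/single_node.sh could look when the last four GPUs are wanted. The entry-point path and argument handling are assumptions, not the repository's exact script; keep whatever your copy of single_node.sh already passes. The reason to go through --include is that the DeepSpeed launcher assigns devices itself, so an externally exported CUDA_VISIBLE_DEVICES is not a reliable way to restrict it.

```bash
#!/bin/bash
# Sketch only: select GPUs 4-7 through the DeepSpeed launcher's --include flag
# instead of CUDA_VISIBLE_DEVICES. "src/train_bash.py" is a placeholder for the
# training entry point used by the original script; its original fine-tuning
# arguments are forwarded via "$@" (or can be pasted back in here).

deepspeed --include "localhost:4,5,6,7" \
    src/train_bash.py \
    "$@"
```

With --include (or its counterpart --exclude), the launcher spawns one worker per listed device, so no separate --num_gpus setting should be needed for this case.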
Thanks for your reply!
Is the setting in lora_multi_gpu/single_node.sh correct if I need to use the last 4 GPUs for LoRA SFT? CUDA_VISIBLE_DEVICES=4,5,6,7 accelerate launch \
It should be right.
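A minimal sketch of how the top of lora_multi_gpu/single_node.sh could look with that change is shown below. The config-file path and entry point are assumptions; keep the ones from your copy of the script. accelerate launch inherits CUDA_VISIBLE_DEVICES from the environment, so prefixing the launch line pins the run to GPUs 4-7; if your accelerate config sets num_processes, make sure it matches the number of visible GPUs (4 here).

```bash
#!/bin/bash
# Sketch only: accelerate respects CUDA_VISIBLE_DEVICES, so prefixing the launch
# line restricts LoRA SFT to physical GPUs 4-7. The config file and entry point
# below are placeholders for the paths used in the original script.

CUDA_VISIBLE_DEVICES=4,5,6,7 accelerate launch \
    --config_file examples/accelerate/single_config.yaml \
    src/train_bash.py \
    "$@"
```

Note that inside the launched processes the four visible devices are renumbered as cuda:0-3, so training logs that mention devices 0-3 do not necessarily mean the wrong physical GPUs are in use; nvidia-smi shows the physical indices.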