alpaca-lora
How to improve training efficiency and shorten training time
I have two T4 GPUs on my machine and I want to improve training efficiency, because there is plenty of spare memory when I use the default params.
I tried raising batch_size to 256, but it doesn't seem to make any difference.
Maybe you can try DeepSpeed.
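If you go that route, here is a minimal sketch of what a DeepSpeed config could look like, assuming you adapt finetune.py's Trainer setup to accept it; the ZeRO stage-2 values below are illustrative assumptions, not something already in this repo:

```python
# Illustrative DeepSpeed config (assumption, not part of alpaca-lora):
# ZeRO stage 2 shards optimizer state and gradients across the two T4s.
ds_config = {
    "train_micro_batch_size_per_gpu": "auto",  # let HF fill in from TrainingArguments
    "gradient_accumulation_steps": "auto",
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},
}

# transformers.TrainingArguments accepts this dict (or a path to a JSON file)
# via its `deepspeed` argument; the script is then launched with the
# `deepspeed` launcher instead of plain `python`.
```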
@Tungsong you should increase the micro_batch_size param instead; batch_size mostly controls gradient accumulation, not how much work each GPU does per pass.
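For context, a rough sketch of how the two parameters interact in finetune.py (the default values below are what I believe the repo ships with):

```python
# How finetune.py derives gradient accumulation from the two batch-size params
# (defaults assumed from the repo).
batch_size = 128        # effective examples per optimizer step
micro_batch_size = 4    # examples per forward/backward pass on each GPU
gradient_accumulation_steps = batch_size // micro_batch_size  # 32 with these defaults

# Raising only batch_size (e.g. to 256) just adds more accumulation steps, so
# memory use and time per example barely change. Raising micro_batch_size
# (e.g. to 16 or 32) does more work per pass, which is what actually uses the
# spare T4 memory and shortens training.
```

So something like `python finetune.py --micro_batch_size 16` (keeping batch_size at its default) should use the spare memory and reduce the number of accumulation steps per update; exact values depend on what fits on your T4s.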