llama-lora-fine-tuning
training stuck
As shown in the screenshot below, training did not start for a long time. Is this expected, or is it because 8-bit matmul is slow on the V100?
Thanks for your help.
Nothing to do with that warning; on a V100, loading the tokenizer takes half an hour.
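If it helps to confirm that the tokenizer load is what's taking so long, a quick timing check along these lines can narrow it down (a minimal sketch, not the repo's actual code; the model path is a placeholder):

```python
# Sketch: time the tokenizer load separately from training startup.
# "path/to/llama-7b-hf" is a placeholder for your local model directory.
import time
from transformers import AutoTokenizer

start = time.time()
# use_fast=True asks for the Rust-based fast tokenizer, which typically
# loads and tokenizes much faster than the pure-Python implementation.
tokenizer = AutoTokenizer.from_pretrained("path/to/llama-7b-hf", use_fast=True)
print(f"Tokenizer loaded in {time.time() - start:.1f} s")
```

If this step alone accounts for most of the wait, the stall is unrelated to 8-bit matmul on the V100.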