FastChat
Question on Fine-tuning
Hi @Michaelvll,
Thank you for releasing the code and model.
I want to fine-tune LLaMA on my local dataset with the FastChat framework.
At the beginning of fine-tuning, the loss is 0.53. Around iteration (step) 100, the loss spikes to nearly 13, and then decreases steadily.
Is this normal?
So long as the loss trends downward, it should be fine. You could try tweaking the learning rate to see if you can get better behavior.
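For illustration, here is a minimal sketch of how warmup and peak learning rate interact around the step where the spike was observed. It uses the standard `transformers` cosine-with-warmup scheduler; the stand-in model, step counts, and values below are assumptions to adapt, not FastChat defaults.

```python
import torch
from transformers import get_cosine_schedule_with_warmup

# Stand-in model/optimizer; in practice these come from your training setup.
model = torch.nn.Linear(16, 16)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)  # lower the peak LR if the loss spikes

total_steps = 1_000
scheduler = get_cosine_schedule_with_warmup(
    optimizer,
    num_warmup_steps=int(0.03 * total_steps),  # ~3% warmup is a common choice
    num_training_steps=total_steps,
)

# Inspect the learning rate near step 100, where the loss spike was reported.
for step in range(total_steps):
    optimizer.step()   # no-op here (no gradients); keeps the idiomatic step order
    scheduler.step()
    if step in (0, 30, 100, 500):
        print(f"step {step}: lr = {scheduler.get_last_lr()[0]:.2e}")
```

If the spike lines up with the end of warmup (i.e. the learning rate reaching its peak), lowering the peak value or lengthening warmup are natural first things to try.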
Closing, as this is not related to the development of the repo.
Please try to figure out the appropriate hyperparameters for your own dataset. Some HPO (hyperparameter optimization) is needed!
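If it helps, a tiny grid-search sketch over learning rate and warmup ratio might look like the following. `run_finetune` is a hypothetical placeholder for your actual training entry point (it is not a FastChat API), and its dummy metric exists only so the loop runs end to end.

```python
import itertools

def run_finetune(lr: float, warmup_ratio: float) -> float:
    # Hypothetical placeholder: replace with a call to your real training
    # job that returns the final eval loss for this configuration.
    return abs(lr - 2e-5) * 1e4 + abs(warmup_ratio - 0.03)  # dummy metric

# Small grid over plausible values; widen or refine it as results come in.
grid = itertools.product([1e-5, 2e-5, 5e-5], [0.03, 0.1])

best = min(grid, key=lambda cfg: run_finetune(*cfg))
print("best (lr, warmup_ratio):", best)
```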