Could I use a previously fine-tuned model to continue my fine-tuning?
Great work! I have a few questions:
- Could I specify a number of epochs instead of 'train_iters' in config.yaml?
- Could I load a previously fine-tuned model and continue fine-tuning from it? If so, which parameter should I specify in config.yaml?
Looking forward to your answers.
Best wishes.
- You can just set train_iters = epochs * dataset_size (see the sketch below).
- Just set "load" to the previously fine-tuned model's directory path.
Sincere thanks for your swift feedback!
By the way, is the random seed frozen when running "bash inference.sh"? If so, can I change the seed by adding "--seed $RANDOM" to the script (the way "finetune_single_gpu.sh" does)?
Best wishes.
Setting a fixed random seed only makes the script produce the same result every time it runs; it cannot give you different results across multiple runs of the same script. That follows from the nature of a random seed.
I'm a bit confused. In fact, I want to obtain diverse results from the same prompt.
1) Do you mean that setting "--seed $RANDOM" in inference.sh and then running the script multiple times would give me diverse results?
2) And that by specifying an exact seed, like "--seed 13131313", I would get the same results even across multiple runs?
Is that right?
Yes, you are right.
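For concreteness, a minimal sketch of why the two cases differ; it only demonstrates how bash expands $RANDOM, not the actual command line inside inference.sh:

```bash
#!/usr/bin/env bash
# $RANDOM expands to a new pseudo-random integer (0..32767) each time it is
# evaluated, so every invocation of the script passes a different seed:
echo "--seed $RANDOM"      # e.g. --seed 17492
echo "--seed $RANDOM"      # e.g. --seed 3031  (different on each run)

# A hard-coded seed is identical on every run, which makes the output
# reproducible rather than diverse:
echo "--seed 13131313"     # always --seed 13131313
```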