【Question】Question about initial fine-tuning loss
Hello, I recently read a blog post about Colossal-AI supporting LoRA fine-tuning of DeepSeek-V3; it is great work for the open-source community. But I have a question about the picture in that post.
My question is: why is the initial loss in this picture so high? Is it training from scratch, or is there some other reason?
Since LoRA's A matrix is initialized randomly but its B matrix is initialized to zero, the adapter contributes nothing at step 0, so I would expect the initial loss to match the base model's loss.
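For reference, here is a minimal sketch of the standard LoRA initialization convention (A random, B zero, as in the original LoRA paper and Hugging Face PEFT). The `LoRALinear` class and its hyperparameters are illustrative, not Colossal-AI's actual implementation:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA wrapper: y = W x + (alpha / r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.scaling = alpha / r
        # Standard LoRA init: A is random (Kaiming uniform), B is all zeros,
        # so B @ A == 0 and the wrapped layer starts identical to the base layer.
        self.lora_A = nn.Parameter(torch.empty(r, base.in_features))
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        nn.init.kaiming_uniform_(self.lora_A, a=5 ** 0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# At step 0 the adapter output is exactly zero, so the wrapped model's
# outputs (and therefore its loss) match the frozen base model's.
base = nn.Linear(64, 64)
wrapped = LoRALinear(base)
x = torch.randn(2, 64)
assert torch.allclose(base(x), wrapped(x))
```

This is why the starting loss of a LoRA fine-tune should equal the pretrained model's loss on the SFT data, rather than a from-scratch loss.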
This is the R1 SFT bug; the loss should start from around 1.
https://zhuanlan.zhihu.com/p/26682456562
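One way to sanity-check this claim is to evaluate the frozen base model on a batch of the SFT data before training; the step-0 fine-tuning loss should be close to that value. A rough sketch (the model name `gpt2` and the sample text are placeholders standing in for the actual base model and dataset):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder model: substitute the actual base checkpoint being fine-tuned.
name = "gpt2"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name).eval()

# Placeholder text: substitute a real sample from the SFT dataset.
batch = tok("Example SFT sample text.", return_tensors="pt")
with torch.no_grad():
    out = model(**batch, labels=batch["input_ids"])
print(f"base-model cross-entropy: {out.loss.item():.2f}")
```

For a well-trained model on in-distribution text, this cross-entropy is typically a low single-digit value, which is consistent with the loss starting near 1 rather than at a from-scratch level.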