
Loss calculation always 0

Open sanipanwala opened this issue 11 months ago • 4 comments

Hello,

I'm trying to fine-tune the 34B model, but during fine-tuning the loss is always 0. I was able to fine-tune the 7B and 13B models without this problem, but not the 34B.

Let me know if I'm overlooking something, or please share any suggestions.

Thanks.

sanipanwala avatar Feb 27 '24 08:02 sanipanwala

Hi @sanipanwala, we don't provide support for fine-tuning in this repository. Which tools are you using for this? Are you sure they support the 34B model well? Does the exact same setting work for 7B and 13B? In any case, a loss of 0 at the start of training is a good indication that something's going wrong.
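
A quick sanity check, assuming a standard Hugging Face setup (`model` and `train_dataloader` below stand in for the objects in the actual fine-tuning script, which isn't shown here), is to confirm that the labels aren't fully masked and that the logits stay finite:

```python
# Hypothetical sanity check; `model` and `train_dataloader` are assumed
# to come from the existing fine-tuning script.
import torch

batch = next(iter(train_dataloader))

# The cross-entropy loss ignores positions labeled -100; if every
# position is masked, there is nothing for the model to learn from.
labels = batch["labels"]
print("unmasked label tokens:", (labels != -100).sum().item())

# Overflow in half precision can also produce non-finite logits,
# which can surface as a degenerate loss value.
with torch.no_grad():
    out = model(
        input_ids=batch["input_ids"].to(model.device),
        attention_mask=batch["attention_mask"].to(model.device),
    )
print("logits finite:", torch.isfinite(out.logits).all().item())
```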

jgehring avatar Feb 28 '24 07:02 jgehring

@jgehring I'm using the "codellama/CodeLlama-34b-hf" model and running a plain Python script, and yes, the same configuration works with 7B and 13B.
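
For context, a minimal sketch of the kind of script described, assuming the Hugging Face `Trainer` API (the actual script isn't shown in the issue, so the names and settings below are illustrative):

```python
# Illustrative sketch only; dataset preparation is omitted.
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_name = "codellama/CodeLlama-34b-hf"  # 7B/13B variants use the same code
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",  # shard the 34B weights across available GPUs
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1),
    train_dataset=train_dataset,  # tokenized dataset, prepared elsewhere
)
trainer.train()
```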

Thanks.

sanipanwala avatar Feb 28 '24 09:02 sanipanwala

@sanipanwala Hi, have you solved this problem yet?

I ran into the same problem when trying to PEFT fine-tune CodeLlama-7B (using LlamaForSequenceClassification): the loss is always 0 during fine-tuning.
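
For context, a minimal sketch of this kind of setup, assuming the standard `peft` and `transformers` APIs (the actual script isn't shown, so the hyperparameters and module names below are illustrative):

```python
# Minimal sketch, assuming the standard peft + transformers APIs;
# hyperparameters and names here are illustrative, not from the issue.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoTokenizer, LlamaForSequenceClassification

model_name = "codellama/CodeLlama-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # Llama models ship without a pad token

model = LlamaForSequenceClassification.from_pretrained(
    model_name,
    num_labels=2,
    torch_dtype=torch.bfloat16,
)
# The classification head reads the last non-pad token, so the model
# config must know which token id is used for padding.
model.config.pad_token_id = tokenizer.pad_token_id

lora_config = LoraConfig(
    task_type="SEQ_CLS",
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    modules_to_save=["score"],  # keep the freshly initialized head trainable
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```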

Thanks!

sssszh avatar Apr 03 '24 14:04 sssszh

Hi @sssszh ,

No, I haven't found any solution yet.

Thanks, Sani

sanipanwala avatar Apr 04 '24 03:04 sanipanwala