Finetune_GPT-J_6B_8-bit

Fine-tuning GPT-J-6B on Colab or an equivalent PC GPU with your custom datasets: 8-bit weights with low-rank adaptors (LoRA)
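For readers new to the approach: the notebook keeps the quantized GPT-J weights frozen and trains small low-rank adapters alongside each linear layer. Below is a minimal sketch of that idea, assuming PyTorch; the `LoRALinear` class, rank, and scaling factor are illustrative assumptions, not the repo's actual code.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update: y = Wx + (BA)x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 32.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # only the adapter matrices are trained
        # Standard LoRA init: A is small random, B is zero, so training starts
        # from the unmodified base model.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base path uses the frozen weights (8-bit-dequantized in the notebook's
        # setting); the low-rank path adds the trainable correction.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

# Usage: wrap an existing layer; only lora_a/lora_b receive gradients.
layer = LoRALinear(nn.Linear(16, 16), rank=4)
out = layer(torch.randn(2, 16))
```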

Issues (3)

Hi @sleekmike, great work on the notebook. I just wanted to check on the possibility of fine-tuning Pythia 12B or any smaller variant. I have some specific use...

When I run this example in JupyterLab and start fine-tuning the codeparrot example, I get the following error message: RuntimeError: Output 0 of DequantizeAndLinearBackward is a view and is...
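This class of error occurs when an in-place operation is applied to a tensor that a custom autograd Function returned as a view of its input; PyTorch forbids it because the view/in-place handling would override the custom backward. A self-contained toy that reproduces the pattern and the usual fix (clone the Function's output before any in-place op); `ViewyIdentity` is an illustrative stand-in, not the notebook's `DequantizeAndLinear`.

```python
import torch

class ViewyIdentity(torch.autograd.Function):
    """Toy Function whose forward returns a view of its input, mimicking the
    pattern behind the DequantizeAndLinearBackward error."""

    @staticmethod
    def forward(ctx, x):
        return x[:]  # a view of x, which autograd flags

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output

x = torch.randn(4, requires_grad=True)
y = ViewyIdentity.apply(x)
# y += 1  # would raise: "Output 0 of ViewyIdentityBackward is a view and is being modified inplace..."
y = y.clone()  # detach from the view; in-place ops are now legal
y += 1
y.sum().backward()
```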

Hello! After some epochs I get RuntimeError: probability tensor contains either `inf`, `nan` or element < 0, and the saved model stops working.
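This failure means the probabilities handed to the sampler contained `inf`/`nan` before the draw; half-precision overflow during generation and a checkpoint corrupted by divergent training are common culprits. A hedged sketch of one defensive mitigation, casting to float32 and sanitizing logits before sampling; the function name and defaults are assumptions, not the notebook's code.

```python
import torch

def sample_next_token(logits: torch.Tensor, temperature: float = 0.8) -> torch.Tensor:
    logits = logits.float()  # avoid fp16 overflow turning probabilities into inf/nan
    # Replace any remaining non-finite values with large finite stand-ins.
    logits = torch.nan_to_num(logits, nan=-1e9, posinf=1e9, neginf=-1e9)
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, num_samples=1)
```

If the checkpoint itself has diverged (weights already `nan`), sanitizing logits only masks the problem; lowering the learning rate or adding gradient clipping during fine-tuning addresses the cause.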