
How to fine-tune CodeGen?

Open smith-co opened this issue 3 years ago • 18 comments

I would like to fine-tune the CodeGen model. Can you provide any documentation in this regard?

smith-co avatar Jun 09 '22 05:06 smith-co

+1

shmuelhizmi avatar Jun 12 '22 07:06 shmuelhizmi

The converted PyTorch models can be fine-tuned similarly to other causal LMs in HuggingFace.

See tutorials like http://reyfarhan.com/posts/easy-gpt2-finetuning-huggingface/.
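For example, a minimal sketch of that approach with transformers (the checkpoint name, dataset handling, and hyperparameters below are illustrative, not a recommended recipe):

```python
# Minimal sketch: treat the converted checkpoint as an ordinary causal LM.
# Checkpoint name and hyperparameters are illustrative only.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")
model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-350M-mono")
tokenizer.pad_token = tokenizer.eos_token  # the tokenizer has no pad token by default

# `train_dataset` is assumed to be a datasets.Dataset with a "text" column
# holding your code samples, tokenized e.g. via:
# train_dataset = train_dataset.map(
#     lambda b: tokenizer(b["text"], truncation=True, max_length=1024),
#     batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)  # causal LM: labels come from the inputs

args = TrainingArguments(
    output_dir="codegen-finetuned",
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=1e-5,
    num_train_epochs=1,
    fp16=True,
)

# trainer = Trainer(model=model, args=args, train_dataset=train_dataset,
#                   data_collator=collator)
# trainer.train()
```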

enijkamp avatar Jun 21 '22 20:06 enijkamp

Would you be releasing training code for the original models? It would be nice to try some runs on TPU-v3s (if possible).

TheodoreGalanos avatar Jun 25 '22 03:06 TheodoreGalanos

I think this script might help with fine-tuning:

https://colab.research.google.com/drive/13dZVYEOMhXhkXWfvSMVM1TTtUDrT6Aeh?usp=sharing#scrollTo=vCPohrZ-CTWu

thisisanshgupta avatar Jun 26 '22 06:06 thisisanshgupta

@TheodoreGalanos Working on a release of the JAX training code. I trained the models on TPU-v4 and have to resolve a blocker for v3.

enijkamp avatar Jun 28 '22 20:06 enijkamp

@enijkamp @thisisanshgupta I am checking the link you have shared.

Still, I think it would greatly help everyone if fine-tuning steps could be provided in the repo. 🙏

smith-co avatar Jun 28 '22 22:06 smith-co

I for one would appreciate any code/directions needed to run things on a TPU-v4. Great work all!

Ontopic avatar Jul 13 '22 07:07 Ontopic

@thisisanshgupta @Ontopic Yes, I'm working on the release of my training library for TPU-v3/v4 and will keep you posted.

enijkamp avatar Jul 13 '22 07:07 enijkamp

Hello @enijkamp, thank you for your work. Looking forward to some fine-tuning instructions and code.

Currently, I have tried to fine-tune it as if it were GPT-2, but I am running into issues where the model's quality degrades significantly.

Is there any particular way the data has to be structured for fine-tuning? Currently, I am just concatenating together the prompts and code as follows:

```python
def xyz():
    """abc"""
    code()

def xyz():
    """abc"""
    code()
```
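For concreteness, my preprocessing is roughly the sketch below (checkpoint name and sample contents are placeholders; part of my question is whether samples should instead be separated with the tokenizer's EOS token, as tried here):

```python
# Rough sketch of the concatenation described above. Checkpoint name and
# sample contents are placeholders.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-350M-mono")

samples = [
    'def xyz():\n    """abc"""\n    code()\n',
    'def xyz():\n    """abc"""\n    code()\n',
]

# Unclear to me whether samples should just be joined with blank lines (as
# above) or separated with the tokenizer's EOS token, as tried here.
text = tokenizer.eos_token.join(samples)
tokens = tokenizer(text, return_tensors="pt")
```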

tlkh avatar Aug 17 '22 05:08 tlkh

@smith-co @thisisanshgupta @tlkh

For torch, I wrote up a minimal example in deepspeed, which can train the 16B on a ~24 GB gpu. You would need to sanity test this, optimize the configuration, plug in the data loader, and save the weights to disk: https://github.com/salesforce/CodeGen/blob/main/jaxformer/hf/train_deepspeed.py
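For orientation, the kind of ZeRO-3 + CPU-offload setup that makes this possible looks roughly like the sketch below; the values are illustrative and not the exact configuration used in the script:

```python
# Illustrative DeepSpeed ZeRO stage-3 config with CPU offload: parameters and
# optimizer state are kept in host memory so a 16B model can fit on a single
# ~24 GB GPU. Values are examples, not the script's exact settings.
ds_config = {
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 16,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 3,
        "offload_param": {"device": "cpu", "pin_memory": True},
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-5}},
}

# import deepspeed
# model_engine, optimizer, _, _ = deepspeed.initialize(model=model, config=ds_config)
```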

For jax, the training library is undergoing sanity checks on TPU-v3 and should be released soon.

enijkamp avatar Aug 17 '22 05:08 enijkamp

@smith-co @thisisanshgupta @tlkh @Ontopic @TheodoreGalanos @shmuelhizmi A first release of the training code for TPU-v3/v4 is here:

https://github.com/salesforce/jaxformer

enijkamp avatar Sep 29 '22 20:09 enijkamp

@enijkamp I want to fine-tune the model with my own code data. How should I build the dataset? Are there any requirements for the dataset format, does the data need to be labeled, and if so, in what format? Could you give some guidance or examples? Thanks!

zhangybuaa avatar Nov 02 '22 07:11 zhangybuaa

> @smith-co @thisisanshgupta @tlkh
>
> For torch, I wrote up a minimal example in deepspeed, which can train the 16B on a ~24 GB gpu. You would need to sanity test this, optimize the configuration, plug in the data loader, and save the weights to disk: https://github.com/salesforce/CodeGen/blob/main/jaxformer/hf/train_deepspeed.py
>
> For jax, the training library is undergoing sanity checks on TPU-v3 and should be released soon.

Besides the VRAM, how much RAM would be required to train the model?

calix avatar Nov 18 '22 17:11 calix

@enijkamp, or anyone who has used jaxformer to fine-tune on TPU-v4: what is the approximate cost?

glicerico avatar Jan 28 '23 01:01 glicerico

@glicerico Roughly speaking, cost is a function of the size of the model and data. How much data do you have? Which model do you want to fine-tune?

enijkamp avatar Jan 28 '23 20:01 enijkamp

@enijkamp, I am trying to reproduce the work by Shin and Van Durme, who used a few hundred (sentence, parse) pairs to fine-tune Codex for semantic parsing. I would like to do this with CodeGen. Seeing your results, I would probably want to fine-tune the 16B model.
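For scale, it would only be a few hundred short prompt/completion strings; as a working assumption (my own format, not the one from Shin and Van Durme), I would serialize them roughly like this:

```python
# Working assumption for turning (sentence, parse) pairs into causal-LM
# training text; the layout and the example pair are placeholders, not the
# format from Shin and Van Durme.
pairs = [
    ("book a table for two at 7 pm",
     '(CreateReservation (partySize 2) (time "19:00"))'),
]

examples = []
for sentence, parse in pairs:
    # Natural-language utterance as the prompt, target parse as the completion.
    examples.append(f"# utterance: {sentence}\n{parse}\n")

train_text = "\n".join(examples)
```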

glicerico avatar Jan 30 '23 04:01 glicerico

@enijkamp: I want to fine-tune the mono model. Can you please share the dataset format for Python and detailed steps or a notebook?

srnsrn120 avatar Feb 01 '23 14:02 srnsrn120

> @glicerico Roughly speaking, cost is a function of the size of the model and data. How much data do you have? Which model do you want to fine-tune?

Is there an easier script template without DeepSpeed for fine-tuning CodeGen (350M)? Also, is the data format the same as for other pre-trained models like CodeT5 or CodeBERT? Looking forward to the reply.

watreyoung avatar Jun 09 '23 15:06 watreyoung