litgpt
Fine-tuning a model for code generation
Hello all, I am trying to fine-tune Falcon-7B on instructions related to code generation. The code I want to generate is in Taipy format (https://www.taipy.io/).
Here is a sample record from the instruction set (fields: instruction | input | output):

- instruction: Visualize the sales data over time in a line chart based on dates.
- input: (empty)
- output: `<|{transformed_data}|chart|type=lines|x=Date|y=Sales|>`
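For reference, this is how a record like the one above could be stored as Alpaca-style JSON, which is the instruction/input/output layout lit-gpt's data preparation generally expects. This is a minimal sketch; the filename `taipy_instructions.json` is just an illustrative choice.

```python
import json

# One sample record in instruction/input/output form (Alpaca-style).
records = [
    {
        "instruction": "Visualize the sales data over time in a line chart based on dates.",
        "input": "",
        "output": "<|{transformed_data}|chart|type=lines|x=Date|y=Sales|>",
    },
]

# Write the dataset to disk so a prepare script can pick it up.
with open("taipy_instructions.json", "w") as f:
    json.dump(records, f, indent=2)
```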
I tried to fine-tune using adapter but the result is disappointing. (https://github.com/Lightning-AI/lit-gpt/blob/main/finetune/adapter.py)
Whatever prompt I send, the response is just a link to dictionary chart images.
Should I use a different pre-trained model instead of text-focused models such as Falcon, RedPajama, etc.?
What needs to be done to revise the fine-tuning script to fit a code generation task?
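One thing worth checking when revising the script is how each record is turned into a training prompt. Below is a minimal sketch of an Alpaca-style template like the one lit-gpt's fine-tuning examples use; the function name `build_prompt` is illustrative, not lit-gpt's actual API.

```python
def build_prompt(instruction: str, inp: str = "") -> str:
    """Format one instruction record Alpaca-style (hypothetical helper)."""
    if inp:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{inp}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt(
    "Visualize the sales data over time in a line chart based on dates."
)
```

The key point is that the same template must be used at fine-tuning time and at inference time, otherwise the model sees prompts it was never trained on.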
Thank you