
CodeUp: A Multilingual Code Generation Llama2 Model with Parameter-Efficient Instruction-Tuning on a Single RTX 3090

6 CodeUp issues

I used the same prompt to generate a piece of code with CodeUp on my local system, but it produced this timeout error: ``` Traceback (most recent call...

Where can I find the codeup_190k.json file? I want to do the training with this data. Thanks.

python3 finetune.py --base_model='TheBloke/Dolphin-Llama2-7B-GPTQ' --data_path='data/codeup_19k.json' --num_epochs=10 --cutoff_len=512 --group_by_length --output_dir='./test-llama-2/7b' --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' --lora_r=16 --micro_batch_size
===================================BUG REPORT===================================
Welcome to bitsandbytes. For bug reports, please run python -m bitsandbytes and submit this information together with...
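For readers following this issue, here is a minimal sketch of the LoRA setup that the command-line flags above imply, written with Hugging Face `peft`. It is not the repository's exact `finetune.py`; the base model name is illustrative and any Llama-2-style checkpoint can be substituted.

```python
# Hedged sketch of the LoRA configuration implied by the flags above.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"  # assumption: substitute your own base model
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=16,                                                      # mirrors --lora_r=16
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],   # mirrors --lora_target_modules
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable
```

Because only the adapter weights receive gradients, this is what lets a 7B model be instruction-tuned on a single RTX 3090, as the project title describes.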

Here's the situation: my dataset is Chinese text plus code. I'm not sure whether it can be fine-tuned with your approach, but it seems worth a try.

Great job! What do you think about rebuilding the fine-tuning script and adding additional fields such as the programming language? What would an ideal data structure look like? I have similar...
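As a point of reference for this question, one possible record layout is sketched below: the usual Alpaca-style instruction/input/output triple plus a `language` field. The extra field name is hypothetical and is not part of the released codeup_19k.json schema.

```python
# Hedged sketch of one training record with a hypothetical per-language field.
record = {
    "instruction": "Write a function that reverses a string.",
    "input": "",
    "output": "def reverse_string(s):\n    return s[::-1]",
    "language": "python",  # hypothetical extra field for per-language filtering
}
```

A field like this would make it straightforward to filter or balance the dataset by language before fine-tuning.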

@juyongjiang Thank you for this great work! How can I fine-tune the model using less memory? I'm facing CUDA OOM while trying to fine-tune on Google Colab Pro with a T4 15...
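One common way to reduce memory here is to load the base model in 4-bit with bitsandbytes before attaching the LoRA adapter. The snippet below is a minimal sketch of that generic recipe, not the authors' exact setup; the model name is illustrative.

```python
# Hedged sketch: 4-bit quantized loading to cut GPU memory on a small card.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit NF4
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 to suit a T4
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",            # assumption: substitute your base model
    quantization_config=bnb_config,
    device_map="auto",
)
```

Combined with a small LoRA adapter, a short cutoff length, and a micro batch size of 1, this kind of setup can often fit a 7B model on a 15 GB T4, though results depend on sequence length and gradient settings.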