Kushalamummigatti

Results: 8 issues by Kushalamummigatti

I am trying to fine-tune CodeLlama with the same approach as Llama 2, using the same fine-tuning script. I am not sure whether this is right, as the repo or blog not...

model-usage
fine-tuning

### Please check that this issue hasn't been reported before.
- [X] I searched previous [Bug Reports](https://github.com/OpenAccess-AI-Collective/axolotl/labels/bug) and didn't find any similar reports.

### Expected Behavior
Should be able to run...

bug

I have tried to fine-tune CodeGen 2B Mono on a 40 GB GPU (single card) with the sequence length set to 256. It gave a CUDA out-of-memory error. What is the GPU...

I have tried to fine-tune a 2B model on a 40 GB GPU and ran into an out-of-memory error. Any suggestions for fine-tuning 2B and larger models?
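For anyone hitting the same wall, here is a minimal sketch of memory-saving Hugging Face Trainer settings that often let a ~2B model fit on a single 40 GB card. The batch size, accumulation steps, and optimizer choice are assumptions, not the settings used in the original run:

```python
# Hedged sketch: memory-saving TrainingArguments for a ~2B model on one 40 GB GPU.
# Output directory, batch size, and optimizer are illustrative assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="codegen-2b-finetune",   # hypothetical output path
    per_device_train_batch_size=1,      # smallest micro-batch
    gradient_accumulation_steps=16,     # keep an effective batch size of 16
    gradient_checkpointing=True,        # trade extra compute for less activation memory
    bf16=True,                          # half-precision compute on A100-class GPUs
    optim="adafactor",                  # lighter optimizer state than AdamW
    num_train_epochs=1,
    logging_steps=10,
)
```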

Trying to fine-tune the bigcode/starcoderbase model on an A100 compute instance with 2 GPUs (40 GB x 2, so 80 GB total). Finetune.py is slightly modified to load the model in 4-bit, adopt QLoRA, and...
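A minimal sketch of the kind of 4-bit + QLoRA setup described here, assuming Hugging Face transformers, bitsandbytes, and peft; the LoRA rank, alpha, and target modules are illustrative assumptions, not the exact contents of the modified Finetune.py:

```python
# Hedged sketch: load bigcode/starcoderbase in 4-bit and attach QLoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "bigcode/starcoderbase"

# Load base weights in 4-bit NF4 to cut weight memory roughly 4x vs fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # spreads layers across the two 40 GB A100s
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Prepare the quantized model for k-bit training, then add small trainable LoRA adapters.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16,                        # assumed rank
    lora_alpha=32,               # assumed scaling
    target_modules=["c_attn"],   # assumed attention projection in StarCoder-style blocks
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```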

What was the maximum sequence length used for fine-tuning StarCoder to produce StarChat Alpha? Was it done on a single GPU card or multiple cards? Please provide insights on...

What is the maximum token size for fine-tuning? Please provide insight on the hardware requirements for fine-tuning the model. Can I fine-tune with 4 GPUs of 16 GB each?