Jackson.zhang

Results 7 comments of Jackson.zhang

> I still haven't successfully gotten this running yet, but I notice that using a _non-delta model_ only shows this error when including `--load-8bit`
>
> (running on mac)
>
> ...

> Have you solved this problem?

> Facing the same issue when running on cuda.
>
> need to reopen this issue @merrymercy
>
> edit: issue comes when using 8 bit quantization, due to the...
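
For context, a minimal repro sketch of the two invocations being compared; the model path and device flag below are assumptions, not values taken from the thread:

```
# Full-precision run of a non-delta (already merged) checkpoint: works.
python3 -m fastchat.serve.cli --model-path ./vicuna-13b-1.1 --device mps

# Same checkpoint with 8-bit quantization: triggers the error described above.
python3 -m fastchat.serve.cli --model-path ./vicuna-13b-1.1 --device mps --load-8bit
```

On CUDA the pattern is the same, just with `--device cuda` instead of `--device mps`.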

> You need to first convert the llama model weights to HF format. I fine-tuned based on this model: https://huggingface.co/eachadea/vicuna-13b-1.1. After fine-tuning, this command succeeds: python3 -m fastchat.serve.cli...
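
For reference, the conversion step mentioned above is usually done with the LLaMA-to-HF script that ships with the `transformers` source tree; the sketch below uses assumed paths, model size, and output names that do not appear in the thread:

```
# Convert the original LLaMA weights into the Hugging Face format.
python3 src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir ./llama-original --model_size 13B --output_dir ./llama-13b-hf

# A fine-tuned, merged (non-delta) checkpoint can then be served directly
# with the FastChat CLI (hypothetical output path shown here).
python3 -m fastchat.serve.cli --model-path ./vicuna-13b-1.1-finetuned
```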

Hello, where can I find the latest fine-tuning code?

> Try tuning the hyperparameters; I also just wrote the code and haven't trained or tested it yet.