Benwei Lu
> hello, did you find the right source? upd: wrote script. you can find it in my fork

Thanks a lot, your work is amazing!
open `utils.py` and comment out lines 75, 76, 98, and 99, which correspond to

```python3
if not resume:
    copy_cur_env(work_dir, exp_path + '/' + exp_name + '/code', exception)
```
Same issue here
I think I might know why: I uploaded a paper with several formulas and charts and got an error, but when I uploaded a file with just words, it worked...
> You can try reducing max_tokens or use gpt-3.5-turbo
>
> ```
> completions = openai.ChatCompletion.create(
>     model="gpt-3.5-turbo",
>     temperature=0,
>     messages=[
>         {"role": "user", "content": prompt}
> ...
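The quoted snippet above is cut off. As a minimal sketch of what it is doing (assuming the pre-1.0 `openai` Python SDK; the `build_request` helper and the `max_tokens` default are my own illustration, not from the thread):

```python
def build_request(prompt, max_tokens=512):
    # Assemble the keyword arguments for openai.ChatCompletion.create.
    # Keeping max_tokens small is the suggested workaround for the
    # token-limit error discussed above.
    return dict(
        model="gpt-3.5-turbo",
        temperature=0,
        max_tokens=max_tokens,
        messages=[{"role": "user", "content": prompt}],
    )
```

You would then call something like `openai.ChatCompletion.create(**build_request(prompt))` with a valid API key configured.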
same feeling~
same issue here, but how?
> I am using LoRA to finetune the Qwen-VL model. There are about 10,000 VQA data samples I used for finetuning, but the loss of the final model is still high, I...
> A question for everyone: when doing LoRA fine-tuning based on finetune.py, does that count as instruction tuning or supervised fine-tuning? I've been going back and forth on this lately.

It is SFT (Supervised Fine-Tuning).
LoRA and QLoRA are indeed both SFT. As for instruction tuning, I haven't found any information yet; if you learn anything, please share. Thanks!