FastChat
gen_model_answer.py fails on a PEFT adapter model
I have a PEFT adapter for a finetuned Falcon-7b model. When using gen_model_answer.py, I get this error:

TypeError: PeftModelForCausalLM.generate() takes 1 positional argument but 2 were given

python gen_model_answer.py --model-path /falcon_finetuned/trained_model/ --model-id falcon_finetuned
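For context, the error comes from some peft releases defining PeftModelForCausalLM.generate() so that it accepts keyword arguments only, which means a positional model.generate(input_ids) call raises this TypeError while a keyword call works. Below is a minimal sketch of the difference outside of FastChat, assuming peft and transformers are installed; the hub id "tiiuae/falcon-7b" and the adapter path are assumptions based on the setup described above.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and attach the finetuned adapter (paths/ids assumed).
base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "/falcon_finetuned/trained_model/")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

inputs = tokenizer("Hello", return_tensors="pt")

# On the affected peft versions, generate() only accepts keyword arguments:
# model.generate(inputs.input_ids)   # positional -> TypeError: ... takes 1 positional argument but 2 were given
out = model.generate(input_ids=inputs.input_ids, max_new_tokens=20)
print(tokenizer.decode(out[0], skip_special_tokens=True))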
Were you able to fix it?
Not yet. I don't know why the inference works fine, but this doesn't. Are you having the same problem?
Yes, but I am running Llama 2.
Found this: https://github.com/huggingface/peft/issues/708. It helped fix this issue.
I will try that. Thank you.
I added the following code at https://github.com/lm-sys/FastChat/blob/main/fastchat/llm_judge/gen_model_answer.py#L97, and it works.
from peft import PeftModelForCausalLM

if isinstance(model, PeftModelForCausalLM):
    model = model.merge_and_unload()
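For context, merge_and_unload() folds the adapter weights into the base model and returns a plain transformers model, so the later generate() calls in the script no longer go through PeftModelForCausalLM.generate() and the keyword-only restriction does not apply.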