Generator Object

Open Alperemrehas opened this issue 1 year ago • 6 comments

I am having an issue with generate.py. I cloned the repo and then fine-tuned with: python finetune.py --base_model="decapoda-research/llama-7b-hf" --data-path "alpaca_data_tr.json"

I saved the output to the ./lora_alpaca directory. But whenever I try to generate a response: python generate.py --base_model='decapoda-research/llama-7b-hf' --lora_weights='./lora-alpaca/' --load_8bit

The Instruction is: Tell me about alpacas.

The output is: `<generator object main.<locals>.evaluate at 0x7fba364f23b0>`

I am working in an offline environment and downloaded the decapoda-research/llama-7b-hf from https://huggingface.co/decapoda-research/llama-7b-hf/tree/main

I already checked whether the issue was with Gradio by requesting the response directly from the terminal, and I got the same result.
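For context, the repr in the output above is what Python prints when a generator object, rather than a string, reaches the print/display layer: any function containing `yield` is a generator function, so calling it returns a generator instead of executing the body. A minimal sketch (the `evaluate` below is a hypothetical stand-in, not the real one from generate.py):

```python
def evaluate(instruction):
    # Stand-in for generate.py's evaluate(); the real function builds a
    # prompt, runs the model, and decodes the output. The mere presence
    # of `yield` makes this a generator function.
    yield f"Response to: {instruction}"

result = evaluate("Tell me about alpacas.")
print(result)        # <generator object evaluate at 0x...>
print(next(result))  # Response to: Tell me about alpacas.
```

Iterating the generator (e.g. with `next()` or a `for` loop) is what actually runs the body and produces the text.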

Alperemrehas avatar Jul 21 '23 11:07 Alperemrehas

Same issue

zl-liu avatar Aug 08 '23 04:08 zl-liu

Use return output instead of yield. That works for me.

GasolSun36 avatar Aug 21 '23 13:08 GasolSun36

> return output instead of yield. That works for me.

Didn't work for me, but thank you.

Alperemrehas avatar Sep 04 '23 08:09 Alperemrehas

Same issue

Did you solve your issue?

Alperemrehas avatar Sep 04 '23 08:09 Alperemrehas

Remove the code for the case where stream_output is True, and replace yield prompter.get_response(output) with return prompter.get_response(output) (the gr.Interface() code is also removed). That works for me. However, it is really strange, because that code path should be unused since the parameter stream_output is set to False.
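The fix described above can be sketched as follows. This is a simplified, hypothetical shape of generate.py's evaluate(); `run_model` and `get_response` here are stand-ins for the actual model call and Prompter.get_response:

```python
def run_model(instruction):
    # Placeholder for tokenize -> model.generate -> decode.
    return f"### Response:\nAlpacas are domesticated camelids. ({instruction})"

def get_response(output):
    # Placeholder for Prompter.get_response: strip the prompt template.
    return output.split("### Response:\n")[-1]

def evaluate(instruction, stream_output=False):
    output = run_model(instruction)
    # The streaming branch (which used `yield`) is removed entirely.
    # This matters because Python treats ANY function containing `yield`
    # as a generator function, even if the `yield` sits behind an
    # `if stream_output:` that is never taken -- so keeping the branch
    # would make evaluate() return a generator object regardless.
    return get_response(output)

print(evaluate("Tell me about alpacas."))
```

This also explains the "strange" behaviour: the streaming code does not have to execute to cause the bug; its mere presence in the function body changes what calling evaluate() returns.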

CadenShi avatar Sep 30 '23 03:09 CadenShi

@CadenShi Exactly! Your little fix works, but I cannot understand why the stream_output path was being taken in the first place even when it was set to False. Thanks anyway!

PoptropicaSahil avatar Oct 23 '23 07:10 PoptropicaSahil