dalai
Nonsense answers with Alpaca 7B and 13B
Using the specific 'how much wood' example from the README, I get this kind of forum-post response every time (and similar junk for many other types of questions).
I've removed and reinstalled a few times. Running on an M1 Pro.
Same, I keep getting answers like this: "Posted by: jonny (157.240-98-36) on 09/06/02 at 11:58 AM EST Is this site really for black people only? Are you guys racist or something??? Just wondering because i see alot of things like "n*****s" and shit in here, that's kinda weird. But anyway....i was just wonderin how do yall feel about the Iraq war going on right now Re: what up my N*****S by jonny (157-240-98-36) on 09/07/02 at 08:36 AM EST [end of text]"
Like wtf?
Set temp to 0.9, top_k to 420, and top_p to 0.9 or 0.95.
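For reference, here's roughly how those parameters could be passed through dalai's Node API — a minimal sketch, not a definitive recipe. The option names (temp, top_k, top_p) mirror llama.cpp's sampler flags, and the "alpaca.7B" model id and exact keys are assumptions that may differ across dalai versions:

```ts
// A minimal sketch of passing the suggested sampling parameters through
// dalai's Node API. The option names (temp, top_k, top_p) mirror
// llama.cpp's sampler flags; treat the exact keys and the "alpaca.7B"
// model id as assumptions that may differ across dalai versions.
const Dalai = require("dalai"); // dalai ships no type definitions

new Dalai().request(
  {
    model: "alpaca.7B",
    prompt: "How much wood would a woodchuck chuck?",
    temp: 0.9,   // higher temperature flattens the token distribution
    top_k: 420,  // sample only from the 420 most likely tokens
    top_p: 0.9,  // nucleus sampling cutoff (try 0.9 or 0.95)
  },
  (token: string) => process.stdout.write(token)
);
```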
That didn't help, unfortunately.
Oh, you're using Alpaca. Try using the following prompt:
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction: put your instruction here
### Response:
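If you're sending prompts programmatically, wrapping each instruction in that template with a small helper avoids typos. A minimal sketch, assuming you build the prompt yourself — the helper name is made up and is not part of dalai or alpaca.cpp:

```ts
// A small sketch of building the Alpaca prompt template around an
// arbitrary instruction. The helper name is illustrative, not part of
// dalai or alpaca.cpp.
function alpacaPrompt(instruction: string): string {
  return [
    "Below is an instruction that describes a task. " +
      "Write a response that appropriately completes the request.",
    "",
    "### Instruction:",
    instruction,
    "",
    "### Response:",
  ].join("\n");
}

// The result can be passed as the `prompt` field in a dalai request.
console.log(alpacaPrompt("How much wood would a woodchuck chuck?"));
```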
That definitely helps with accuracy; however, it's still a far cry from the GIF shown in the README.
Not sure then; mine works fine, although I rarely use 7B. I usually use 13B and 30B.
Have you tried a model trained on the Cleaned Dataset?
Models trained on the original dataset sometimes have issues in their responses.
> Have you tried a model trained on the Cleaned Dataset?
No, and sadly I can't, because I don't have enough VRAM. If someone else fine-tunes it and creates the LoRA weights, then I can merge them for alpaca.cpp/llama.cpp.