FastChat
The <eos> token randomly pops out during inference, causing text generation to stop early.
Hello,
Several people have reported this issue with the v1.1 model; it did not occur with v1.0. Do you plan to fix this in the future?
Same question. Does anyone know how to stop the generation correctly?
Same issue. </s> randomly shows up in the middle of the assistant's answer.
I have also seen this several times. I suspect it is a problem with our training data, but I haven't found the root cause. Could you try some pre- or post-processing as a workaround for now?
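For reference, here is a minimal sketch of such a post-processing workaround: it either strips stray EOS markers from the decoded text or truncates at the first one. The `clean_output` helper and the literal token strings are assumptions for illustration, not part of FastChat's API; adjust them to whatever your serving code actually emits.

```python
# Sketch of a post-processing workaround for stray EOS markers in generated text.
# Token strings and helper name are assumptions; adapt to your setup.

STRAY_EOS_TOKENS = ("</s>", "<\\s>")  # literal EOS strings observed in outputs


def clean_output(text: str, stop_at_eos: bool = False) -> str:
    """Remove stray EOS markers from generated text.

    If stop_at_eos is True, truncate at the first marker instead of
    stripping it, mimicking the intended early-stop behavior.
    """
    if stop_at_eos:
        # Cut the answer at the first stray EOS marker, if any.
        cut = min(
            (i for i in (text.find(t) for t in STRAY_EOS_TOKENS) if i != -1),
            default=len(text),
        )
        return text[:cut].rstrip()
    # Otherwise just delete the markers and keep the rest of the answer.
    for token in STRAY_EOS_TOKENS:
        text = text.replace(token, "")
    return text


if __name__ == "__main__":
    sample = "Sure, here is the answer.</s> And some extra text."
    print(clean_output(sample))                    # strips the marker
    print(clean_output(sample, stop_at_eos=True))  # truncates at the marker
```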