[Bug]: Banned EOS_TOKEN still stopping generation
### Your current environment

<details>
<summary>The output of `python env.py`</summary>

```text
Docker container: https://hub.docker.com/layers/alpindale/aphrodite-openai/v0.6.2/images/sha256-36d2ba5ad90b154ec9066a170e2569b2824e614b567c0bed73ecc2c2488d5480
```

</details>

### 🐛 Describe the bug
When you ban the EOS_TOKEN using custom_token_bans, there is still a chance for the AI to generate an EOS_TOKEN. Because the token bans are only checked after the stop check, the custom_token_bans entry for EOS_TOKEN is ignored and the AI stops generating, so a "stop" finish reason is sent back instead of "length".
Expected behaviour: when EOS_TOKEN is included in custom_token_bans, the ignore_eos flag should automatically be set to True.
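The expected behaviour could be sketched as the following check at sampling-parameter setup time. This is a hypothetical illustration of the proposed fix, not Aphrodite's actual code; the function and argument names are assumptions.

```python
# Hypothetical sketch of the proposed fix: a banned EOS token should
# imply ignore_eos, so the stop check can never fire on EOS.
def resolve_ignore_eos(eos_token_id: int,
                       custom_token_bans: list[int],
                       ignore_eos: bool) -> bool:
    """Return True if EOS should be ignored during the stop check."""
    if eos_token_id in custom_token_bans:
        return True
    return ignore_eos

# Banning EOS (id 2 here, purely illustrative) overrides the flag.
assert resolve_ignore_eos(2, [2, 15], ignore_eos=False) is True
# Without EOS in the ban list, the user's flag is respected.
assert resolve_ignore_eos(2, [15], ignore_eos=False) is False
```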
Workaround: set both ignore_eos and custom_token_bans to suppress the EOS_TOKEN.
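For clarity, a hedged sketch of a request applying that workaround against the OpenAI-compatible endpoint. The port, model name, and EOS token id are assumptions for your deployment; only the two sampling fields named above matter here.

```python
# Workaround sketch: pass BOTH ignore_eos and custom_token_bans so a
# banned EOS can never be selected, and the stop check never sees one.
import json
import urllib.request

payload = {
    "model": "my-model",             # assumption: your served model name
    "prompt": "Hello world! Hello",
    "max_tokens": 64,
    "ignore_eos": True,              # keeps the stop check from firing on EOS
    "custom_token_bans": [2],        # assumption: 2 is the model's EOS id
}
req = urllib.request.Request(
    "http://localhost:2242/v1/completions",   # adjust host/port as needed
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# resp = urllib.request.urlopen(req)  # uncomment against a live server
```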
That's a bit odd; which version are you using? FYI, custom token bans were disabled between 0.6.0 and 0.6.1.post1, and re-enabled only in 0.6.2 via #751.
If you're on the latest, have you tried min_tokens? It achieves the same result by banning EOS and stop strings until min_tokens tokens have been generated.
I am using a combination of 0.5.4-dev and 0.6.2; both exhibit the same behaviour.
I have not tried min_tokens, but the issue is that if I ban the EOS_TOKEN, it should never trigger a stop.
Another way to test this would be the following setup:

1. Ban the token "_world".
2. Set "world" as the stop word.
3. Have the AI generate a continuation of "Hello world! Hello world! Hello".
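The steps above can be simulated with a minimal generation-step sketch. This is not Aphrodite's sampler; all names are illustrative, and the point is only the reported ordering: the stop check runs before the token ban, so the banned continuation still ends generation with a "stop" finish reason.

```python
# Minimal simulation of the reported ordering bug: stop check first,
# token ban second, so the ban on the continuation is never reached.
def generate_step(text: str, next_token: str, banned: set[str],
                  stop_words: list[str]):
    candidate = text + next_token
    # 1) stop check happens first (the reported behaviour)
    for stop in stop_words:
        if stop in candidate[len(text):]:
            return text, "stop"
    # 2) the token ban is only consulted afterwards, too late
    if next_token in banned:
        return text, "banned"
    return candidate, None

text, reason = generate_step("Hello world! Hello world! Hello",
                             " world", banned={" world"},
                             stop_words=["world"])
# The ban on " world" never applies: the stop word fires first.
assert reason == "stop"
```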