KoboldAI-Client
Bug: Escape sequences and special tokens get handled incorrectly for both user inputs and model outputs.
Description
From the model outputs, in #gametext, escape sequences such as \\n and \\r become a "New line".
From user inputs, in #gametext, escape sequences such as \n and \r, and special tokens such as </s>, all become a "New line".
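To make the distinction concrete, here is a minimal plain-Python illustration (not KoboldAI's actual code) of the difference between the literal two-character sequence \n and a real newline character, which is the distinction this report hinges on:

```python
# Plain-Python illustration (not KoboldAI code): the literal two-character
# sequence backslash + "n" is not the same thing as a real newline character.
literal = "\\n"   # two characters: '\' and 'n'
control = "\n"    # one character: an actual line break

print(len(literal), len(control))   # -> 2 1
print(literal == control)           # -> False

# The reported bug: #gametext renders the literal sequence (and special tokens
# such as "</s>") as if they were the single newline control character.
```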
Examples
Example 1
User inputs:
</s>The story of the holy knight
In the hidden city of
(The OPT and OPT-like models use the </s> token to indicate the start of a document/text block/story; without this token, the model is more likely to "think" it is in the middle of a document rather than at the beginning. By sending only the </s> token, you can generate a "truly random" story.)
Text displayed in #gametext:
The story of the holy knight
In the hidden city of
Input sent to the model:
\nThe story of the holy knight\nIn the hidden city of
Expected behavior
Text displayed in #gametext:
</s>The story of the holy knight
In the hidden city of
Input sent to the model:
</s>The story of the holy knight\nIn the hidden city of
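For reference, a small sketch of the expected tokenizer behavior, using the Hugging Face transformers tokenizer for an OPT checkpoint rather than KoboldAI's own pipeline (the checkpoint name and the token id 2 are OPT-specific assumptions):

```python
# Sketch using the Hugging Face transformers tokenizer for an OPT checkpoint
# (an assumption -- KoboldAI's own pipeline may differ). "</s>" should survive
# as a single special token id instead of being rewritten into a newline.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/opt-125m")

prompt = "</s>The story of the holy knight\nIn the hidden city of"
ids = tok(prompt, add_special_tokens=False).input_ids

print(ids[0])           # 2 -- the </s> token id for OPT
print(tok.decode(ids))  # the special token is preserved in the decoded text
```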
Example 2
Model outputs:
$$L_{KL}(p,q) = \\sum_{j=0}^{k-1} p(j)\\log\\left(\\frac{p(j)}{q(j)}\\right)$$
(Pay attention to the last few characters, the \\right part.)
(Galactica from Facebook will generate such output, and I believe other models capable of writing code can generate some combination of \\r, \\n, or other escape sequences too.)
Text displayed in #gametext:
$$L_{KL}(p,q) = \sum_{j=0}^{k-1} p(j)\log\left(\frac{p(j)}{q(j)}
ight)$$
Expected behavior
Text displayed in #gametext:
$$L_{KL}(p,q) = \sum_{j=0}^{k-1} p(j)\log\left(\frac{p(j)}{q(j)}\right)$$
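A minimal reproduction of the Example 2 corruption in plain Python (this is not KoboldAI's code; the replace call merely mirrors what the displayed output suggests is happening):

```python
# Plain-Python reproduction of the Example 2 corruption (not KoboldAI's code).
# The model output contains the literal characters "\right"; re-interpreting
# the "\r" inside it as a carriage return destroys the LaTeX command.
latex = "$$L_{KL}(p,q) = \\sum_{j=0}^{k-1} p(j)\\log\\left(\\frac{p(j)}{q(j)}\\right)$$"

# Buggy path: literal "\r" / "\n" sequences become real control characters,
# which #gametext then renders as new lines.
mangled = latex.replace("\\r", "\r").replace("\\n", "\n")
print(repr(mangled))   # the repr shows a real '\r' control character where "\right" used to be

# Expected path: leave the literal backslash sequences untouched.
print(latex)           # ...\right)$$ displayed intact
```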
For some models this is intentional because the model does not have native new line support; these run in S mode (FSD-based models). OPT should be running in NS mode by default, which should have proper new line behavior.
Is it specifically OPT you are having this issue with? If not, you should be able to manually change the mode to ns mode in the model's .settings file.
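For anyone looking for that file, here is a hedged sketch of the manual change; it assumes the .settings file is JSON in a settings/ folder and uses a "newlinemode" key, so adjust the path and key name to your install if they differ:

```python
# Hedged sketch of flipping the newline mode by hand. The file name, folder,
# and the "newlinemode" key are assumptions about the KoboldAI settings
# format -- check your own .settings file before editing it.
import json

path = "settings/facebook_opt-2.7b.settings"   # hypothetical model settings file

with open(path) as f:
    settings = json.load(f)

settings["newlinemode"] = "ns"   # ns mode, as suggested above

with open(path, "w") as f:
    json.dump(settings, f, indent=4)
```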
For some models this is intentional because the model does not have native new line support; these run in S mode (FSD-based models). OPT should be running in NS mode by default, which should have proper new line behavior.
Yes, indeed. In the settings menu within the WebUI, I can turn on debug mode, which opens an information box at the bottom showing Newline Mode: ns.
Is it specifically OPT you are having this issue with? If not, you should be able to manually change the mode to ns mode in the model's .settings file.
No, I was using the Facebook/Galactica model when I discovered the issue. But the problem persists with the OPT-nerys model, even after manually changing the mode to ns mode in the model's .settings file.
However, I need to correct myself. I've discovered the issues are due to the WebUI.
In the WebUI:
- When using only the input box to interact with the model, the console shows the correct inputs being sent to the model.
- When using the Allow Editing toggle to interact with the model, the console shows the incorrect behavior described in the original issue description.
- However, whichever method you use to interact with the model, the WebUI displays escape sequences and special tokens incorrectly.