Using 'logit_bias' to prevent ChatGPT or GPT-4 from stopping generation
Hi,
As in the example in the official documentation, we can pass logit_bias={"50256": -100} to the Completion API to prevent the <|endoftext|> token from being generated.
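For reference, here is a minimal sketch of that Completion call (the model name and the pre-1.0 openai Python client are assumptions on my part; 50256 is the <|endoftext|> id in the r50k/p50k encodings used by the older completion models):
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Token id 50256 is <|endoftext|> for the legacy completion models;
# the strong negative bias forbids it from being sampled.
completion = openai.Completion.create(
    model="text-davinci-003",  # assumed model; any legacy completion model works
    prompt="Hello!",
    max_tokens=50,
    logit_bias={"50256": -100}
)
print(completion["choices"][0]["text"])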
I'm trying to do the same with ChatCompletion, but it doesn't seem to work. Here is an example:
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# Try to forbid <|endoftext|> (token id 100257 in cl100k_base) via logit_bias
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hello!"}
    ],
    logit_bias={'100257': -100}
)
where I got the <|endoftext|> token id for gpt-3.5-turbo via:
import tiktoken
print(tiktoken.encoding_for_model('gpt-3.5-turbo').eot_token)
# result: 100257
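As a sanity check (a small sketch using tiktoken's public encode API), encoding the literal string gives the same id; allowed_special is required because tiktoken otherwise refuses special tokens in input text:
import tiktoken

enc = tiktoken.encoding_for_model("gpt-3.5-turbo")  # cl100k_base
# allowed_special lets the literal special-token string be encoded as one token
print(enc.encode("<|endoftext|>", allowed_special={"<|endoftext|>"}))
# [100257]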
I'm still getting "finish_reason": "stop" in the returned response. Did I do something wrong?
I found that a positive value works: if I set logit_bias={'100257': 100}, the generation stops immediately and returns an empty message string. It seems that only negative values have no effect. Is this a bug or intentional?
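For completeness, a sketch of the call that produced the empty message (same pre-1.0 client as above):
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

# A positive bias pushes the model to emit <|endoftext|> right away,
# so the returned content is empty and finish_reason is "stop".
completion = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    logit_bias={'100257': 100}
)
print(repr(completion["choices"][0]["message"]["content"]))  # ''
print(completion["choices"][0]["finish_reason"])              # 'stop'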
Chat models stop generating when they produce the <|im_end|> special token of the cl100k_im encoding, not <|endoftext|>.
Note that the cl100k_base encoding does not include the <|im_start|> and <|im_end|> special tokens.
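For illustration, a sketch of building a cl100k_im-style encoding locally by extending cl100k_base (this follows the extension pattern in the tiktoken README; the 100264/100265 ids are the commonly reported ChatML ids and are an assumption here, since the API does not expose them):
import tiktoken

cl100k_base = tiktoken.get_encoding("cl100k_base")

# cl100k_base only defines <|endoftext|>, the <|fim_*|> tokens and <|endofprompt|>;
# the ChatML markers are added on top so they can be inspected locally.
enc = tiktoken.Encoding(
    name="cl100k_im",
    pat_str=cl100k_base._pat_str,
    mergeable_ranks=cl100k_base._mergeable_ranks,
    special_tokens={
        **cl100k_base._special_tokens,
        "<|im_start|>": 100264,
        "<|im_end|>": 100265,
    },
)

print(enc.encode("<|im_end|>", allowed_special={"<|im_end|>"}))  # [100265]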
Unfortunately, the API does not currently allow passing logit_bias for these special tokens. I feel your pain.