openai-java
Query with ".echo(false)" has empty response (when completing lists)
Hi all,
I have some strange behaviour. When I ask GPT-3 to continue a list AND set "echo" to false, it returns an empty text. When I do the same in the Playground (I uncheck "Inject start text" there, assuming this is equivalent to echo=false), it works fine.
For example I give this prompt to GPT-3:
Title above list is Fruits
Write a couple of next list items:
- Apple
- Orange
- Grape
Code:
CompletionRequest completionRequest = CompletionRequest.builder()
.model("text-davinci-003")
.prompt(prompt__see__above)
.maxTokens(tokensLimit)
.temperature(0.7)
.echo(false)
.build();
CompletionResult result = service.createCompletion(completionRequest);
And see a single CompletionChoice:
text: "",
index: 0,
logprobs: null,
finish_reason: "stop"
Do you have an idea for a workaround for this?
UPD.
Just checked with curl. Same result as in the Playground: it works.
Command:
curl https://api.openai.com/v1/completions \
-H 'Content-Type: application/json' \
-H "Authorization: Bearer MY_API_KEY" \
-d '{
"model": "text-davinci-003",
"prompt": "Title above list is Fruits\n\nWrite a couple of next list items\n\n- Apple\n- Orange\n- Grape",
"max_tokens": 256,
"temperature": 0.7,
"echo": false
}' \
--insecure
Response:
{"id":"SOME_ID","object":"text_completion","created":1674921722,"model":"text-davinci-003","choices":[{"text":"\n- Banana\n- Strawberries","index":0,"logprobs":null,"finish_reason":"stop"}],"usage":{"prompt_tokens":25,"completion_tokens":7,"total_tokens":32}}
I can only guess there is some wrong parsing of the response (it starts with "\n" - does that matter?), but I didn't dig deeper.
P. S.: it would be nice for CompletionRequest to have some method like "createCurlCommand()" for debugging such cases.
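Such a helper does not exist in the library; a rough sketch of what it could look like (the class name, method name, and parameter list are made up here, and the JSON escaping is deliberately minimal, for debugging output only):

```java
public class CurlDebug {
    // Minimal JSON string escaping, sufficient for debugging output only.
    static String esc(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"").replace("\n", "\\n");
    }

    // Build a curl command equivalent to the Java request, for copy-paste debugging.
    public static String createCurlCommand(String model, String prompt,
                                           int maxTokens, double temperature, boolean echo) {
        return "curl https://api.openai.com/v1/completions \\\n"
             + "  -H 'Content-Type: application/json' \\\n"
             + "  -H \"Authorization: Bearer $OPENAI_API_KEY\" \\\n"
             + "  -d '{\"model\": \"" + esc(model) + "\", "
             + "\"prompt\": \"" + esc(prompt) + "\", "
             + "\"max_tokens\": " + maxTokens + ", "
             + "\"temperature\": " + temperature + ", "
             + "\"echo\": " + echo + "}'";
    }
}
```

Printing this string right before calling createCompletion would make it easy to compare the Java request with a hand-written curl command like the one above.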
Not sure if this helps. I tried it out on the Playground and in code. At first I thought that maybe you were using the wrong model, but it looks like "text-davinci-insert-002" has been deprecated, so you would just use text-davinci-003.
In the playground, the first call worked, but repeated calls got the below warning:
The model predicted a completion that begins with a stop sequence, resulting in no output. Consider adjusting your prompt or stop sequences.
So maybe it's a stop sequence issue? Maybe instead of using the default, use a visible stop sequence (like "!####!") and then replace them w/ newlines when you need to display it?
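The replace-back half of that suggestion could look like this (a sketch; the marker string is the arbitrary example from above, and the class and method names are made up for illustration):

```java
public class StopMarker {
    // Visible stand-in for a newline, so the model's stop behaviour is easier to see.
    public static final String MARKER = "!####!";

    // Restore real newlines before displaying a completion that used MARKER.
    public static String restoreNewlines(String completion) {
        return completion.replace(MARKER, "\n");
    }
}
```

The MARKER would be passed as the stop sequence on the request (assuming the builder exposes a stop setter) and stripped back out of the completion text with restoreNewlines before display.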
@cryptoapebot
Looks like you are right, as in the result there are 26 tokens used for the prompt and 0 for the generated text.
But setting a stop sequence did not help me in either the Playground or the Java code. Meanwhile the Playground (or curl) processes the input a couple of times and only then stops generation; the Java code stops literally on the first try. I do not understand the difference.
Well, adding "\n" at the end of the prompt is a solution for me.
The only strange thing is why curl does not require this.
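That workaround can be wrapped in a tiny helper (a hypothetical name, not part of the library) so every prompt sent from Java ends with a newline:

```java
public class PromptFix {
    // Append a trailing newline if missing; empirically this avoids the empty
    // completion seen with echo=false on list-continuation prompts.
    public static String ensureTrailingNewline(String prompt) {
        return prompt.endsWith("\n") ? prompt : prompt + "\n";
    }
}
```

Calling `.prompt(PromptFix.ensureTrailingNewline(prompt))` in the builder would then apply the fix consistently.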