openai-node
The result is completely wrong for an unknown reason
Describe the bug
Using the config below, I get unrelated content in the choices array. Putting the same prompt into the OpenAI Playground returns the right content ("The tl;dr version of this would be to simply say that the article is about the importance of choosing the right words when communicating, and that the wrong words can easily lead to misunderstanding.").
If I change the prompt to 'Tl;dr, summarize in one paragraph without bullet:\n in one paragraph without bullet.\n', it works fine. Other content also works fine. It is as if there is some cache somewhere. However, I ran it both from AWS Lambda and locally, and both give the same wrong result.
config: {
  model: 'text-davinci-002',
  prompt: 'Tl;dr, summarize in one paragraph without bullet:\nsummarize in one paragraph without bullet.\n',
  temperature: 0.5,
  max_tokens: 320,
  best_of: 1,
  frequency_penalty: 0,
  presence_penalty: 0,
  logprobs: 0,
  top_p: 1
},
choices: [
  {
    text: '\nThe article discusses the pros and cons of taking a gap year, or a year off between high school and college. The pros include gaining life experience, taking time to figure out what you want to study, and having the opportunity to travel. The cons include falling behind your peers academically, feeling out of place when you return to school, and struggling to find a job after graduation. Ultimately, the decision to take a gap year is a personal one and depends on what you hope to gain from the experience.',
    index: 0,
    logprobs: [Object],
    finish_reason: 'stop'
  }
],
To Reproduce
Simply use the same config above. It keeps happening to me, always with the same result.
Code snippets
No response
OS
Windows
Node version
Node v16
Library version
v3.0.1
It seems like the prompt is not being passed into the API call properly.
@PrimeObjects I'm seeing pretty much exactly the same behavior between playground and the node package for this request:

Using the following snippet with your config:
// v3 SDK setup (Configuration + OpenAIApi), reading the key from the environment
const { Configuration, OpenAIApi } = require('openai');

const configuration = new Configuration({ apiKey: process.env.OPENAI_API_KEY });
const openai = new OpenAIApi(configuration);

(async () => {
  const resp = await openai.createCompletion({
    model: 'text-davinci-002',
    prompt: 'Tl;dr, summarize in one paragraph without bullet:\nsummarize in one paragraph without bullet.\n',
    temperature: 0.5,
    max_tokens: 320,
    best_of: 1,
    frequency_penalty: 0,
    presence_penalty: 0,
    logprobs: 0,
    top_p: 1
  });
  console.log(resp.data);
})();
If you have a consistent way to replicate the issue, let me know. Changing temperature to 0 can help remove additional randomness from the equation that can lead to different behavior.
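For example, a minimal variant of the same call with temperature set to 0, so repeated runs should return (nearly) identical completions (this is just your config with the sampling parameters trimmed down; run it inside the async block above):

// Same request, but deterministic-ish: temperature 0 removes sampling randomness
const deterministic = await openai.createCompletion({
  model: 'text-davinci-002',
  prompt: 'Tl;dr, summarize in one paragraph without bullet:\nsummarize in one paragraph without bullet.\n',
  temperature: 0,
  max_tokens: 320
});
console.log(deterministic.data.choices[0].text);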
Also, I'm a bit confused about what your use case is here: if you are trying to produce a TL;DR for an article or some piece of content, you'll need to pass that content as part of your prompt.
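To illustrate (a rough sketch; articleText is a placeholder for whatever content you actually want summarized, not something from your config):

// Put the content to summarize in the prompt itself, followed by the
// Tl;dr instruction, so the model has something concrete to work with.
const articleText = '<full text of the article goes here>';
const summary = await openai.createCompletion({
  model: 'text-davinci-002',
  prompt: `${articleText}\n\nTl;dr, summarize in one paragraph without bullets:\n`,
  temperature: 0,
  max_tokens: 320
});
console.log(summary.data.choices[0].text);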