openai-cookbook
Please update the guide with at least one complete example of how to use stream=True with gpt-3.5-turbo
would be greatly appreciated. it's kind of wild to take the time to make an entire cookbook with no complete recipes in it that someone could just copy paste and run on their own.
I struggled for 3 hours today trying to get gpt-4 to implement any working example, and it failed over and over even after including the entire cookbook as context.
in the meantime, if anybody has any working examples of streaming the response to a chat completion prompt with the gpt-3.5-turbo model, you would be my hero. i'm also interested in what context i could have given gpt 4 such that it would be able to figure this out. Google's Bard also failed miserably, so it's not just a gpt thing.
thank you!
What would make this guide better or more discoverable?
https://github.com/openai/openai-cookbook/blob/main/examples/How_to_stream_completions.ipynb
this is the cookbook i'm talking about that i'd like to see improved.
1/ if there's anywhere in that guide that includes a code snippet i can copy paste, swap in my api key, and run, then please point me to it -- i would greatly appreciate it. so one way to improve the guide would be to add in an example program that the reader can run. that would be 10X more valuable than all of the space spent on timing differences.
2/ after failing to figure it out myself, when i copy pasted the entire guide as context for gpt-4 and asked it to help create an app that includes a streaming response with gpt-3.5-turbo, it was unable to do it even after multiple hours of back and forth. i then tried Bard, which is able to access the internet itself, including links to all of the documentation, and it also failed to figure it out. so a second way to improve the guide would be to keep refining it until at least gpt-4 can understand it in one shot.
3/ this isn't an improvement to the guide, but if you personally have example code i can use to successfully figure out how to get this working, that would be greatly appreciated. i'm developing an app that lets users learn foreign languages via song lyrics line by line, but without the streaming responses it feels too slow.
thank you very much for your time and for listening!
Here you go:
# imports
import openai  # for OpenAI API calls (pre-1.0 `openai` package interface)

openai.api_key = "YOUR_API_KEY"  # swap in your key, or set the OPENAI_API_KEY env var

# a ChatCompletion request
response = openai.ChatCompletion.create(
    model='gpt-3.5-turbo',
    messages=[
        {'role': 'user', 'content': "What's 1+1? Answer in one word."}
    ],
    temperature=0,
    stream=True,  # this time, we set stream=True
)

for chunk in response:
    print(chunk)
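Printing the raw chunks shows the whole chunk object, not just the text. With stream=True, each chunk carries a partial message in choices[0].delta, and the content fragments need to be joined to rebuild the full reply. Here is a minimal sketch (not from the guide; the helper name collect_stream and the mock chunks are illustrative) that works on plain dicts shaped like the API's chunk objects, so it runs without an API key:

```python
def collect_stream(chunks):
    """Join the content fragments from a sequence of streamed chunks."""
    parts = []
    for chunk in chunks:
        delta = chunk['choices'][0]['delta']
        # the first chunk usually carries only the role; later ones carry content
        if 'content' in delta:
            parts.append(delta['content'])
    return ''.join(parts)

# mock chunks shaped like the streamed ChatCompletion response
mock_chunks = [
    {'choices': [{'delta': {'role': 'assistant'}, 'finish_reason': None}]},
    {'choices': [{'delta': {'content': 'Two'}, 'finish_reason': None}]},
    {'choices': [{'delta': {}, 'finish_reason': 'stop'}]},
]
print(collect_stream(mock_chunks))  # → Two
```

In a real app you would pass the response object from openai.ChatCompletion.create(..., stream=True) straight into the same loop, printing each fragment with print(content, end='', flush=True) to get the typewriter effect.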
How do I know that a stream is over? Is there a "stream.on('end')" or something? I'm using Node with the 'stream' package.
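Two signals mark the end, as far as I understand it: the final content chunk has a non-null finish_reason (e.g. 'stop' or 'length'), and at the raw HTTP/SSE level the API sends a terminal `data: [DONE]` event, so in Node you would watch for that sentinel (and the underlying response stream's 'end' event). A Python sketch of the finish_reason check (the function name and mock chunks are illustrative, not from the guide):

```python
def stream_until_done(chunks):
    """Consume streamed chunks; return the finish_reason that ended the stream."""
    text = []
    for chunk in chunks:
        choice = chunk['choices'][0]
        content = choice['delta'].get('content')
        if content:
            text.append(content)
        if choice['finish_reason'] is not None:
            # stream is over: 'stop' = natural end, 'length' = token limit hit
            return ''.join(text), choice['finish_reason']

# mock chunks shaped like the streamed ChatCompletion response
mock_chunks = [
    {'choices': [{'delta': {'role': 'assistant'}, 'finish_reason': None}]},
    {'choices': [{'delta': {'content': 'Two'}, 'finish_reason': None}]},
    {'choices': [{'delta': {}, 'finish_reason': 'stop'}]},
]
print(stream_until_done(mock_chunks))  # → ('Two', 'stop')
```

With the Python client you can also just let the for loop run out; iteration stops once the stream is finished.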