go-openai
feat: Assistants streaming
Supersedes #731
Based on the original fork of @tanzyy96 just brought up to date.
We've been running this for a couple of weeks and it has served us well.
Example usage:
stream, err := client.CreateThreadAndStream(ctx, openai.CreateThreadAndRunRequest{
	RunRequest: openai.RunRequest{
		AssistantID: AssistantID,
	},
	Thread: openai.ThreadRequest{
		Messages: Messages,
	},
})
if err != nil {
	return err
}
defer stream.Close()
for {
	resp, err := stream.Recv()
	if errors.Is(err, io.EOF) {
		break
	}
	if err != nil {
		return err
	}
	// resp carries the next streamed event (e.g. message deltas).
	_ = resp
}
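The Recv loop above follows the usual io.EOF-terminated stream convention. To show the consumption pattern self-contained, here is a minimal sketch against a mock stream; the `delta` and `stream` types are hypothetical stand-ins, not the go-openai types, whose events carry richer payloads:

```go
package main

import (
	"errors"
	"fmt"
	"io"
	"strings"
)

// delta mimics the shape of one streamed event: a fragment of the
// assistant's reply text. (The real response struct is richer.)
type delta struct {
	Text string
}

// stream is a stand-in for the library's streaming handle: Recv
// returns events in order and io.EOF once exhausted.
type stream struct {
	events []delta
	pos    int
}

func (s *stream) Recv() (delta, error) {
	if s.pos >= len(s.events) {
		return delta{}, io.EOF
	}
	ev := s.events[s.pos]
	s.pos++
	return ev, nil
}

func (s *stream) Close() {}

// collect drains the stream, concatenating text fragments into the
// full reply, stopping cleanly on io.EOF.
func collect(s *stream) (string, error) {
	defer s.Close()
	var b strings.Builder
	for {
		ev, err := s.Recv()
		if errors.Is(err, io.EOF) {
			break
		}
		if err != nil {
			return "", err
		}
		b.WriteString(ev.Text)
	}
	return b.String(), nil
}

func main() {
	s := &stream{events: []delta{{"Hello"}, {", "}, {"world"}}}
	out, err := collect(s)
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

The same loop shape applies whether the stream comes from CreateThreadAndStream or CreateRunStreaming.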
Codecov Report
Attention: Patch coverage is 80.00000% with 12 lines in your changes missing coverage. Please review.
Project coverage is 98.24%. Comparing base (774fc9d) to head (c649497). Report is 47 commits behind head on master.
| Files with missing lines | Patch % | Lines |
|---|---|---|
| run.go | 80.00% | 6 Missing and 6 partials :warning: |
Additional details and impacted files
@@ Coverage Diff @@
## master #737 +/- ##
==========================================
- Coverage 98.46% 98.24% -0.22%
==========================================
Files 24 26 +2
Lines 1364 1478 +114
==========================================
+ Hits 1343 1452 +109
+ Misses 15 14 -1
- Partials 6 12 +6
May I ask a question?
How do I continue the conversation in the next turn using the last thread_id?
cc := openai.NewClientWithConfig(config)
// Resume an existing conversation by starting a new run on the saved thread ID.
stream, err := cc.CreateRunStreaming(context.Background(), "thread_VEPTeIWj1umjdFyUd0Aj4lb2", openai.RunRequest{
	AssistantID: "asst_IuAlZbLkuIgcky26QPB2TQy0",
})
if err != nil {
	t.Fatal(err)
}
//stream, err := cc.CreateThreadAndStream(context.Background(), openai.CreateThreadAndRunRequest{
//	RunRequest: openai.RunRequest{
//		AssistantID: "asst_IuAlZbLkuIgcky26QPB2TQy0",
//	},
//	Thread: openai.ThreadRequest{
//		Messages: []openai.ThreadMessage{
//			{
//				Role:    openai.ThreadMessageRoleUser,
//				Content: "What did I just ask?",
//			},
//		},
//	},
//})
defer stream.Close()
for {
	resp, err := stream.Recv()
	if errors.Is(err, io.EOF) {
		break
	}
	if err != nil {
		t.Fatal(err)
	}
	t.Log("thread_id", resp.ID)
	for _, content := range resp.Delta.Content {
		t.Log(content.Text.Value)
	}
}
There's an API for inserting a message into a thread. Example below:
_, err = a.client.CreateMessage(ctx, threadId, openai.MessageRequest{
Role: openai.ChatMessageRoleUser,
Content: messageText,
})
if err != nil {
logger.Error("failed to create message", zap.Error(err))
return err
}
outStream, err = a.client.CreateRunStreaming(ctx, threadId, openai.RunRequest{
AssistantID: assistantId,
})
Thanks very much!
Has this been merged yet? I really need it right now.
@sashabaranov
Hi Sasha,
What is needed to get this one through review?
I believe it's the most straightforward of the streaming implementations and will give a good base for further development.
I can confirm it has been run extensively with production workloads on our side for a while.
I have been using the official OpenAI SDK: https://github.com/openai/openai-go/blob/main/api.md