go-openai
OpenAI ChatGPT, GPT-3, GPT-4, DALL·E, Whisper API wrapper for Go
See issue: https://github.com/sashabaranov/go-openai/issues/186. Add helpers to reduce boilerplate code in the tests. To test this PR, run `go test ./...`.
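A minimal sketch of the kind of test helper the PR describes, assuming the standard `net/http/httptest` package and go-openai's `DefaultConfig`/`NewClientWithConfig`; the helper name and exact shape here are illustrative, not necessarily what the PR adds:

```go
package openai_test

import (
	"net/http"
	"net/http/httptest"

	openai "github.com/sashabaranov/go-openai"
)

// setupTestClient is a hypothetical helper: it starts a local HTTP test server
// and returns a client pointed at it, so each test only has to register handlers.
func setupTestClient(handler http.Handler) (*openai.Client, func()) {
	server := httptest.NewServer(handler)
	config := openai.DefaultConfig("test-token")
	config.BaseURL = server.URL + "/v1"
	return openai.NewClientWithConfig(config), server.Close
}
```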
The content of each request and response is incomplete. How can I solve this problem?
I'm looking for a way to reset the current dialog context with ChatGPT. I've found a way to do it using the OpenAI Python library:

```
response = openai.Completion.create(
    engine="davinci",
    prompt="Hello, how...
```
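With go-openai, "resetting" the context amounts to sending a request whose `Messages` slice no longer carries the previous turns, since the API itself is stateless. A minimal sketch using the library's `ChatCompletionRequest`/`ChatCompletionMessage` types:

```go
package main

import (
	"context"
	"fmt"
	"os"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))

	// The "dialog context" is just the Messages slice you send.
	// To reset the conversation, start over with a fresh slice instead of appending.
	messages := []openai.ChatCompletionMessage{
		{Role: openai.ChatMessageRoleUser, Content: "Hello, how are you?"},
	}

	resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
		Model:    openai.GPT3Dot5Turbo,
		Messages: messages,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```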
Reading the OpenAI docs on [Chat completion](https://platform.openai.com/docs/guides/chat/managing-tokens), they specifically call out counting tokens on input (as you are billed on these as well as the amount of tokens on...
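go-openai does not count tokens itself; a common approach is to tokenize the prompt locally before sending it. A rough sketch, assuming the third-party `github.com/pkoukk/tiktoken-go` package (not part of go-openai):

```go
package main

import (
	"fmt"

	"github.com/pkoukk/tiktoken-go"
)

func main() {
	// Encode the prompt with the model's tokenizer to estimate billed input tokens.
	tkm, err := tiktoken.EncodingForModel("gpt-3.5-turbo")
	if err != nil {
		panic(err)
	}
	tokens := tkm.Encode("Hello, how are you?", nil, nil)
	fmt.Println("input tokens:", len(tokens))
}
```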
Added word-level timestamp granularity support as outlined in the OpenAI API documentation for the transcription endpoint. ref: https://platform.openai.com/docs/api-reference/audio/createTranscription#audio-createtranscription-timestamp_granularities 
**Describe the change** Add new struct fields `timestamp_granularities[]` and `words` for the Whisper transcription API **Provide OpenAI documentation link** * [timestamp_granularities[]](https://platform.openai.com/docs/api-reference/audio/createTranscription#audio-createtranscription-timestamp_granularities) * [words](https://platform.openai.com/docs/api-reference/audio/verbose-json-object#audio/verbose-json-object-words) **Describe your solution** Add OpenAI's newly added parameter...
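A sketch of how the new fields might be used once merged, assuming the field and constant names proposed in the PR (`TimestampGranularities`, `TranscriptionTimestampGranularityWord`, `Words`); the merged names could differ:

```go
package main

import (
	"context"
	"fmt"
	"os"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))

	resp, err := client.CreateTranscription(context.Background(), openai.AudioRequest{
		Model:    openai.Whisper1,
		FilePath: "speech.mp3",
		// Word-level timestamps require the verbose_json response format.
		Format:                 openai.AudioResponseFormatVerboseJSON,
		TimestampGranularities: []openai.TranscriptionTimestampGranularity{openai.TranscriptionTimestampGranularityWord},
	})
	if err != nil {
		panic(err)
	}
	for _, w := range resp.Words {
		fmt.Printf("%s [%.2f-%.2f]\n", w.Word, w.Start, w.End)
	}
}
```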
Your issue may already be reported! Please search on the [issue tracker](https://github.com/sashabaranov/go-openai/issues) before creating one. **Describe the bug** A clear and concise description of what the bug is. If it's...
In chat.go, completion.go, and edits.go, Temperature and TopP are defined with the json modifier omitempty:

```go
chat.go:       Temperature float32 `json:"temperature,omitempty"`
chat.go:       TopP        float32 `json:"top_p,omitempty"`
completion.go: Temperature float32 `json:"temperature,omitempty"`
completion.go:...
```
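The practical consequence is that a zero value can never be sent explicitly: with `omitempty`, `encoding/json` drops the field entirely when it is 0, so an explicit temperature of 0 is indistinguishable from "not set". A minimal standalone illustration:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type request struct {
	Temperature float32 `json:"temperature,omitempty"`
	TopP        float32 `json:"top_p,omitempty"`
}

func main() {
	// Both fields are their zero value, so omitempty strips them from the payload.
	b, _ := json.Marshal(request{Temperature: 0, TopP: 0})
	fmt.Println(string(b)) // prints {}
}
```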
When using the Go SDK to request /completions, finish_reason is "" instead of null. Requesting vllm directly:

```
...
data: {"id":"cmpl-096176162ed84f0e85e6b5aece29f27b","object":"chat.completion.chunk","created":8209815,"model":"public/qwen1-5-72b-chat-int4@main","choices":[{"index":0,"delta":{"content":""},"finish_reason":null}]}

data: {"id":"cmpl-096176162ed84f0e85e6b5aece29f27b","object":"chat.completion.chunk","created":8209815,"model":"public/qwen1-5-72b-chat-int4@main","choices":[{"index":0,"delta":{"content":"\n"},"finish_reason":null}]}

data: {"id":"cmpl-096176162ed84f0e85e6b5aece29f27b","object":"chat.completion.chunk","created":8209815,"model":"public/qwen1-5-72b-chat-int4@main","choices":[{"index":0,"delta":{"content":""},"finish_reason":"stop"}],"usage":{"prompt_tokens":12,"total_tokens":55,"completion_tokens":43}}
```

Requesting vllm through the Go SDK:

```
...
```
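One plausible reason (an assumption, not confirmed from the SDK source) is Go's JSON zero-value behavior: a plain string field serializes as "" rather than null, so an unset finish_reason comes out as an empty string unless the field is a pointer or tagged omitempty. A minimal illustration with a stand-in type:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// choice is a stand-in for a streaming chunk choice; go-openai's real type may differ.
type choice struct {
	FinishReason string `json:"finish_reason"`
}

func main() {
	b, _ := json.Marshal(choice{})
	fmt.Println(string(b)) // prints {"finish_reason":""}, never null
}
```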
Is there any support for JSON mode in vision preview? I've tried to get it working with `GPT4Turbo1106` as well, but it does not work. Below is my example:

```
...
```
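For reference, JSON mode in go-openai is requested through `ResponseFormat`; a minimal sketch, assuming the `ChatCompletionResponseFormat` type and the `json_object` constant exist in the version in use (vision-preview models may still reject it server-side):

```go
package main

import (
	"context"
	"fmt"
	"os"

	openai "github.com/sashabaranov/go-openai"
)

func main() {
	client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))

	resp, err := client.CreateChatCompletion(context.Background(), openai.ChatCompletionRequest{
		Model: openai.GPT4Turbo1106,
		// Ask the API to return a JSON object rather than free-form text.
		ResponseFormat: &openai.ChatCompletionResponseFormat{
			Type: openai.ChatCompletionResponseFormatTypeJSONObject,
		},
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: "Reply with a JSON object describing the weather."},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(resp.Choices[0].Message.Content)
}
```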