OpenAI-DotNet
Assistants Beta V2
OpenAI has released a new version of the Assistants API, which significantly changes the API surface.
Subtasks:
- Add support for GPT-4o
- #267
- #284
Great to see this stuff making its way into the code!
Just taking a quick look at the create run docs, there are a few more parameters still missing (a rough sketch follows the list):
- `additional_instructions`: Appends additional instructions at the end of the instructions for the run. This is useful for modifying the behavior on a per-run basis without overriding other instructions.
- `additional_messages`: Adds additional messages to the thread before creating the run.
- `top_p`: An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature, but not both.
- `max_prompt_tokens`: The maximum number of prompt tokens that may be used over the course of the run. The run will make a best effort to use only the number of prompt tokens specified, across multiple turns of the run.
- `max_completion_tokens`: The maximum number of completion tokens that may be used over the course of the run. The run will make a best effort to use only the number of completion tokens specified, across multiple turns of the run.
- `truncation_strategy`: Controls how a thread will be truncated prior to the run. Use this to control the initial context window of the run.
- `response_format`: Specifies the format that the model must output.
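Here's a rough sketch of how these could surface on the run-creation request. The class name and shapes below are hypothetical placeholders, not the library's actual API surface; the nested types for messages, truncation strategy, and response format are left as plain objects.

```csharp
// Hypothetical sketch only; property names mirror the REST API fields,
// but this is NOT OpenAI-DotNet's actual request type.
using System.Collections.Generic;
using System.Text.Json.Serialization;

public sealed class CreateRunRequestSketch
{
    [JsonPropertyName("additional_instructions")]
    public string AdditionalInstructions { get; set; }

    [JsonPropertyName("additional_messages")]
    public IReadOnlyList<object> AdditionalMessages { get; set; }

    [JsonPropertyName("top_p")]
    public double? TopP { get; set; }

    [JsonPropertyName("max_prompt_tokens")]
    public int? MaxPromptTokens { get; set; }

    [JsonPropertyName("max_completion_tokens")]
    public int? MaxCompletionTokens { get; set; }

    [JsonPropertyName("truncation_strategy")]
    public object TruncationStrategy { get; set; }

    [JsonPropertyName("response_format")]
    public object ResponseFormat { get; set; }
}
```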
What's the best way to track these as well? I can have a go at implementing some of them.
I wasn't planning on tracking these individually.
I wonder if there is some way we could consume the OpenAI API spec so that when new properties get released it can be partially automated?
https://github.com/openai/openai-openapi/blob/master/openapi.yaml
> I wonder if there is some way we could consume the OpenAI API spec so that when new properties get released it can be partially automated?
> https://github.com/openai/openai-openapi/blob/master/openapi.yaml
I already have something locally that I use, but it's not good enough for all the edge cases.
But yes, I do use that repository's spec to generate code for this library.
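As a rough illustration (not the maintainer's actual generator), one way to spot newly released properties is to deserialize the published openapi.yaml and diff the schema's property names against the library's model. The sketch below assumes YamlDotNet and the `CreateRunRequest` schema name from the spec.

```csharp
// A minimal sketch for spotting newly added spec properties: read openapi.yaml
// with YamlDotNet and print the properties of the CreateRunRequest schema,
// which can then be diffed against the library's request model (e.g. in a test).
using System;
using System.Collections.Generic;
using System.IO;
using YamlDotNet.Serialization;

public static class SpecDiffSketch
{
    public static void Main()
    {
        // openapi.yaml downloaded from the openai/openai-openapi repository
        var yaml = File.ReadAllText("openapi.yaml");
        var deserializer = new DeserializerBuilder().Build();
        var root = deserializer.Deserialize<Dictionary<object, object>>(yaml);

        // Navigate components -> schemas -> CreateRunRequest -> properties
        var components = (Dictionary<object, object>)root["components"];
        var schemas = (Dictionary<object, object>)components["schemas"];
        var createRun = (Dictionary<object, object>)schemas["CreateRunRequest"];
        var properties = (Dictionary<object, object>)createRun["properties"];

        foreach (var name in properties.Keys)
        {
            Console.WriteLine(name); // compare against the C# request model
        }
    }
}
```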
Hi @StephenHodgson
Thank you for the great library!
OpenAI has released the gpt-4o model, and it's not compatible with the Assistants v1 API. It throws this error:
```json
{
  "message": "The requested model 'gpt-4o' cannot be used with the Assistants API in v1. Follow the migration guide to upgrade to v2: https://platform.openai.com/docs/assistants/migration.",
  "type": "invalid_request_error",
  "param": "model",
  "code": "unsupported_model"
}
```
Is it possible to generate a library with Assistants v2?
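For anyone blocked on this before library support lands, here's a minimal raw-HTTP workaround sketch (not part of OpenAI-DotNet), assuming only the documented `OpenAI-Beta: assistants=v2` header and the standard assistants endpoint:

```csharp
// Workaround sketch: call the Assistants v2 API directly with HttpClient,
// selecting v2 via the "OpenAI-Beta: assistants=v2" header so gpt-4o is accepted.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class AssistantsV2Workaround
{
    public static async Task Main()
    {
        var apiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY");

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", apiKey);
        client.DefaultRequestHeaders.Add("OpenAI-Beta", "assistants=v2");

        // Create a gpt-4o assistant against the v2 API.
        var payload = "{\"model\":\"gpt-4o\",\"name\":\"v2-test\"}";
        using var content = new StringContent(payload, Encoding.UTF8, "application/json");

        var response = await client.PostAsync("https://api.openai.com/v1/assistants", content);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```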
Sorry I haven't had time to get around to it.
I'll see what I can do this weekend.
I'm also happy to help, but I might need a quick kick-off with you on the best way to approach some of these changes. I'll ping you on Discord.
Most of the work is done. Just finishing up streaming support and fixing a few bugs with the tool cache.
Hello! When can we expect GPT-4o support, considering that most of the work has already been completed?