open-interpreter
LiteLLM custom provider
Describe the changes you have made:
A suggestion to solve the api_key issue... it isn't LiteLLM that requires an api_key, it's openai.completion(). When I don't pass an API key, I get both a traceback and the output of the local LLM.
Reference any relevant issues (e.g. "Fixes #000"):
Pre-Submission Checklist (optional but appreciated):
- [ ] I have included relevant documentation updates (stored in /docs)
- [x] I have read docs/CONTRIBUTING.md
- [x] I have read docs/ROADMAP.md
OS Tests (optional but appreciated):
- [x] Tested on Windows
- [ ] Tested on MacOS
- [ ] Tested on Linux
The api_key must be set (https://github.com/openai/openai-python/blob/15488ce07cb97535d8564e82dd5cda3481bc1a81/src/openai/_client.py#L97C12-L97C20). Anything using the OpenAI format needs the api_key to not be None; we should default it to an empty string or "dummy_key".
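As a rough sketch of that default (the model name, URL, and placement below are placeholders, not the PR's actual code), any non-empty string is enough before the call reaches the OpenAI client:

```python
import os
import litellm

# Illustrative params for a local, OpenAI-compatible server; the model
# name and URL are placeholders, not open-interpreter defaults.
params = {
    "model": "local",
    "custom_llm_provider": "openai",
    "api_base": "http://localhost:1234/v1",
    "messages": [{"role": "user", "content": "Hello"}],
}

# openai.OpenAI() raises when api_key resolves to None, so default it to a
# harmless placeholder if neither the params nor the environment provide one.
if not params.get("api_key") and not os.environ.get("OPENAI_API_KEY"):
    params["api_key"] = "dummy_key"

response = litellm.completion(**params)
```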
I think this breaks something in the terminal interface... Fixed.
This is such a bizarre problem that I think stems from LiteLLM relying on OS env vars (which I encouraged them to do early on!) so we can never really know if an API key has been set.
I think we shouldn't expose custom_llm_provider to OI users. I'm happy to simplify / be opinionated on the fact that if an API base is set, it needs to be OpenAI-compatible. My understanding of LiteLLM is that prepending "openai/" to the model name simply sets the custom_llm_provider to OpenAI, but it's probably better to use custom_llm_provider under the hood so the model name stays accurate. I like model being "local" instead of "openai/x" or whatever we had before!
I'm happy to merge if this is minified to just set "custom_llm_provider" to "openai", in that place where we would otherwise prepend "openai/" to the model name, and set model to "local" if it's not already set (I'm also happy to do this).
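A minimal sketch of what that minified change could look like (variable names and values are illustrative, not the actual open-interpreter source):

```python
# Illustrative before/after of the minified change; placeholders only.
model, api_base = "mistral-7b-instruct", "http://localhost:1234/v1"
params = {"api_base": api_base, "messages": [{"role": "user", "content": "Hello"}]}

# Before: rewrite the model name so litellm picks the OpenAI provider.
# params["model"] = "openai/" + model

# After: set the provider explicitly and keep the model name readable.
params["custom_llm_provider"] = "openai"
params["model"] = "local"
```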
I'll also talk to LiteLLM about letting us know when an API key isn't set (so we can pass a "dummy_key" like you suggest!).
The api_key must be set: https://github.com/openai/openai-python/blob/15488ce07cb97535d8564e82dd5cda3481bc1a81/src/openai/_client.py#L97C12-L97C20 — anything using the OpenAI format needs the api_key to not be None or "".
```
interpreter.complete(**params)
  litellm.complete(**params)
    openai.complete(**params)
```
This is the basic flow, and it doesn't really matter where the API key is set. I'm unsure what the best way to solve this is; I think I have a solution, but we'll discuss this tomorrow.
```
['openai', 'custom_openai', 'text-completion-openai', 'cohere', 'anthropic', 'replicate', 'huggingface', 'together_ai', 'openrouter', 'vertex_ai', 'palm', 'gemini', 'ai21', 'baseten', 'azure', 'sagemaker', 'bedrock', 'vllm', 'nlp_cloud', 'petals', 'oobabooga', 'ollama', 'ollama_chat', 'deepinfra', 'perplexity', 'anyscale', 'mistral', 'maritalk', 'voyage', 'cloudflare', 'xinference', 'custom']
```
available providers ^^
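For reference, that list looks like the output of litellm.provider_list (an assumption about where it was printed from):

```python
import litellm

# Assumption: the list above is litellm.provider_list, the set of provider
# names litellm knows how to route to.
print(litellm.provider_list)
```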
> This is such a bizarre problem that I think stems from LiteLLM relying on OS env vars (which I encouraged them to do early on!) so we can never really know if an API key has been set.
I think we can use litellm.validate_api_key to test if it will crash, then set the key to "dummy_key" and validate again. If it passes, all good; if it still fails, it is an invalid key. @KillianLucas
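A sketch of that fallback idea (assuming the missing/invalid-key case surfaces as an AuthenticationError from litellm; the exact validation helper may differ):

```python
import litellm
from litellm.exceptions import AuthenticationError

def completion_with_key_fallback(**params):
    """Sketch: try the call as-is; on an auth failure, retry once with a
    placeholder key, which local OpenAI-compatible servers generally accept."""
    try:
        return litellm.completion(**params)
    except AuthenticationError:
        return litellm.completion(**{**params, "api_key": "dummy_key"})
```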
I believe this is ready for merging now @KillianLucas
Hey @Notnaton, this looks mostly good, but again, I think we shouldn't expose custom_llm_provider to OI users.
Will merge if this is minified to just set "custom_llm_provider" to "openai" and set the local model to "local" instead of "openai/x" (both great things this PR does), but I'm happy to minify it just before the next update as well.
https://github.com/KillianLucas/open-interpreter/issues/964
@KillianLucas I have removed a bunch of unnecessary things. Hopefully this aligns with:
> minified to just set "custom_llm_provider" to "openai"
It checks in init whether the model is set to a compatible provider; if not, it sets the format to OpenAI's.
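A hypothetical version of that check (the function and names below are illustrative, not the actual open-interpreter code):

```python
import litellm

def resolve_custom_provider(model: str) -> str | None:
    """Illustrative check: if the model name's prefix is not a provider
    litellm recognises, fall back to the OpenAI-compatible format."""
    prefix = model.split("/", 1)[0] if "/" in model else None
    if prefix in litellm.provider_list:
        return None      # litellm can already route this model
    return "openai"      # treat the endpoint as OpenAI-compatible

print(resolve_custom_provider("ollama/mistral"))  # None
print(resolve_custom_provider("local"))           # "openai"
```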
Actually, I think this might break custom Ollama stuff and Azure with a custom endpoint, because those don't use the openai provider but people do specify an API base. I remember now that was the reason we added those conditionals like "if model starts with 'ollama'", etc.
So I think we do need to keep those conditionals, but I do think it's smart to just set the custom_llm_provider to openai instead of prepending "openai/".
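A rough sketch of keeping those conditionals while dropping the prefix (the provider checks and names here are assumptions, not the merged code):

```python
def route_model(params: dict, model: str, api_base: str | None) -> dict:
    """Sketch: keep the provider-specific conditionals, but use
    custom_llm_provider for generic OpenAI-compatible endpoints."""
    if model.startswith(("ollama", "azure")):
        params["model"] = model                    # litellm routes these itself
    elif api_base:
        params["custom_llm_provider"] = "openai"   # generic OpenAI-compatible server
        params["model"] = model or "local"
    else:
        params["model"] = model
    return params
```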
I'm going to merge it and then make that change; you've done enough on this lol! Thank you so much, Anton, for everything.