open-interpreter
How to configure the OpenAI API proxy endpoint?
Is your feature request related to a problem? Please describe.
Hi, how do I set an API proxy endpoint instead of the official OpenAI API address? Could I place it in .env? What is the environment variable name?
Describe the solution you'd like
How to configure the OpenAI API proxy endpoint?
Describe alternatives you've considered
No response
Additional context
No response
You can pass --api_base https://host.com/v1 on the command line.
Or you can edit the config:
interpreter --config
Then add api_base: "https://host.com/v1"
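If you drive Open Interpreter from Python instead of the CLI, the equivalent setup looks roughly like the sketch below. This assumes a 0.2.x-style Python API where the LLM settings live under interpreter.llm; the attribute names may differ in your version.

```python
# Minimal sketch: point Open Interpreter at an OpenAI-compatible proxy endpoint.
# Assumes a 0.2.x-style API (interpreter.llm.*); check the docs for your version.
from interpreter import interpreter

interpreter.llm.api_base = "https://host.com/v1"  # your proxy endpoint
interpreter.llm.api_key = "sk-..."                # whatever key the proxy expects

interpreter.chat("Print 'hello world' in Python")
```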
@xjspace let me know if this solves your question.
@Notnaton, strangely, when I did this, OpenAI still appears in the model name when running code, even though I have been trying to run Code Llama through Hugging Face. This is the line I'm referring to: Model: openai/huggingface/codellama/CodeLlama-34b-Instruct-hf
Interpreter Info
Vision: False
Model: openai/huggingface/codellama/CodeLlama-34b-Instruct-hf
Function calling: None
Context window: 3000
Max tokens: 400
Auto run: False
API base: https://api-inference.huggingface.co/models/codellama/CodeLlama-34b-Instruct-hf
Offline: False
This is because we add the openai/ prefix so that LiteLLM uses the OpenAI format to communicate with the endpoint. There is a change coming up to stop doing this, possibly in the next update:
https://github.com/KillianLucas/open-interpreter/pull/955
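For anyone wondering what the prefix actually does: LiteLLM routes requests by provider prefix, so a model name beginning with openai/ tells it to speak the OpenAI chat-completions format to whatever api_base points at. A rough illustration with placeholder values (not Open Interpreter's internal code):

```python
# Sketch of why the "openai/" prefix appears: LiteLLM treats it as a routing hint,
# selecting the OpenAI-compatible wire format and sending the request to api_base
# instead of the default api.openai.com.
import litellm

response = litellm.completion(
    model="openai/gpt-3.5-turbo",      # "openai/" prefix -> OpenAI-format request
    api_base="https://host.com/v1",    # custom/proxy endpoint from the config above
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```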