Anton Solbjørg


You can pass the flag `--api_base https://host.com/v1`, or you can edit the config: run `interpreter --config` and add `api_base: "https://host.com/v1"`. @xjspace let me know if this solves your issue.
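For reference, a minimal sketch of both approaches (the host URL is a placeholder; replace it with your actual endpoint):

```bash
# Option 1: pass the base URL on the command line (host is a placeholder).
interpreter --api_base https://host.com/v1

# Option 2: open the config file for editing...
interpreter --config
# ...then add this line to config.yaml:
#   api_base: "https://host.com/v1"
```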

This is because we add it so that litellm uses the OpenAI format to communicate with the endpoint. There is a change coming up to stop doing this, possibly in the next update:

https://github.com/KillianLucas/open-interpreter/pull/955

@maxpetrusenko please post your config.yaml (you can open it with `interpreter --config`).

@maxpetrusenko you need to delete your config.yaml file and let it regenerate. Go to where Open Interpreter stores it and delete it. Windows: it would typically be something like `C:\Users\[Username]\AppData\Local\Open Interpreter\Open...

@skywalk163 Is this still an issue? There have been many updates since 0.1.3; please run `pip install --upgrade open-interpreter`. Can anyone confirm whether it's fixed?

This version pin [openai v0.28.1] is required by litellm: https://github.com/BerriAI/litellm/blob/85932ac247343119719a857808d4398d63c085b2/poetry.lock#L637C7-L637C7 Here is the corresponding issue in litellm: https://github.com/BerriAI/litellm/issues/774

Hi, could you run this check? It will help narrow down what the issue is.

1. Start LM Studio
2. Load the model
3. Start the server

Now, copy the curl...
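For anyone following along, here is a minimal sketch of such a request, assuming LM Studio's default local server address (`http://localhost:1234`) and its OpenAI-compatible chat completions route; adjust the port if you changed it in LM Studio:

```bash
# Sketch: send a test request to LM Studio's local OpenAI-compatible server.
# A JSON response here confirms the server is up and the model is responding.
curl http://localhost:1234/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "Hello"}],
    "temperature": 0.7
  }'
```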

> In options, this option is the only one that is off: Cross-Origin-Resource-Sharing (CORS)

Turn that one on and try interpreter again.

How are you running interpreter? Is it in a virtual environment, like conda? It seems like the curl command went through, so the server is running. But for some reason...
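If it helps, here are a couple of quick checks (a sketch; exact paths will differ per setup) to see which environment interpreter is actually running from:

```bash
# Show which interpreter executable is on the PATH,
# and therefore which environment it lives in.
which interpreter

# Show where the open-interpreter package is installed
# and which version is active.
pip show open-interpreter
```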