Merlin von Trott

5 comments by Merlin von Trott

I think you might just need to pass the model as `--model "anthropic/claude-3-haiku"`. It works for me on Ubuntu. ![Screenshot from 2024-05-10 07-39-58](https://github.com/OpenInterpreter/open-interpreter/assets/33913822/e155346d-81cc-432c-b2db-ca391732f1d9)

It also works with OpenRouter: ![Screenshot from 2024-05-10 07-43-11](https://github.com/OpenInterpreter/open-interpreter/assets/33913822/363c0b57-be08-49a5-a2af-485e45ce8965)

Here is the OpenRouter documentation on multi-modal models: https://openrouter.ai/docs#images-_-multimodal-requests

Sorry, my mistake ... litellm already implements this. I should have just omitted the api_base: https://openrouter.ai/api/v1/chat/completions

OpenRouter uses LiteLLM to serve the models. If that were implemented, it would give us the ability to use all kinds of models from local and cloud providers (OpenAI, Anthropic, Together, ...).