Merlin von Trott
I think you might just need to specify the model as --model "anthropic/claude-3-haiku". It works for me on Ubuntu.
It also works with OpenRouter.
Here is the OpenRouter documentation on multimodal requests: https://openrouter.ai/docs#images-_-multimodal-requests
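For reference, a multimodal request body along the lines of those docs looks roughly like this. This is only a sketch: OpenRouter follows the OpenAI-compatible chat schema, where image inputs go into the message "content" as a list mixing text and image_url parts. The model name and image URL here are placeholders.

```python
import json

# Sketch of an OpenAI-compatible multimodal chat payload, as OpenRouter expects.
# The image URL below is a placeholder, not a real resource.
payload = {
    "model": "anthropic/claude-3-haiku",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/cat.png"},
                },
            ],
        }
    ],
}

# This JSON body would be POSTed to the chat completions endpoint.
print(json.dumps(payload, indent=2))
```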
Sorry, my mistake ... litellm already implements this. I should have just omitted the api_base: https://openrouter.ai/api/v1/chat/completions
OpenRouter uses litellm to serve the models. If that were implemented, it would let us use all kinds of models from local and cloud providers (OpenAI, Anthropic, Together, ...).
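To illustrate the point about omitting the api_base: litellm routes requests by provider prefix in the model name, so "openrouter/anthropic/claude-3-haiku" goes through OpenRouter without any custom endpoint. A minimal sketch, assuming litellm is installed and OPENROUTER_API_KEY is set:

```python
import os

# litellm is a third-party package (`pip install litellm`). With the
# "openrouter/" prefix it talks to OpenRouter directly, so no api_base
# is needed.
model = "openrouter/anthropic/claude-3-haiku"
messages = [{"role": "user", "content": "Hello"}]

if os.environ.get("OPENROUTER_API_KEY"):
    from litellm import completion

    response = completion(model=model, messages=messages)
    print(response.choices[0].message.content)
else:
    # No key set: skip the network call in this sketch.
    print("Set OPENROUTER_API_KEY to run this example.")
```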