Add `get_models` for Mistral, like how Ollama's `get_models` works
Description
Call Mistral's `v1/models` endpoint to get an up-to-date list of models, and check which capabilities they have.
Screenshots
TODO
Proof-of-concept work, but there are still things I need to work on:
- [x] Does Mistral use the same models API as OpenAI? No, OpenAI does not expose capabilities
- [x] Mistral's `v1/models` returns lots of duplicates; these are removed via a dedup function. It's a bit ugly, but I'm happy with it now
- [x] Manually remove models with `ocr` or `embed`, because I have tested that they won't work for chat. Is there documentation on which models should work with chat? Can this be implemented in a cleaner way?
- [x] Copied the logic from Ollama's `get_models`. Double-check its logic: do we need all of it? Can we reuse components?
- [x] How to pick a good default model: should be kept outside this merge request
- [x] Cleanup: remove `env.models_endpoint` (overkill)
- [x] Filter out deprecated models
- [x] Cleanup: model opts like `vision` and `can_use_tools` are set after setup. This might have downsides. For example, you never get the error message `"The image Slash Command is not enabled for this adapter"`, because it thinks all models can use vision, since it uses `opts.vision` set before setup. Might be worth solving it like `get_models.check_thinking_capability`
- [x] Check contribution guidelines
- [x] Add test
- [x] Update documentation
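The dedup and filtering steps listed above could look roughly like this. A minimal sketch in Python rather than the plugin's Lua; the choice of `name` as the dedup key and the exact field names are assumptions based on Mistral's `v1/models` response format:

```python
def filter_models(models):
    """Sketch of the get_models filtering: keep chat-capable,
    non-deprecated models and collapse alias duplicates."""
    chat_models = {}
    for m in models:
        caps = m.get("capabilities", {})
        # keep only models that can actually chat (drops ocr/embed-style models)
        if not caps.get("completion_chat"):
            continue
        # filter out deprecated models
        if m.get("deprecation") is not None:
            continue
        # dedup: many entries are aliases of the same underlying model,
        # so key on the canonical "name" and keep the first occurrence
        chat_models.setdefault(m["name"], m)
    return list(chat_models.values())

sample = [
    {"id": "mistral-small-latest", "name": "mistral-small-2501",
     "capabilities": {"completion_chat": True}, "deprecation": None},
    {"id": "mistral-small-2501", "name": "mistral-small-2501",
     "capabilities": {"completion_chat": True}, "deprecation": None},
    {"id": "voxtral-mini-latest", "name": "voxtral-mini-transcribe-2507",
     "capabilities": {"completion_chat": False}, "deprecation": None},
]
print([m["id"] for m in filter_models(sample)])  # ['mistral-small-latest']
```

Keying on the canonical name (rather than the id) is one way to make `-latest` aliases and dated releases collapse into a single entry.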
Checklist
- [x] I've read the contributing guidelines and have adhered to them in this PR
- [x] I've added test coverage for this fix/feature
- [x] I've run `make all` to ensure docs are generated, tests pass and my formatting is applied
- [ ] (optional) I've updated `CodeCompanion.has` in the init.lua file for my new feature
- [x] (optional) I've updated the README and/or relevant docs pages
I'd be fine accepting this PR, but we need to be able to switch off reasoning and function calling if the models don't support them, like we do with Copilot. The reason I've not allowed this in the OpenAI adapter thus far is that the models endpoint was so primitive.
Yeah, in my merge request I can use `capabilities` to check whether a model supports vision and tool calling. Some models should also support reasoning, but that is not visible in the API.
{
  "id": "voxtral-mini-latest",
  "object": "model",
  "created": 1760349603,
  "owned_by": "mistralai",
  "capabilities": {
    "completion_chat": true,
    "function_calling": false,
    "completion_fim": false,
    "fine_tuning": false,
    "vision": false,
    "ocr": false,
    "classification": false,
    "moderation": false,
    "audio": false
  },
  "name": "voxtral-mini-transcribe-2507",
  "description": "A mini transcription model released in July 2025",
  "max_context_length": 16384,
  "aliases": [
    "voxtral-mini-transcribe-2507",
    "voxtral-mini-2507"
  ],
  "deprecation": null,
  "deprecation_replacement_model": null,
  "default_model_temperature": 0.0,
  "type": "base"
}
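The `capabilities` block in a response like the one above maps fairly directly onto per-model opts. A hedged sketch (Python stand-in for the plugin's Lua; the opts names `vision` and `can_use_tools` come from the TODO discussion, and the exact mapping is my assumption):

```python
def model_opts(model):
    """Derive per-model opts from Mistral's capabilities block."""
    caps = model.get("capabilities", {})
    return {
        "vision": caps.get("vision", False),
        "can_use_tools": caps.get("function_calling", False),
        # reasoning support is NOT exposed by the API, so it would need
        # a separate heuristic (e.g. a name-based check)
    }

voxtral = {"capabilities": {"completion_chat": True,
                            "function_calling": False,
                            "vision": False}}
print(model_opts(voxtral))  # {'vision': False, 'can_use_tools': False}
```

Defaulting every missing capability to `False` errs on the safe side: a model only gets a feature the API explicitly advertises.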
I wanted to check whether OpenAI works the same way, but they don't have a `capabilities` field.
{
  "id": "gpt-5-pro-2025-10-06",
  "object": "model",
  "created": 1759469707,
  "owned_by": "system"
},
I'm relatively happy with how the code is now, but I still have some questions/comments about it:
- I don't know a smart and easy way to pick the best default Mistral model, so I kept it the same: `mistral-small-latest`. But maybe `devstral-small-latest` or `devstral-medium-latest` would be a better fit.
- Will be fixed by #2299: a weakness of the current setup is that when a user picks a model, they can't easily see which features the model supports. Might be an idea for the future to add icons to indicate whether the model/adapter supports vision, tool calling, streaming and reasoning, plus things like context window.
- Just an idea: I did not generate a custom error message if `env.api_key` was not set. I could add one if you want. Maybe an idea for later: you could modify `env_replaced` so it supports `env = { api_key = "mandatory: MISTRAL_API_KEY" }`. That would generate a generic error message if the key is not set. The error message could state which env key was missing for which adapter, reference the documentation for how to set it, or even prompt for it.
- I see a small problem with how Copilot and now Mistral expose capabilities like vision: if you try to use `/image` with an adapter that does not support vision, like `deepseek`, you get the warning `The image Slash Command is not enabled for this adapter`. But if the adapter supports it and the model does not, like `copilot` with `o3-mini`, you don't get any warning.
- No longer relevant after #2278: after I ran `make doc`, it not only added my changes from doc/usage/chat-buffer/tools.md to doc/codecompanion.txt, it also removed some special symbols. Don't know if this is intended?
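The `env` validation idea from the list above could be sketched like this. Again a Python stand-in for the plugin's Lua, and everything here is hypothetical: the `"mandatory:"` marker, the `resolve_env` helper name, and the error wording are all illustrations of the proposal, not existing CodeCompanion API:

```python
import os

def resolve_env(adapter_name, env):
    """Resolve an adapter's env table, raising a descriptive error
    when a value marked 'mandatory:' is missing from the environment."""
    resolved = {}
    for key, spec in env.items():
        if isinstance(spec, str) and spec.startswith("mandatory:"):
            var = spec.split(":", 1)[1].strip()
            value = os.environ.get(var)
            if value is None:
                # the error names both the adapter and the missing variable,
                # so the user knows exactly what to set
                raise RuntimeError(
                    f"Adapter '{adapter_name}' requires the environment "
                    f"variable {var} to be set (see the adapter docs)."
                )
            resolved[key] = value
        else:
            resolved[key] = spec
    return resolved
```

With this shape, `resolve_env("mistral", {"api_key": "mandatory: MISTRAL_API_KEY"})` would either return the resolved key or fail with a message pointing at `MISTRAL_API_KEY` and the `mistral` adapter, which is roughly the generic error message the bullet describes.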
This PR is stale because it has been open for 30 days with no activity.