Mistral Models from Late 2025
As an OVHcloud customer using AI Endpoints, I want to be able to use Mistral models shortly after their release. As of December 10, 2025, this means access to the Mistral 3 small models (14B, 8B, and 3B), Mistral Large 3 MoE (41B active on 675B total), Devstral 2 (123B), and Devstral Small 2 (24B), so that I can rely on OVHcloud AI Endpoints for a state-of-the-art EU AI stack in my projects.
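For context, this is roughly how I would expect to consume such a model through the OpenAI-compatible API of AI Endpoints once it is available; the base URL, environment variable, and model identifier below are placeholders I made up for illustration, not confirmed values:

```python
# Hypothetical sketch: calling a Mistral model hosted on OVHcloud AI Endpoints
# through an OpenAI-compatible API. Base URL, env var name, and model
# identifier are placeholders, not confirmed values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",  # placeholder endpoint
    api_key=os.environ["OVH_AI_ENDPOINTS_API_KEY"],               # placeholder env var
)

response = client.chat.completions.create(
    model="mistral-large-3",  # placeholder model identifier
    messages=[
        {"role": "user", "content": "Summarize the EU AI Act in one paragraph."}
    ],
)
print(response.choices[0].message.content)
```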
Hi, thanks for your feedback. We are working to streamline our process in order to shorten integration time, within the constraints of our infrastructure. FYI, concerning Devstral Small 2 (24B), OVHcloud is "not authorized to exercise any rights under this license" (our company's revenue exceeds $20 million).
Thank you for your quick feedback. I hadn't noticed the license issue.
Just in case: Devstral Small 2 (24B) is definitely released under the Apache 2.0 license, according to the press release and its dedicated Hugging Face page, and the Apache 2.0 license does not permit such a restriction.
However, Devstral 2 (123B), the largest one, is released under “a modified MIT license,” which does indeed include such a restriction:
You are not authorized to exercise any rights under this license if the global consolidated monthly revenue of your company (or that of your employer) exceeds $20 million (or its equivalent in another currency) for the preceding month. This restriction in (b) applies to the Model and any derivatives, modifications, or combined works based on it, whether provided by Mistral AI or by a third party. You may contact Mistral AI ([email protected]) to request a commercial license, which Mistral AI may grant you at its sole discretion, or choose to use the Model on Mistral AI's hosted services available at https://mistral.ai/.
It's a shame that Mistral AI is going down this path, but thank you for providing other free (as in free speech) and open-weight models.