obsidian-copilot
Implement Mistral Platform
Is it possible to add a new "Mistral Platform" as a separate option in the models dropdown? Could you specify the actual model in the settings, with the three endpoints (tiny, small, medium)?
Website docs: https://console.mistral.ai
@bfoujols I'm also looking at it, just got access to the medium model myself. Added to the roadmap!
Thanks Logan, I'm keen for this too. I believe that because Mistral implements the OpenAI API format, you can just straight up replace 'https://api.openai.com/v1' with 'https://api.mistral.ai/v1' and everything should work, from listing models to getting completions. Thanks again,
@peterlionelnewman yes, so in theory we can just put the Mistral API key in the OpenAI key field and do this override in the advanced settings.
But then it hits a CORS issue.
Claude 3 also has this issue and I used a local proxy server to get around it. Will add Mistral with this soon too.
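For anyone who wants to experiment before that lands in the plugin, the local-proxy workaround can be sketched in a few lines of Python. This is a minimal illustration, not the plugin's actual proxy; the upstream URL, the port (8787), and the idea of pointing the plugin's base-URL override at http://localhost:8787/v1 are all assumptions. It answers the browser's CORS preflight locally and forwards chat POSTs to Mistral:

```python
# Minimal local CORS proxy sketch (NOT the plugin's implementation).
# Assumptions: upstream is https://api.mistral.ai, local port 8787.
import http.server
import urllib.error
import urllib.request

UPSTREAM = "https://api.mistral.ai"
PORT = 8787  # hypothetical; any free local port works

class CorsProxy(http.server.BaseHTTPRequestHandler):
    def _cors_headers(self):
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Access-Control-Allow-Headers", "Authorization, Content-Type")
        self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")

    def do_OPTIONS(self):
        # Answer the browser's preflight locally; no upstream call needed.
        self.send_response(204)
        self._cors_headers()
        self.end_headers()

    def do_POST(self):
        # Forward the request body and auth header to the upstream unchanged.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        req = urllib.request.Request(
            UPSTREAM + self.path,
            data=body,
            headers={
                "Authorization": self.headers.get("Authorization", ""),
                "Content-Type": "application/json",
            },
            method="POST",
        )
        try:
            with urllib.request.urlopen(req) as resp:
                payload = resp.read()
                status = resp.status
        except urllib.error.HTTPError as e:
            payload = e.read()
            status = e.code
        # Relay the upstream response with CORS headers added.
        self.send_response(status)
        self._cors_headers()
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To run: http.server.ThreadingHTTPServer(("127.0.0.1", PORT), CorsProxy).serve_forever()
```

With something like this running, the base-URL override would point at the local port instead of https://api.mistral.ai/v1 directly, so the browser never sees a cross-origin response.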
On Mac devices, the command below works for me:
OLLAMA_ORIGINS="*" ollama serve
Bump!
@knuurr have you tried adding Mistral as a custom model? https://www.obsidiancopilot.com/en/docs/settings#adding-custom-models
@logancyang actually I did test it just now.
My requests end with a 422 error.
This is the setup I'm using. I filled the fields for illustration purposes, but the model is set up this way.
This is chat window:
This is from the Network tab. I think it's OK to ignore the vector errors, since I disabled vault RAG.
I tested some other configurations - either using the base URL or the full URL like /v1/chat/completions - but I can't get it working.
Not to mention that I use another desktop chat app for other purposes, Chatbox, where I also created a custom model (Mistral) and everything seems to work.
Do you maybe have any clue? Obsidian integration would be great for my use case.
+1
I can repro; even with CORS on, it doesn't work. Is Mistral's API not OpenAI-compatible as they claim? I've been using Mistral models via OpenRouter, though.
Please add the Mistral-large model option.
Confirming I tested Mistral today as an OpenAI-compatible model, trying combinations of endpoints:
https://api.mistral.ai/v1/chat/completions
https://api.mistral.ai/v1/chat
https://api.mistral.ai/v1
Model name: mistral-large-latest
Results in "Error: Connection error." in chat. Developer Tools > Network says:
Then I thought that perhaps, since I'm testing with FREE models, they are limited. All model names: https://docs.mistral.ai/getting-started/models/models_overview/ The free model listed is pixtral-12b-2409. This page mentions the correct endpoint for that free model: https://mistral.ai/news/pixtral-12b/#la-plateforme
Tried with CORS set to "true" and "false"...
Still no luck... ☹️ Going to post in the Mistral Discord
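For what it's worth, the request shape an OpenAI-compatible chat endpoint expects can be sketched as below. This is a hedged Python illustration (the `build_chat_request` helper and the placeholder key are mine, not from the plugin); the usual gotcha with the endpoint combinations above is that OpenAI-style clients take a *base* URL ending in /v1 and append /chat/completions themselves, so configuring the full path can produce a doubled endpoint:

```python
# Sketch of an OpenAI-style chat request against Mistral's endpoint.
# "mistral-large-latest" comes from the comment above; the key is a placeholder.
import json
import urllib.request

def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    # Clients typically append /chat/completions to a base URL ending in /v1.
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        "https://api.mistral.ai/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("YOUR_API_KEY", "mistral-large-latest", "Hello")
# urllib.request.urlopen(req) would send it; omitted here to avoid a live call.
```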
@jhmonroe we are using the langchain OpenAI client and it's not working with Mistral out of the box.
But this PR introduces the Mistral client from langchain, which should work (?): https://github.com/logancyang/obsidian-copilot/pull/841/files Just waiting for him to update it so it can be merged.
Exciting! Will stay tuned. We are at a very exciting moment... this week I have been experimenting with Cursor and Cline (for VS Code)... amazing how much they can do now with agentic abilities. It will be crazy when agents are in Obsidian 🤯
Being able to use the embedding one would be nice too.
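A minimal sketch of what an embeddings call could look like, assuming Mistral's OpenAI-style /v1/embeddings endpoint and the mistral-embed model (the helper name and placeholder key are hypothetical, not from the plugin):

```python
# Sketch of a Mistral embeddings request (assumed OpenAI-style endpoint).
import json
import urllib.request

def build_embeddings_request(api_key: str, texts: list[str]) -> urllib.request.Request:
    # "mistral-embed" is Mistral's embedding model; input takes a list of strings.
    payload = {"model": "mistral-embed", "input": texts}
    return urllib.request.Request(
        "https://api.mistral.ai/v1/embeddings",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_embeddings_request("YOUR_API_KEY", ["note one", "note two"])
# urllib.request.urlopen(req) would send it; omitted here to avoid a live call.
```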