Codestral (Mistral code suggestion)
Check for existing issues
- [X] Completed
Describe the feature
Support Codestral from Mistral AI as an equivalent of OpenAI.
Codestral supports infill, and VS Code plugins are already available.
https://mistral.ai/news/codestral/
Thanks!
If applicable, add mockups / screenshots to help present your vision of the feature
No response
Additionally, it'd be amazing if we could use this for inline_completions.
Don't be too excited. Codestral is terrible at doing FIM. I have switched to asking Sonnet 3.5 to just fill in the marked part, and it does the job 10x better, even though it is a chat model and not tuned for FIM at all. Codestral can't even match the parentheses right.
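For what it's worth, here is a minimal sketch of that "ask a chat model to fill in the marked part" approach using the Anthropic Python SDK. The marker token, prompt wording, and model ID are my own assumptions for illustration, not the exact setup described above:

# Hypothetical sketch of using a chat model for fill-in-the-middle.
# The <FILL_HERE> marker, prompt wording, and model ID are illustrative assumptions.
import anthropic

def fill_marked_region(prefix: str, suffix: str) -> str:
    """Ask a chat model to return only the code that belongs between prefix and suffix."""
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    prompt = (
        "Fill in the code at the <FILL_HERE> marker. "
        "Reply with only the missing code, no explanation.\n\n"
        f"{prefix}<FILL_HERE>{suffix}"
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=256,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# Example: complete the body of a small function.
print(fill_marked_region("def add(a, b):\n    ", "\n\nprint(add(1, 2))\n"))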
I was able to use the Codestral model with private-gpt (a fork of zylon-ai's private-gpt) in chat mode, running in Docker with NVIDIA GPU support. So it would be cool if we could get it to work with Zed locally.
I did a basic implementation that works: https://github.com/zed-industries/zed/pull/15573
There are a few outstanding questions, as I don't know this code base very well.
FTR, my settings for Codestral:
{
  "language_models": {
    "openai": {
      "version": "1",
      "api_url": "https://codestral.mistral.ai/v1",
      "available_models": [
        { "custom": { "name": "codestral-latest", "max_tokens": 131072 } }
      ]
    }
  },
  "assistant": {
    "version": "2",
    "default_model": {
      "provider": "openai",
      "model": "codestral-latest"
    }
  },
  ...
Note the different endpoint from regular mistral models.
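To make the endpoint difference concrete: the settings above point Zed's OpenAI-compatible provider at https://codestral.mistral.ai/v1 rather than the regular https://api.mistral.ai/v1. Here is a rough sketch of an equivalent request outside Zed, assuming the Codestral endpoint accepts OpenAI-style chat requests (as the config above implies) and that a CODESTRAL_API_KEY environment variable holds a Codestral key:

# Rough sketch: call codestral-latest through the dedicated Codestral endpoint.
# Assumes OpenAI-style /chat/completions semantics, as the Zed settings above imply.
import os
import requests

API_URL = "https://codestral.mistral.ai/v1"  # note: not api.mistral.ai

response = requests.post(
    f"{API_URL}/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['CODESTRAL_API_KEY']}"},
    json={
        "model": "codestral-latest",
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])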
Can you also use codestral as an Ollama pull?
I don't have the hardware.
Codestral is too large for my machine. I'm on an M1 Mac mini with 16GB of RAM. However, other, smaller Ollama pulls work.
Codestral Fill In The Middle (FIM) works like a charm in VS Code with the continue.dev plugin. Local Ollama models such as starcoder are also light and interesting.
Currently Zed does not support other models or Ollama models for code completion. Is this feature planned, or does it depend on commercial agreements with AI providers?
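For anyone curious what FIM with a local Ollama model looks like outside an editor, here is a rough sketch. The <fim_prefix>/<fim_suffix>/<fim_middle> sentinels follow the StarCoder-style convention; other models use different tokens, so treat the exact prompt format as an assumption to verify against the model card:

# Sketch: fill-in-the-middle against a local Ollama server (default port 11434).
# The FIM sentinel tokens below follow the StarCoder convention and are an
# assumption; check the model card for the tokens your model expects.
import requests

prefix = "def fibonacci(n):\n    "
suffix = "\n\nprint(fibonacci(10))\n"

response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "starcoder2",
        "prompt": f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>",
        "raw": True,      # bypass the chat template so the FIM tokens pass through
        "stream": False,
    },
    timeout=60,
)
print(response.json()["response"])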
I have found that any model that can be pulled on Ollama works on Zed. The limitation is the user's computer: memory and processor. Codestral is slow on my machine because I only have 16GB on my M1. Has anyone tried building a Linux AI server with tons of RAM and a GPU that you can remote into using the gateway method on Zed?
@kanelee they do work for the assistant, but how do you use a custom code completion (Copilot-style) model?
That I do not know. I am just learning how to use Zed. Have you tried the online assistant?
There is custom code completion using Supermaven + Copilot. You can specify this in the settings JSON file.
This is not "custom"; they are the only options available in Zed at the moment. My point is to use Codestral for code completion.
My apologies for the confusion. I meant to say that you can "customize" your assistant to use either Copilot or Supermaven. I know that doesn't help you. Have you put in a request on GitHub?
Well, I believe it is actually the main topic of the current issue. Check the title :)
Wow. Been making so many faux pas with my email responses today. Sorry 😂
I am also interested in this feature, to run FIM with a local model. Qwen2.5-Coder also does a good job at inline completion.
Apologies for resurrecting this issue discussion. Has there been any movement on adding Mistral alongside OpenAI and friends? #15573 seems to have done a lot of the heavy lifting on this already.
A new version of Codestral was just released and it's much better than the previous one.
I think Codestral and its inline completions should be a first class citizen in Zed.
@maxdeviant Not sure you should close this one, as Codestral "code suggestion" is not exactly the same thing as Mistral in the assistant panel. It is a "fill in the middle" alternative to Supermaven for autocompletion.
See https://docs.mistral.ai/capabilities/code_generation/#fill-in-the-middle-endpoint
It is a completion provider implemented in Continue.dev for VS Code, for example.
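For reference, a minimal sketch of what a request to that fill-in-the-middle endpoint looks like, based on my reading of the linked docs; double-check the path and field names against the documentation before relying on them:

# Sketch of Codestral's dedicated fill-in-the-middle endpoint, per the linked docs.
# The path and field names are my reading of those docs; verify before use.
import os
import requests

response = requests.post(
    "https://codestral.mistral.ai/v1/fim/completions",
    headers={"Authorization": f"Bearer {os.environ['CODESTRAL_API_KEY']}"},
    json={
        "model": "codestral-latest",
        "prompt": "def is_even(n):\n    ",    # text before the cursor
        "suffix": "\n\nprint(is_even(4))\n",  # text after the cursor
        "max_tokens": 64,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # inspect the response shape; the completion is in the choices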
Hello, so basically you're saying that we should be able to add edit_prediction_provider: "mistral" in the future?
@robikovacs Yes, Codestral to be precise
@robikovacs Can we reopen this issue? By the way, despite previous changes, Mistral is still not available in the assistant panel.
If you would like to see support for Mistral as a completion provider, please open a new issue.