Llama 3.1 8B support
Is your feature request related to a problem? Please describe.
I would very much like native Llama 3.1 8B (or ideally 70B) support.
Describe the solution you'd like
An entry for Llama 3.1 in the model list that I can toggle on.
Additional context
Big fan of Cursor, thanks for everything 🫶
Maybe add Ollama integration to make this happen.
Using the OpenAI base URL override, I'm getting this error: "`llama-3.1-8b-instant` must be less than or equal to 8000".
Is there any way to cap the max context at 8000 tokens?
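One way to check whether the 8000 limit is on `max_tokens` is to reproduce the request outside Cursor against the same endpoint and cap the parameter explicitly. A minimal sketch, assuming the override points at a Groq-style OpenAI-compatible endpoint serving `llama-3.1-8b-instant` (the `base_url` and `api_key` below are placeholders, not anything Cursor sets for you):

```python
from openai import OpenAI

# Placeholder endpoint/key: substitute whatever you configured in Cursor's
# OpenAI base URL override (assumed here to be Groq).
client = OpenAI(
    base_url="https://api.groq.com/openai/v1",
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="llama-3.1-8b-instant",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=8000,  # explicitly cap completion tokens at the provider's 8000 limit
)
print(resp.choices[0].message.content)
```

If this request succeeds while Cursor's fails, the error is likely Cursor sending a larger `max_tokens` than the provider allows, which only Cursor's side can change.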
I think Ollama support would be ideal: it's fast because it runs locally, and it also helps with privacy.
Ollama support would be great. It already runs on localhost:11434 by default, I think, so it would be simple to wire that in as a configurable endpoint URL, as in the sketch below.
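For what it's worth, Ollama exposes an OpenAI-compatible API under `/v1` on that port, so any existing OpenAI client can talk to it with just a `base_url` override. A minimal sketch in Python (the model tag assumes `llama3.1:8b` has already been pulled locally):

```python
from openai import OpenAI

# Ollama serves an OpenAI-compatible API at localhost:11434/v1,
# so an OpenAI client only needs its base_url pointed there.
client = OpenAI(
    base_url="http://localhost:11434/v1",
    api_key="ollama",  # Ollama ignores the key, but the client requires a non-empty string
)

resp = client.chat.completions.create(
    model="llama3.1:8b",  # assumes `ollama pull llama3.1:8b` has been run
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)
```

So from the client side, Cursor supporting this would mostly mean letting users point the existing OpenAI override at localhost.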
If everybody uses Llama locally, who will pay for Cursor's subscription?