LiteLLM support
While I understand using GPT-4 gives the best results, the landscape changes very quickly, and some users have strict security requirements that only allow running local LLMs.
Instead of trying to support every LLM variant, it would be great if you could support LiteLLM, since it proxies many other LLMs. This would also future-proof the platform as the landscape keeps changing.
https://github.com/BerriAI/litellm
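To illustrate the idea: LiteLLM ships a proxy that exposes an OpenAI-compatible endpoint in front of many backends, so any OpenAI client can use it just by overriding the base URL. A rough sketch (the port and model name here are assumptions; check the LiteLLM docs for your setup):

```python
# Sketch, assuming a LiteLLM proxy started with something like
#   litellm --model ollama/llama2
# and listening on http://localhost:4000 (the default port varies by version).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:4000",  # the LiteLLM proxy instead of api.openai.com
    api_key="sk-anything",             # the proxy may not validate this value
)

resp = client.chat.completions.create(
    model="ollama/llama2",  # LiteLLM routes this to the local Ollama backend
    messages=[{"role": "user", "content": "Say hello"}],
)
print(resp.choices[0].message.content)
```

Since the proxy speaks the OpenAI wire format, a tool like Cursor wouldn't need any LiteLLM-specific code; overriding the base URL would be enough.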
@clayrisser If Cursor allows you to set OPENAI_API_BASE, you should be able to point it at a LiteLLM proxy.
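For anyone trying this outside of Cursor first, a minimal sketch of the env-var route (the port is an assumption, and note the variable name differs between openai SDK versions):

```python
# Sketch: point an OpenAI-compatible client at a LiteLLM proxy via env vars,
# assuming the proxy is listening on http://localhost:4000.
import os

os.environ["OPENAI_API_BASE"] = "http://localhost:4000"  # read by openai < 1.0
os.environ["OPENAI_BASE_URL"] = "http://localhost:4000"  # read by openai >= 1.0
os.environ["OPENAI_API_KEY"] = "sk-anything"             # the proxy may ignore this

from openai import OpenAI

client = OpenAI()  # picks up the base URL and key from the environment
print(client.chat.completions.create(
    model="gpt-3.5-turbo",  # the proxy maps this onto whatever backend it fronts
    messages=[{"role": "user", "content": "ping"}],
).choices[0].message.content)
```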
Thanks for mentioning us! Can we get on a call to learn how we can improve LiteLLM for you? (Sharing a link to my calendar.)
I think it is already supported.
You have to open the Cursor settings (Ctrl + Shift + P, then type "Cursor Settings").
Then, under OpenAI, open "Configure Models" and enable "Override OpenAI Base URL".
This does not work! Tested with LiteLLM, Ollama, the gpt4free API, and LM Studio.
How do I add an OpenAI key in this setup? If you know, please help me.
Tried LiteLLM. It works with the OpenAI base URL option if that URL is publicly accessible (meaning localhost won't work), but it doesn't work perfectly: chat works, but inline generation doesn't. I get something like this:
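If it's the localhost restriction biting you, one workaround (a sketch, not Cursor-specific advice) is to put a tunnel in front of the local proxy so it gets a public URL. pyngrok is just one option and the port is an assumption; any HTTP tunnel (cloudflared, localtunnel, ...) would do:

```python
# Sketch: expose a local LiteLLM proxy (assumed on localhost:4000) through an
# ngrok tunnel so a hosted service like Cursor can reach it.
from pyngrok import ngrok

tunnel = ngrok.connect(4000, "http")  # public URL forwarding to localhost:4000
print("Use this as the OpenAI base URL override:", tunnel.public_url)

ngrok.get_ngrok_process().proc.wait()  # block so the tunnel stays up
```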
interesting... 🤔

