feedback: Support OpenLLM model on local server
Version
1.30
Areas for Improvement
- [ ] UI/UX
- [ ] Onboarding
- [ ] Docs
- [ ] Chat
- [ ] Commands
- [ ] Context
- [ ] Response Quality
- [X] Other
What needs to be improved? Please describe how this affects the user experience and include a screenshot.
Support is needed for a local LLM server (e.g. Llama 3.1 8B) running the model with OpenLLM.
Describe the solution you'd like to see
The experimental OpenAI Compatible setting does not work properly with this setup. Please support this configuration and document how to set it up.
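For context, a minimal sketch of the local setup this request assumes: OpenLLM serving a model behind its OpenAI-compatible API, which Cody's OpenAI-compatible setting would then point at. The model tag and port below are illustrative assumptions, not confirmed values; check the OpenLLM documentation for your version.

```shell
# Serve a Llama 3.1 8B model with OpenLLM, which exposes an
# OpenAI-compatible API (model tag is hypothetical):
openllm serve llama3.1:8b

# Sanity-check the OpenAI-compatible chat endpoint
# (port 3000 is assumed here; adjust to your server's actual port):
curl http://localhost:3000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "llama3.1:8b",
        "messages": [{"role": "user", "content": "Hello"}]
      }'
```

If the curl check succeeds but Cody still fails against the same endpoint, that would help narrow the problem to the extension's OpenAI Compatible setting rather than the local server.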
Describe any alternatives that could be considered
No response
Additional context
No response