ChatGPT.nvim
Groq is too fast
The output from https://console.groq.com gets cut off due to the speed of inference.
How do you set up Groq with this plugin?
By using the environment variables OPENAI_API_HOST and OPENAI_API_KEY.
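A minimal sketch of what that setup might look like, assuming Groq's OpenAI-compatible endpoint lives at `api.groq.com/openai` (verify the exact host and the plugin's expected format against the Groq and ChatGPT.nvim docs):

```shell
# Hypothetical setup: point ChatGPT.nvim at Groq's OpenAI-compatible API.
# The host value below is an assumption; check Groq's docs for the current endpoint.
export OPENAI_API_HOST="api.groq.com/openai"
export OPENAI_API_KEY="gsk_..."  # your Groq API key (elided)
```

With these exported before launching Neovim, the plugin's OpenAI-style requests are routed to Groq instead of api.openai.com.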
How have you set the model?
Personally, I ended up making a quick and dirty Docker Compose setup to run LiteLLM and hijack the requests. That allowed me to use Claude 3.5 Sonnet as well as online search models.
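A hedged sketch of that approach using a plain `docker run` instead of Compose: LiteLLM exposes an OpenAI-compatible proxy, so the plugin can be pointed at it via the same environment variables. The image name, model ID, and port here are assumptions based on LiteLLM's published defaults; check the LiteLLM docs before relying on them.

```shell
# Hypothetical sketch: run a LiteLLM proxy in Docker, translating
# OpenAI-style requests from the plugin into Anthropic API calls.
docker run -p 4000:4000 \
  -e ANTHROPIC_API_KEY="sk-ant-..." \
  ghcr.io/berriai/litellm:main-latest \
  --model anthropic/claude-3-5-sonnet-20240620 --port 4000

# Then point ChatGPT.nvim at the local proxy instead of OpenAI.
export OPENAI_API_HOST="localhost:4000"
export OPENAI_API_KEY="anything"  # LiteLLM accepts any key unless one is configured
```

The same idea extends to other providers: any model LiteLLM supports can be swapped in via `--model`, and the plugin never needs to know it isn't talking to OpenAI.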