neoai.nvim
Idea: Add option to use a local model like GPT4ALL
Thank you for the great plugin!
The option to use a local model like GPT4ALL instead of GPT-4 would make experimenting with prompts more cost-effective.
See codeexplain.nvim for an example of a plugin that does this.
This would be a great addition to the plugin :+1:
It would be better if the model were started externally and this plugin only communicated with it; codeexplain.nvim, by contrast, runs the model itself.
hfcc.nvim has an interface to a hosted Open Assistant model on Hugging Face. Its feature set is not as robust as this plugin's, so it would be great if the Hugging Face chat models could be leveraged from here instead.
So there is a way to use llama.cpp with the OpenAI API... if one could configure a different URI for the OpenAI endpoint, we would be in business: https://www.reddit.com/r/LocalLLaMA/comments/15ak5k4/short_guide_to_hosting_your_own_llamacpp_openai/
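To illustrate the idea from that guide: a llama.cpp instance hosted behind an OpenAI-compatible server exposes the same /v1/chat/completions route, so the request body keeps the OpenAI chat format and only the base URL changes. A hedged sketch; the localhost port and model name below are assumptions that depend on how the local server is started:

-- Sketch only: the same OpenAI-style payload can be POSTed to either endpoint.
local endpoints = {
    openai = "https://api.openai.com/v1/chat/completions",
    llama_cpp = "http://localhost:8081/v1/chat/completions", -- assumed port
}

local body = vim.json.encode({
    model = "local-model", -- placeholder; a local server typically ignores this field
    messages = { { role = "user", content = "Hello" } },
})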
I came here looking to see if this plugin could be used with llama.cpp.
Perhaps making this URL in openai.lua configurable would just work?
utils.exec("curl", {
    "--silent",
    "--show-error",
    "--no-buffer",
    "https://api.openai.com/v1/chat/completions", -- the endpoint is hard-coded here
    "-H",
    "Content-Type: application/json",
    "-H",
    "Authorization: Bearer " .. api_key,
    "-d",
    vim.json.encode(data),
})
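A minimal sketch of that change, assuming a hypothetical `openai_url` option (the option name and the `config.options` lookup are illustrative, not the plugin's actual config shape):

-- Sketch only: fall back to the current hard-coded OpenAI endpoint
-- when no custom URL is configured.
local url = (config.options and config.options.openai_url)
    or "https://api.openai.com/v1/chat/completions"

utils.exec("curl", {
    "--silent",
    "--show-error",
    "--no-buffer",
    url, -- configurable instead of hard-coded
    "-H",
    "Content-Type: application/json",
    "-H",
    "Authorization: Bearer " .. api_key,
    "-d",
    vim.json.encode(data),
})

A user could then set something like `openai_url = "http://localhost:8081/v1/chat/completions"` to target a local llama.cpp server while the default behavior stays unchanged; local servers generally ignore the Authorization header, so sending it unconditionally should be harmless.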
Agreed, starting the model externally and having this plugin only communicate with it is the better approach.