
Idea: Add option to use a local model like GPT4ALL

Open · dhazel opened this issue 1 year ago · 5 comments

Thank you for the great plugin!

The option to use a local model like GPT4ALL instead of GPT-4 could make prompts more cost-effective to play with.

See codeexplain.nvim for an example of a plugin that does this.

dhazel · May 25 '23 15:05

This would be a great addition for the plugin 👍

It would be better if the model were started externally and this plugin only communicated with it; codeexplain.nvim runs the model itself.

gerazov · May 27 '23 07:05

hfcc.nvim has an interface to a hosted Open Assistant model on Hugging Face. It doesn't have as robust a feature set, so it would be great if Hugging Face chat could be leveraged with this plugin.

walkabout21 · Jun 25 '23 13:06

So there is a way to use llama.cpp with the OpenAI API: if one could set a different URI for the OpenAI endpoint, we would be in business. See https://www.reddit.com/r/LocalLLaMA/comments/15ak5k4/short_guide_to_hosting_your_own_llamacpp_openai/
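
To make the idea concrete, here is a minimal sketch (all names are placeholders: it assumes a llama.cpp server is already running locally behind an OpenAI-compatible API, e.g. at http://localhost:8000 as in the linked guide). The request is the same shape as one against api.openai.com; only the URL changes:

    -- Sketch only: assumes a local llama.cpp server exposing an
    -- OpenAI-compatible /v1/chat/completions endpoint.
    local endpoint = "http://localhost:8000/v1/chat/completions"
    local body = vim.json.encode({
        model = "local-model", -- placeholder; many local servers ignore this field
        messages = { { role = "user", content = "Hello!" } },
    })
    -- Identical request shape to OpenAI's API; only the host differs.
    local response = vim.fn.system({
        "curl", "--silent", "--show-error", endpoint,
        "-H", "Content-Type: application/json",
        "-H", "Authorization: Bearer dummy", -- local servers typically don't check the key
        "-d", body,
    })
    print(response)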

thegatsbylofiexperience · Aug 22 '23 07:08

I came here looking to see if this plugin could be used with llama.cpp.

Perhaps making this URL in openai.lua configurable would just work?

utils.exec("curl", {
        "--silent",
        "--show-error",
        "--no-buffer",
        -- hard-coded endpoint; making this configurable would allow pointing
        -- the plugin at a local, OpenAI-compatible server
        "https://api.openai.com/v1/chat/completions",
        "-H",
        "Content-Type: application/json",
        "-H",
        "Authorization: Bearer " .. api_key,
        "-d",
        vim.json.encode(data),
    })
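
A minimal sketch of that change, assuming a hypothetical config.openai_endpoint option (the option name is an assumption, not something neoai.nvim exposes today) and the same surrounding context in openai.lua:

    -- Hypothetical sketch: read the endpoint from user config, falling back
    -- to the official OpenAI URL, so a local llama.cpp server can be used.
    local url = config.openai_endpoint
        or "https://api.openai.com/v1/chat/completions"
    utils.exec("curl", {
        "--silent",
        "--show-error",
        "--no-buffer",
        url,
        "-H",
        "Content-Type: application/json",
        "-H",
        "Authorization: Bearer " .. api_key,
        "-d",
        vim.json.encode(data),
    })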

shnee · Aug 30 '23 18:08

> This would be a great addition for the plugin 👍
>
> It would be better if the model were started externally and this plugin only communicated with it; codeexplain.nvim runs the model itself.

Agreed.

Budali11 · Jul 09 '24 07:07