
[Feature]: local model support

Open · danil-iglu opened this issue 2 years ago · 10 comments

Description

In some organizations it is prohibited to send code to third parties.

Suggested Solution

Support for a dockerized Llama 2 model running locally?

Alternatives

No response

Additional Context

No response

danil-iglu avatar Aug 11 '23 10:08 danil-iglu

Is there any specific (and stable) setup you have in mind for running the model in Docker? If so, I'll play around with it when time allows and try to get opencommit running against Llama 2.

BR

malpou avatar Sep 07 '23 09:09 malpou

@malpou I'm constantly thinking about adding local Llama support; this would be just killer.

I imagine, e.g., setting `oco config set OCO_MODEL=llama_2`, and then opencommit switches to local Llama out of the box. If `OCO_MODEL=gpt-4`, then we continue to call the OpenAI API.

I suggest taking the smartest and most lightweight model (so download time isn't more than ~20-30 sec). Since the package is installed and updated globally only once every 2-3 months, waiting 30 sec once in a while is OK (IMO).
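
Something like this, as a hypothetical sketch of that routing (not actual opencommit internals; both backend functions are illustrative stubs):

```ts
// Hypothetical sketch: llama-flavored OCO_MODEL values go to a local backend,
// everything else keeps using the OpenAI API. Both backends are stubs.

async function generateWithLocalLlama(diff: string): Promise<string> {
  // ...run local inference here; no code ever leaves the machine
  return `chore: local llama placeholder for ${diff.length}-char diff`;
}

async function generateWithOpenAI(diff: string, model: string): Promise<string> {
  // ...existing OpenAI API call path
  return `chore: openai (${model}) placeholder for ${diff.length}-char diff`;
}

export async function generateCommitMessage(diff: string): Promise<string> {
  const model = process.env.OCO_MODEL ?? 'gpt-4';
  return model.startsWith('llama')
    ? generateWithLocalLlama(diff)
    : generateWithOpenAI(diff, model);
}
```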

di-sukharev avatar Sep 07 '23 09:09 di-sukharev

Yes that's exactly my thought.

Haven't gotten around to playing with Llama 2 yet. Is there a standard way to run it in Docker? As far as I can see, there are just multiple smaller projects. If you can point me in the right direction on what we'd like to use for Llama locally, then I can do the rest of the implementation.

malpou avatar Sep 07 '23 09:09 malpou

I don't know of any setup, need to google it.

di-sukharev avatar Sep 07 '23 12:09 di-sukharev

@di-sukharev I've found this project that I'll try to get running, and then see how it is to interface with:

https://github.com/go-skynet/LocalAI
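
LocalAI serves an OpenAI-compatible REST API, so in theory opencommit would only need a different base URL. A hedged sketch, assuming the project's default port 8080; the model name is illustrative and must match whatever your LocalAI instance has loaded:

```ts
// Hedged sketch: call LocalAI's OpenAI-compatible chat completions endpoint.
async function chatWithLocalAI(prompt: string): Promise<string> {
  const res = await fetch('http://localhost:8080/v1/chat/completions', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama-2-7b-chat', // illustrative; depends on your local setup
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```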

malpou avatar Sep 07 '23 13:09 malpou

I would love to see local model support for this!

Edit: I've seen Simon Willison play around a ton with local models, and although I don't have anything specific off the top of my head, I expect he'd have helpful blog posts to guide this feature

Edit 2: found this in my stars https://github.com/nat/openplayground for playing with LLMs locally...

pypeaday avatar Sep 28 '23 13:09 pypeaday

Me too!

Recently I came across the Ollama implementation, which might be helpful for you: https://ollama.ai/.

Edit: After checking your PR draft, LocalAI seems to be more robust, or at least seems to have a bigger community, so it's currently a good idea to keep that. If that doesn't fix your issue, this is a good alternative option to try.
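
For reference, a minimal sketch of Ollama's local HTTP API: it listens on localhost:11434, and `/api/generate` returns a single JSON object when `stream` is false. The model name assumes `ollama pull llama2` has already been run:

```ts
// Minimal sketch: one-shot generation against a local Ollama server.
async function generateWithOllama(prompt: string): Promise<string> {
  const res = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ model: 'llama2', prompt, stream: false }),
  });
  const data = await res.json();
  return data.response;
}
```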

Breinich avatar Oct 13 '23 21:10 Breinich

Stale issue message

github-actions[bot] avatar Nov 23 '23 21:11 github-actions[bot]

We now support Ollama.

di-sukharev avatar Feb 28 '24 06:02 di-sukharev

@di-sukharev I tried with the AI_PROVIDER flag and without any OpenAI key set, but the application errors out saying I need to have the key set. If I set the key to something arbitrary (e.g., sk-blahblahblah...) it still seems to try to call out to OpenAI. Using v3.0.11.

(Update: I see the issue. The documentation needs to be updated to state that you need to set `OCO_AI_PROVIDER` to `ollama` in the configuration file for it to work, not set an `AI_PROVIDER` env var.)
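
For anyone else hitting this, a minimal example of the fix described above; the `~/.opencommit` path is assumed here as opencommit's default global config location:

```
# ~/.opencommit
OCO_AI_PROVIDER=ollama
```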

gudlyf avatar Mar 04 '24 22:03 gudlyf