
Support a local model such as gpt4all, dolly-v2 or ChatGLM

Open hatkyinc2 opened this issue 2 years ago • 2 comments

Why: Users don't want to send their code to OpenAI.

What: Allow users to connect to different models, perhaps by asking them to host the model themselves and provide an API.

For example

Support gpt4all https://github.com/nomic-ai/gpt4all

https://python.langchain.com/en/latest/modules/models/llms/integrations/gpt4all.html

This would allow use on "secret"/proprietary code with fewer security concerns.
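One way to keep the rest of the tool model-agnostic is to hide the completion call behind a small interface, so a locally hosted model (gpt4all, dolly-v2, etc.) can be swapped in without touching the agent logic. A minimal sketch of that idea, with a stub standing in for the local model (the class and method names here are illustrative, not part of any existing API):

```python
from abc import ABC, abstractmethod

class CompletionBackend(ABC):
    """Abstract interface so the agent does not depend on one vendor."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...

class LocalBackend(CompletionBackend):
    """Stand-in for a user-hosted model exposed through a local API."""

    def complete(self, prompt: str) -> str:
        # A real implementation would call the local model here
        # (e.g. via LangChain's GPT4All integration); this stub just
        # echoes the prompt to show the shape of the interface.
        return f"[local] {prompt}"

def run_task(backend: CompletionBackend, prompt: str) -> str:
    # The agent only sees the interface, never the concrete model.
    return backend.complete(prompt)

print(run_task(LocalBackend(), "summarize this file"))
```

An OpenAI-backed implementation of the same interface could live alongside it, letting users pick the backend in configuration.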

hatkyinc2 avatar Apr 07 '23 03:04 hatkyinc2

Does LLaMA only have a 2k token limit?

Charuru avatar Apr 07 '23 10:04 Charuru

Yeah, from a skim it seems like 2,048 tokens, which might make this harder to do, but if we can work within that, the rest would work for sure. Also, the cost is close to zero and speed should be high: no networking and dedicated local resources, so we could run an unlimited number of smaller queries.

It's aspirational. I don't know if something else will come along that is open to running locally. The goal is to run something local so as not to send source code to central companies. I don't care what the final model ends up being, as long as it can do the job.

hatkyinc2 avatar Apr 08 '23 01:04 hatkyinc2
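Working around a 2,048-token context by running many smaller queries means splitting input to fit the window while reserving room for the reply. A rough sketch of that budgeting, approximating tokens by whitespace-split words (real code would use the model's tokenizer; the limit and reserve values here are assumptions):

```python
def chunk_for_context(words, limit=2048, reserve=512):
    """Split a word list into chunks that fit a model context window.

    `limit` is the assumed context size; `reserve` leaves room for the
    model's reply, since prompt and completion share the same window.
    """
    budget = limit - reserve
    chunks, current = [], []
    for w in words:
        if len(current) == budget:
            chunks.append(current)
            current = []
        current.append(w)
    if current:
        chunks.append(current)
    return chunks

# A 4,000-word input splits into three queries under a 1,536-word budget.
parts = chunk_for_context([f"tok{i}" for i in range(4000)])
print(len(parts), [len(p) for p in parts])
```

Since local inference has no per-query cost, issuing one query per chunk and merging the results is cheap, which is what makes the "unlimited smaller queries" approach viable.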