gpt-engineer
GPT4ALL support or open source models
OpenAI's GPT-3.5 model breaks frequently and is generally low quality.
Falcon, Vicuna, Hermes, and other open-source models should be supported: they're free, moving away from paid closed-source models is good practice, and it would open the application to a huge user base that wants free access to these tools.
Since gpt4all now has a local server mode that emulates OpenAI's API, shouldn't we be able to just override ai.py's calls with Python calls to the gpt4all local server instead of OpenAI? Or is it more complicated than that?
I think you are on the right track @teddybear082
See #63 for abstraction of the ai.py file, which allows for use of any model you desire. It just has to be implemented.
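For reference, a minimal sketch of what that redirection could look like. This assumes the legacy openai-python (pre-1.0) client that ai.py currently uses and GPT4All's documented default port of 4891; the model name is a placeholder, and none of this is implemented in gpt-engineer yet.

```python
# Sketch only: point the legacy openai-python (<1.0) client at a local
# GPT4All server instead of api.openai.com.
import openai

# Assumption: the GPT4All API server is running locally on port 4891
# (its documented default); change host/port to match your setup.
openai.api_base = "http://localhost:4891/v1"
openai.api_key = "not-needed"  # local servers typically ignore the key

response = openai.ChatCompletion.create(
    model="gpt4all-model-name",  # placeholder: whatever model your server exposes
    messages=[{"role": "user", "content": "Hello from gpt-engineer"}],
)
print(response["choices"][0]["message"]["content"])
```

If the server really does emulate the OpenAI API, the rest of ai.py's ChatCompletion logic should work unchanged; the open question is mostly prompt quality and context length of the local model.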
gpt-llama claims to be a drop-in replacement for ChatGPT applications. So is there a way to change the API URL to localhost?
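If the project is still on the legacy openai-python client, the base URL can usually be overridden without touching ai.py at all, since that client reads the OPENAI_API_BASE environment variable at import time (in the 0.x versions). A hedged sketch; the port and key below are placeholders for whatever your local gpt-llama (or other drop-in) server actually uses:

```python
# Sketch only: override the API base URL via environment variables so the
# existing openai calls in ai.py go to a local drop-in server.
import os

# Assumption: a drop-in server is listening on localhost:8000; substitute
# the host/port your server actually prints on startup.
os.environ["OPENAI_API_BASE"] = "http://localhost:8000/v1"
os.environ["OPENAI_API_KEY"] = "not-needed"  # most local servers ignore it

# Import after setting the variables; the legacy client reads them at import time.
import openai
print(openai.api_base)  # should show the local URL
```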
Maybe this should be a discussion rather than an issue; feel free to start one if you are still interested.