Free-Auto-GPT
Run with Local LLM Models
We have tried many local models, such as LLaMA, Vicuna, OpenAssistant, and GPT4All, in their 7B versions. None of them produce results comparable to the ChatGPT API.
We would like to test new models that can be loaded in at most 16 GB of RAM, so the project remains accessible to anyone without discrimination.
Any advice on LLMs fine-tuned for instruction following with strong performance?
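As a rough guide for the 16 GB RAM budget, the weight memory of a model is approximately its parameter count times the bytes per weight, which depends on the quantization level. The sketch below is a back-of-the-envelope estimate only; it ignores runtime overhead such as the KV cache and activation buffers, so real usage will be somewhat higher:

```python
def model_ram_gib(n_params: float, bits_per_weight: int) -> float:
    """Rough RAM needed for model weights alone, in GiB.

    Ignores KV cache, activations, and runtime overhead.
    """
    return n_params * bits_per_weight / 8 / 1024**3

# A 7B-parameter model at common quantization levels:
for bits in (16, 8, 4):
    print(f"7B @ {bits}-bit: ~{model_ram_gib(7e9, bits):.1f} GiB")
```

By this estimate, a 7B model quantized to 4 bits needs only a few GiB for its weights, so even a 13B model at 4-bit quantization may fit within a 16 GB budget, while a 7B model at full 16-bit precision already approaches the limit once overhead is included.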