Mark Schmidt
I'm running it on x64 right now.
To be clear, the x86 architecture for gpt4all should really be called x86/x64, since it supports either. But none of the gpt4all libraries are required to run inference with gpt4all. They...
Ooba's UI is a lot of overhead just to send and receive requests from a different model. AFAIK ooba supports two types of models: HuggingFace models and GGML (llama.cpp) models...
> well in the meantime i think i'll fork it to use llama instead, i got gpt4 access but i like the idea of being able to let it run...
With minimal finetuning, LLaMA can easily do **better** (yes, better\*) than GPT-4. Finetuning goes a long way, and LLaMA is a very capable base model. The Vicuna dataset (ShareGPT) is...
> @alkeryn I managed to perform this using sentence_transformers library. This appears to work for Vicuna and pinecone, but you have to change your index dimensions from 1536 to 768...
Right, it would have to be re-implemented specifically for Auto-GPT; I just thought I'd point out that it's a future possibility. I suppose local embeddings are a separate issue...
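For reference, here's a minimal sketch of the local-embeddings swap described above, using the sentence_transformers library. The model name (`all-mpnet-base-v2`) and the sample texts are illustrative assumptions, but the 768-dimensional output is exactly what forces the Pinecone index change from 1536:

```python
# Minimal sketch: local embeddings via sentence_transformers instead of
# OpenAI's text-embedding-ada-002. The model choice is an assumption;
# any 768-dim sentence-transformers model illustrates the same point.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-mpnet-base-v2")  # outputs 768-dim vectors

texts = [
    "Auto-GPT stores memories as embedding vectors.",
    "A local model can replace the OpenAI embedding endpoint.",
]
embeddings = model.encode(texts)
print(embeddings.shape)  # (2, 768) -- ada-002 gives 1536 dims,
                         # hence the Pinecone index dimension change
```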
Neither of us can read @Torantulino's mind. But if you're right and people want this functionality as much as I suspect they do, either a wave of enthusiastic support for...
> Been following this thread while I implement local models in babyagi, but just wanted to pop in and voice my desire to see local models in this project, too....
OpenAI's node.js library has a [basePath option](https://github.com/openai/openai-node/blob/master/configuration.ts#L70). Here are some issues from the openai-node repo discussing the use of this option with proxies and alternative endpoints:
https://github.com/openai/openai-node/issues/85#issuecomment-1477432919
https://github.com/openai/openai-node/issues/53#issuecomment-1426254063
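Since Auto-GPT itself is Python, the equivalent knob in the openai Python package (pre-1.0 API) is `openai.api_base`. A minimal sketch, assuming a local OpenAI-compatible server; the URL, port, and model name are illustrative assumptions, not anything Auto-GPT ships with:

```python
# Minimal sketch: redirect the openai Python client (pre-1.0 API) to a
# local OpenAI-compatible endpoint, mirroring what basePath does in
# openai-node. The URL, port, and model name below are assumptions.
import openai

openai.api_base = "http://localhost:8000/v1"  # assumed local server
openai.api_key = "not-needed-locally"         # most local servers ignore it

response = openai.ChatCompletion.create(
    model="local-model",  # whatever name the local server exposes
    messages=[{"role": "user", "content": "Hello from a local endpoint"}],
)
print(response["choices"][0]["message"]["content"])
```

If a change like this were pushed down into a single configuration point rather than scattered across call sites, swapping OpenAI for a local backend becomes a one-line change, which is the appeal of the basePath-style option linked above.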