Neronjust2017

Results 11 comments of Neronjust2017

Hi, you can download the training and testing data on this page: http://niftyweb.cs.ucl.ac.uk/challenge/index.php. Hope it helps.

Maybe I should ask Auto-GPT to analyse itself and figure out how to replace the OpenAI API and ChatGPT with local LLM models, lol.

> There's a ton of smaller ones that can run relatively efficiently.
> Glance the ones the issue author noted.

You are right. https://github.com/nomic-ai/pyllamacpp provides officially supported python bindings for...

> just replace the request to openai with your own models service in `llm_utils.py`.

But the embedding part may need to keep using OpenAI's embedding API. Can I avoid using...
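For what it's worth, a minimal sketch of what that swap in `llm_utils.py` could look like: building an OpenAI-compatible chat request but aiming it at a local model server. The base URL, model name, and helper name here are assumptions for illustration, not Auto-GPT's actual code:

```python
# Hypothetical sketch: route an OpenAI-style chat completion request
# to a local server. `base_url` and "local-model" are assumptions.
def build_chat_request(messages, base_url="http://localhost:8000/v1"):
    """Return the URL and JSON payload for an OpenAI-compatible
    chat completion call, pointed at a local endpoint."""
    url = f"{base_url}/chat/completions"
    payload = {"model": "local-model", "messages": messages}
    return url, payload
```

The payload can then be POSTed with any HTTP client; the point is only that the request shape stays OpenAI-compatible while the destination changes.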

> > There's a ton of smaller ones that can run relatively efficiently. Glance the ones the issue author noted.
>
> [GPT4All](https://github.com/nomic-ai/gpt4all) | [LLaMA](https://github.com/facebookresearch/llama)
>
> LLaMA requires 14...

> If you want a conversational model, you should probably use Vicuna (based on llama). It supports the human and assistant roles (via string prefixes).
>
> Also, with llama.cpp...
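As a rough illustration of the string-prefix roles mentioned above, a Vicuna-style prompt can be assembled like this. The exact template varies between Vicuna releases, so treat the `### Human:` / `### Assistant:` strings as an assumption:

```python
def build_vicuna_prompt(history):
    """Render (role, text) turns into one prompt string using string
    prefixes for the human and assistant roles.
    NOTE: the precise template differs across Vicuna versions; the
    "### Human:" / "### Assistant:" form here is only illustrative."""
    lines = []
    for role, text in history:
        prefix = "### Human:" if role == "human" else "### Assistant:"
        lines.append(f"{prefix} {text}")
    lines.append("### Assistant:")  # trailing cue for the model to respond
    return "\n".join(lines)
```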

> You need to use [llama.cpp](https://github.com/ggerganov/llama.cpp) (CPU-based) instead of FastChat (GPU-based). FastChat (the original) is more accurate because it operates in floating point, but it also needs much more RAM....
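The RAM gap behind that comparison comes down to bytes per weight: full-precision weights versus llama.cpp's quantized formats. A quick back-of-the-envelope, using LLaMA-7B and ignoring activation/overhead memory:

```python
def model_ram_gb(n_params_billion, bits_per_weight):
    """Rough weight-only memory footprint in GB (decimal),
    ignoring activations and runtime overhead."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

fp16_gb = model_ram_gb(7, 16)  # LLaMA-7B in float16: ~14 GB
q4_gb = model_ram_gb(7, 4)     # 4-bit quantized: ~3.5 GB
```

This matches the "requires 14..." figure quoted earlier and shows why quantized CPU inference fits on ordinary machines where float-point inference does not.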

> > i posted this in another related thread but i got autogpt mostly working a couple times but embeddings seems to be the wall im hitting (hardcoded different embedding...

> Hi, thank you for your report. After the merge, the field `isSendToMiners` will be omitted, meaning it should always be set to `false`. Please use [V2 stats endpoints](https://docs.flashbots.net/flashbots-auction/searchers/advanced/rpc-endpoint#flashbots_getuserstatsv2).

so...