alxspiker

37 comments of alxspiker

The best! I'd like to use a local model that I know I can edit in the code myself, but I see you are adding support, so I will wait.

> ```
> (privategpt) root@alienware17B:/home/rex/privateGPT# python privateGPT.py
> llama.cpp: loading model from ./models/ggml-model-q4_0.bin
> llama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this
> llama_model_load_internal: format = 'ggml'...
> ```

If you are using the most recent branch, try again, as the max tokens have been increased from 512 to 1000. I noticed that when it produces Chinese text, it takes a...
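The 512-to-1000 figure is a cap on newly generated tokens: when the cap is hit mid-answer, the output simply cuts off, which is especially visible with scripts like Chinese that consume several tokens per character. A minimal sketch of such a cap, assuming a hypothetical `step_fn` that yields one token per call (not privateGPT's actual generation code):

```python
def generate(prompt_tokens, step_fn, max_tokens=1000, eos=None):
    """Greedy generation loop capped at max_tokens new tokens.

    Raising the cap (e.g. 512 -> 1000) lets long answers finish
    instead of being truncated mid-sentence.
    """
    out = []
    for _ in range(max_tokens):
        tok = step_fn(prompt_tokens + out)  # hypothetical one-token step
        if tok == eos:
            break
        out.append(tok)
    return out
```

With a `step_fn` that never emits the end-of-sequence token, the loop stops exactly at the cap, which is the truncation behavior described above.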

> > That's just how it is, langchain can't really handle Chinese content > > not exactly, it depends on what embedding model you chose. That actually makes a lot of sense; if you could somehow even just prompt the...

I find it more useful to train the AI against your own views so that you can prove it works. If you don't like NATO, change the text to be ANTI-NATO,...

Figured it out! I use alpaca7b, which I assume is very similar to GPT4All. To test this, I created a text document named "ai.txt" containing: ``` My name...

If you are talking about the ingest file, try editing the text document to something like "My name is Nick". Ingestion should be pretty quick on it. Made a version...

> @alxspiker feel free to pr your ingest.py to my [repo](https://github.com/su77ungr/CASALIOY). I'm trying to speed up both by utilizing qdrant and llama native models. Already getting to

> why is it a nightmare? your env? or handling docs. > > Sounds good, see you over there. I'm currently wrapping up my ingest.py, which iterates through the document dir and...

> Great work @su77ungr @alxspiker ! Feel free to PR your changes to this repo if you feel like it, or just share your results here. > > My initial...