Patrick Oliver Bustamante
I've tried it; the CPU version is painfully slow, but if you're lucky enough to get the GPU session, you can run it with ease. @SDpedrovilaplana explained the steps...
This project is open source, so you can easily check the code to see whether it sends any of your data elsewhere. As for the Hugging Face part, it's just required when you download the...
> @psychopatz: you might want to patch [`private_gpt/settings/settings.py:117`](https://github.com/zylon-ai/private-gpt/blob/main/private_gpt/settings/settings.py#L117) to add your prompt settings "llama3". Then update your `settings.yaml`, section `prompt_style`, accordingly.

Thanks, I've implemented the patch already; the problem...
Just deleting the local_data folder did the trick for me.
Of course it will not read images, since it just extracts the text from the document files. Your best bet is to OCR the images before ingesting them; PrivateGPT doesn't support...
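A minimal sketch of the OCR-before-ingest idea. The thread doesn't name a tool, so pytesseract is my suggestion (it wraps the Tesseract binary, which must be installed separately); the folder layout and function names are mine.

```python
"""Convert a folder of images to .txt files so PrivateGPT can ingest them."""
from pathlib import Path

# Extensions we treat as images (assumption: extend as needed).
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".tiff", ".bmp"}


def is_image(path: Path) -> bool:
    """True if the file looks like an image we should OCR."""
    return path.suffix.lower() in IMAGE_EXTS


def ocr_folder(folder: str) -> None:
    """OCR every image in `folder` into a sibling .txt file."""
    # Lazy imports so the optional OCR dependencies are only needed here.
    from PIL import Image  # pip install pillow
    import pytesseract     # pip install pytesseract (+ tesseract binary)

    for path in Path(folder).iterdir():
        if is_image(path):
            text = pytesseract.image_to_string(Image.open(path))
            path.with_suffix(".txt").write_text(text)
```

The resulting .txt files can then be dropped into PrivateGPT's normal ingestion flow.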
If your backend is written in Python (e.g. FastAPI), you can use their SDK, put the contents into a list, and loop over it, or you can do the old...
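A rough sketch of that loop, assuming PrivateGPT's REST API on the default `localhost:8001` and the `/v1/ingest/text` body shape from the public docs (verify both against your own Swagger page). I'm using the standard library here instead of the official SDK; the helper names are mine.

```python
"""Loop over a folder of text files and ingest each one into PrivateGPT."""
import json
import urllib.request
from pathlib import Path

API_BASE = "http://localhost:8001"  # assumption: default PrivateGPT port


def ingest_payload(name: str, text: str) -> dict:
    """Body for POST /v1/ingest/text, per the PrivateGPT API docs."""
    return {"file_name": name, "text": text}


def post_json(url: str, payload: dict) -> dict:
    """Small stdlib helper: POST a JSON body and decode the JSON reply."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def ingest_folder(folder: str) -> list[str]:
    """Ingest every .txt file in `folder`; return the new doc ids."""
    doc_ids: list[str] = []
    for path in sorted(Path(folder).glob("*.txt")):
        out = post_json(f"{API_BASE}/v1/ingest/text",
                        ingest_payload(path.name, path.read_text()))
        doc_ids.extend(d["doc_id"] for d in out["data"])
    return doc_ids
```

The "old way" alternative is just shelling out to curl per file, which the Swagger page can generate for you.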
You can check the API usage format on the [Swagger endpoint](http://localhost:8001/docs); just import it into Postman and then convert it to curl.
If you really want it on Windows only, just use WSL for that.
It's currently implemented using the tags function, but yeah, it can be improved.
It is possible by auto-ingesting the user's conversation using the [ingest API](https://docs.privategpt.dev/api-reference/api-reference/ingestion/ingest-text) and grabbing its doc id to store in your database. When chatting, just pass the [context_filter](https://docs.privategpt.dev/api-reference/api-reference/contextual-completions/chat-completion-v-1-chat-completions-post-stream) given...
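The flow above could be sketched like this, assuming the endpoint paths and body fields from the linked API reference (in particular the `docs_ids` key inside `context_filter`; double-check these against your Swagger page). The function names and the `localhost:8001` base URL are my assumptions.

```python
"""Sketch: ingest a conversation turn, keep its doc_id, filter chat to it."""
import json
import urllib.request

API_BASE = "http://localhost:8001"  # assumption: default PrivateGPT port


def _post(path: str, payload: dict) -> dict:
    """POST a JSON body to the PrivateGPT API and decode the JSON reply."""
    req = urllib.request.Request(
        API_BASE + path,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def ingest_turn(user_id: str, text: str) -> list[str]:
    """Ingest one conversation turn; store the returned ids in your DB."""
    out = _post("/v1/ingest/text",
                {"file_name": f"{user_id}-history", "text": text})
    return [d["doc_id"] for d in out["data"]]


def chat_payload(message: str, doc_ids: list[str]) -> dict:
    """Chat-completion body restricted to the user's stored documents."""
    return {
        "messages": [{"role": "user", "content": message}],
        "use_context": True,
        # context_filter limits retrieval to these ingested docs.
        "context_filter": {"docs_ids": doc_ids},
    }
```

On each new message you would look up the user's stored doc ids, build `chat_payload`, and POST it to the chat-completions endpoint.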