Franci Penov
I definitely like this better than my hack. :-) But can you update the requirements.txt, and also the readme.md, with instructions on how to set up llama.cpp to work with the Python...
Never mind, I should not read stuff before I drink my coffee. I'll look into merging this tonight.
Resolved the conflicts and will merge now. This will temporarily continue to use OpenAI embeddings while I figure out a quick way to resolve this.
Yes, this is an issue we are working on right now.
If the llama-cpp Python bindings have an option to control the visibility of the processing, you can play with that. It shouldn't be that hard to add an .env variable to control...
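For what it's worth, a minimal sketch of what that could look like, assuming python-dotenv is in use; the `LLAMA_VERBOSE` and `MODEL_PATH` variable names are made up for illustration, but llama-cpp-python's `Llama` constructor does take a `verbose` flag:

```python
import os

from dotenv import load_dotenv
from llama_cpp import Llama

load_dotenv()  # pull variables from a local .env file into the environment

# LLAMA_VERBOSE is a hypothetical name; anything other than "true" disables
# llama.cpp's console logging via the verbose flag.
verbose = os.getenv("LLAMA_VERBOSE", "false").strip().lower() == "true"

# MODEL_PATH is likewise assumed to be configured in .env.
llm = Llama(model_path=os.environ["MODEL_PATH"], verbose=verbose)
```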
I'll take a PR (hint, hint :-p )
I was building something way simpler with Gradio, but hey, if this is up and running, by all means we can merge it for folks to use.
We have a Discord, but no Wiki (yet). Ping me on Twitter for the link (we are working on fully opening it soon, just still organizing stuff).
I'd love this. There are two approaches: 1. You take your extension code and put it in a subfolder in tools with a simple README on how to install and run it. :-)...
It's simple. You only need like twenty steps or so. :-)
1. Copy extensions/pinecone_store.py and change the logic to use your vector db
2. Copy the code that replaces chroma...
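As a rough illustration of step 1 only: I don't have the actual interface of extensions/pinecone_store.py in front of me, so the class and method names below (`add_texts`, `similarity_search`) are assumptions modeled on common vector-store wrappers, not the project's real API.

```python
from typing import List, Tuple


class MyVectorStore:
    """Skeleton for a drop-in replacement of the Pinecone-backed store.

    Method names are hypothetical; match them to whatever the real
    pinecone_store.py exposes before wiring this in.
    """

    def __init__(self, collection_name: str):
        self.collection_name = collection_name
        # TODO: open a connection / index handle to your vector db here

    def add_texts(self, texts: List[str], embeddings: List[List[float]]) -> None:
        # TODO: upsert (embedding, text) pairs into your vector db
        raise NotImplementedError

    def similarity_search(
        self, query_embedding: List[float], k: int = 4
    ) -> List[Tuple[str, float]]:
        # TODO: return the k nearest texts with their similarity scores
        raise NotImplementedError
```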