bmaltais
Do you mean by updating an already indexed document, or by adding new ones? Adding new ones works and will just index the added docs. I have not tried updating...
Adding new documents to the input folder will trigger indexing for those new documents; it will not re-index existing ones. Be aware that existing communities might get re-generated each...
> We can't regenerate a new parquet file and communities just because I added one file when I have 1000s of files preprocessed. Like in other vector DBs we need...
It will not reprocess all documents… but adding new documents will require updating a lot of existing claims and community notes… and those are what will take...
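For anyone trying this, the incremental flow is just dropping new files into the existing input folder and re-running the indexer over the same root. A rough sketch, assuming the default project layout from the Get Started docs at the time (a `./ragtest` root that has already been indexed once):

```shell
# Assumed layout: ./ragtest with settings and a prior index already in place.
cp new_document.txt ./ragtest/input/

# Re-running the indexer only extracts text units and entities for the new
# document, but community summaries touched by the new entities may be
# re-generated, which is where most of the LLM time goes.
python -m graphrag.index --root ./ragtest
```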
I have tested gemma2 and llama3 with success. The only thing that does not work locally is the embeddings. There needs to be a fix to accept the style of...
Embeddings are not working with Ollama... I was able to get things working with Ollama for the entities and OpenAI for the embeddings.
Working config can be found here: https://github.com/microsoft/graphrag/issues/339#issuecomment-2206149531
> Ollama works as expected
>
> ```shell
> GRAPHRAG_API_KEY=123
> GRAPHRAG_API_BASE=http://172.17.0.1:11434/v1
> # GRAPHRAG_LLM_MODEL=llama3:instruct
> GRAPHRAG_LLM_MODEL=codestral
> GRAPHRAG_LLM_THREAD_COUNT=4
> GRAPHRAG_LLM_CONCURRENT_REQUESTS=8
> GRAPHRAG_LLM_MAX_TOKENS=2048
>
> GRAPHRAG_EMBEDDING_API_BASE=http://172.17.0.1:11435/v1
> GRAPHRAG_EMBEDDING_MODEL=mxbai-embed-large
> ```
>
> ...
> @bmaltais, no worries!
>
> 11435 is a proxy server written in JS/Node specifically to map requests/responses between the OAI and Ollama formats. I didn't list the whole code as...
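For anyone who wants to roll their own shim instead of that JS/Node proxy, the mismatch it bridges is roughly the following. The endpoint paths and payload shapes below are assumptions based on the OpenAI and Ollama embedding APIs at the time, not the proxy author's code:

```shell
# What graphrag's OpenAI-style client sends to GRAPHRAG_EMBEDDING_API_BASE
# (here the proxy on 11435) -- note the "input" field and the "data" list
# it expects back:
curl -s http://172.17.0.1:11435/v1/embeddings \
  -H 'Content-Type: application/json' \
  -d '{"model": "mxbai-embed-large", "input": ["some text"]}'
# expected reply: {"object":"list","data":[{"object":"embedding","index":0,"embedding":[...]}], ...}

# What Ollama natively exposes on 11434 -- one "prompt" per request and a
# bare "embedding" array in the reply, which is why a translation layer is
# needed in between:
curl -s http://172.17.0.1:11434/api/embeddings \
  -H 'Content-Type: application/json' \
  -d '{"model": "mxbai-embed-large", "prompt": "some text"}'
# reply: {"embedding":[...]}
```

With a shim like that (or the proxy above) listening on 11435, the `GRAPHRAG_EMBEDDING_API_BASE=http://172.17.0.1:11435/v1` line in the config earlier routes graphrag's embedding calls through it, while the LLM calls go straight to Ollama on 11434.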
@SpaceLearner Does it work when you try to query? I adapted your code to work with langchain and it creates the embeddings... but when I try to do a local query...