
[BUG]: Errors occurred during the pipeline run, see logs for more details.

Open • Jumbo-zczlbj0 opened this issue 1 year ago • 7 comments

Describe the bug

"Sometimes, running the command python3 -m graphrag.index --root ./ragtest results in the error 'Errors occurred during the pipeline run, see logs for more details,' even though no configuration changes were made. This issue may occur after restarting the computer. I have tried deleting the original environment and creating a new one. Sometimes it works well, and sometimes it doesn't."

The model previously ran smoothly and successfully answered my questions. Ollama is functioning properly and the model has been downloaded.

I am a beginner, so I might not understand everything fully. Please bear with me.
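For anyone reproducing this, a quick way to confirm that the Ollama server is actually reachable before indexing is to query its native model-list endpoint (a hedged check, assuming the default port 11434; not part of the original report):

```bash
# List locally available models; a JSON response confirms the Ollama server is up
curl http://localhost:11434/api/tags
```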

[Screenshot from 2024-07-16 04-19-19]

settings.yaml: [Screenshot from 2024-07-16 04-12-24] [Screenshot from 2024-07-16 04-12-15]

log (ragtest/output/20240716-034934/reports/indexing-engine.log):

[Screenshot from 2024-07-16 04-16-05]

[Screenshot from 2024-07-16 04-16-15]

Attachment: indexing-engine.log

Steps to reproduce

  1. chmod +x LM_Studio-0.2.27.AppImage

  2. Run LM Studio

  3. python3 -m graphrag.index --root ./ragtest
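Before step 3, LM Studio's local server also needs to be running so that embeddings are served (an assumption about the intended setup; the steps above don't mention it). A quick reachability check against its default port:

```bash
# LM Studio's OpenAI-compatible server defaults to port 1234 (assumed default)
curl http://localhost:1234/v1/models
```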

Expected Behavior

I expect the LLM and embedding model to process my data correctly.

GraphRAG Config Used

No response

Logs and screenshots

No response

Additional Information

  • GraphRAG Version: 0.1.1 (pip install graphrag)
  • Operating System: Ubuntu 22.04
  • Python Version: 3.12.4
  • LLM: ollama/gemma2:latest
  • Embeddings: LM Studio (nomic-embed-text-v1.5.Q5_K_M.gguf); see the settings sketch below
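Since the settings.yaml itself is only available as screenshots, here is a minimal sketch of how this mixed setup is commonly wired, assuming Ollama's OpenAI-compatible endpoint on its default port 11434 and LM Studio's local server on its default port 1234 (the api_base values and structure are assumptions based on typical graphrag 0.x configs, not copied from the screenshots):

```yaml
llm:
  api_key: ${GRAPHRAG_API_KEY}          # local servers usually ignore this, but it must be set
  type: openai_chat
  model: gemma2:latest
  api_base: http://localhost:11434/v1   # Ollama's OpenAI-compatible endpoint (assumed)

embeddings:
  llm:
    api_key: ${GRAPHRAG_API_KEY}
    type: openai_embedding
    model: nomic-embed-text-v1.5.Q5_K_M.gguf
    api_base: http://localhost:1234/v1  # LM Studio local server (assumed)
```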

Jumbo-zczlbj0 avatar Jul 16 '24 08:07 Jumbo-zczlbj0

  1. /ragtest/.env (see the note on the placeholder key after this list):

GRAPHRAG_API_KEY=<API_KEY>

  2. ragtest/output/20240716-035359/artifacts/stats.json:

[Screenshot from 2024-07-16 04-25-14]
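A note on the key above: local OpenAI-compatible servers generally ignore the API key, but graphrag still expects the variable to be set to a non-empty value, so a placeholder is commonly used (an assumption based on typical local setups, not something verified in this thread):

```bash
# ragtest/.env -- any non-empty string works; local servers don't validate it
GRAPHRAG_API_KEY=dummy-key
```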

Jumbo-zczlbj0 avatar Jul 16 '24 08:07 Jumbo-zczlbj0

To: Jumbo: how do I run chmod +x LM_Studio-0.2.27.AppImage on Windows 10?

I cannot find an LM_Studio-0.2.27.AppImage file in the LM Studio installation directory. Is it a *.exe file?

Thanks

myyourgit avatar Jul 17 '24 14:07 myyourgit

To: Jumbo: I used your settings.yaml and ran LM Studio, but python -m graphrag.index --root ./ragtest still does not run correctly.

Any suggestions about settings.yaml?

In the logs, the api_key is not recognized: "llm": { "api_key": "REDACTED, length 6", "type": "openai_chat", "model": "gemma2:latest",

Thanks

myyourgit avatar Jul 17 '24 14:07 myyourgit

To: Jumbo: how do I run chmod +x LM_Studio-0.2.27.AppImage on Windows 10?

I cannot find an LM_Studio-0.2.27.AppImage file in the LM Studio installation directory. Is it a *.exe file?

Thanks

I use LM Studio on an Ubuntu system; the .AppImage is the Linux build, so there is no such file on Windows. You can download the Windows version from the official website: https://lmstudio.ai

Jumbo-zczlbj0 avatar Jul 17 '24 14:07 Jumbo-zczlbj0

To: Jumbo: how do I run chmod +x LM_Studio-0.2.27.AppImage on Windows 10? I cannot find an LM_Studio-0.2.27.AppImage file in the LM Studio installation directory. Is it a *.exe file? Thanks

I use LM Studio on an Ubuntu system; the .AppImage is the Linux build, so there is no such file on Windows. You can download the Windows version from the official website: https://lmstudio.ai

I use Windows 10. On Windows I just run LM Studio, so I think that is OK.

myyourgit avatar Jul 17 '24 14:07 myyourgit

To: Jumbo: I used your settings.yaml and ran LM Studio, but python -m graphrag.index --root ./ragtest still does not run correctly.

Any suggestions about settings.yaml?

In the logs, the api_key is not recognized: "llm": { "api_key": "REDACTED, length 6", "type": "openai_chat", "model": "gemma2:latest",

Thanks

I'm not sure whether you have installed Ollama. I followed this video: https://youtu.be/BLyGDTNdad0?si=Pvx1og9aD_5fmZa3

Jumbo-zczlbj0 avatar Jul 17 '24 15:07 Jumbo-zczlbj0

I installed graphrag in Docker to avoid this bug.

I am using the official NVIDIA Docker image (CUDA 12.2, Ubuntu 22.04, devel).
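A rough sketch of that setup (the exact image tag and install steps are assumptions, not copied from the thread):

```bash
# Start from NVIDIA's CUDA devel image (assumed tag for CUDA 12.2 / Ubuntu 22.04)
docker run --gpus all -it nvidia/cuda:12.2.0-devel-ubuntu22.04 bash

# Inside the container: install Python tooling and graphrag
apt-get update && apt-get install -y python3-pip
pip3 install graphrag==0.1.1
```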

By the way, LM Studio can be replaced with llama.cpp: https://github.com/ggerganov/llama.cpp
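If llama.cpp is used instead, it exposes a similar OpenAI-compatible HTTP server; a minimal sketch, assuming a recent build where the server binary is named llama-server and a locally downloaded GGUF file (the path is hypothetical):

```bash
# Serve a GGUF model on the same port LM Studio would use
./llama-server -m ./models/nomic-embed-text-v1.5.Q5_K_M.gguf --port 1234
```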

Jumbo-zczlbj0 avatar Jul 17 '24 15:07 Jumbo-zczlbj0

To: Jumbo: I used your settings.yaml and ran LM Studio, but python -m graphrag.index --root ./ragtest still does not run correctly. Any suggestions about settings.yaml? In the logs, the api_key is not recognized: "llm": { "api_key": "REDACTED, length 6", "type": "openai_chat", "model": "gemma2:latest", Thanks

I'm not sure whether you have installed Ollama. I followed this video: https://youtu.be/BLyGDTNdad0?si=Pvx1og9aD_5fmZa3

Hi Jumbo, thanks.

In the ollama directory, I ran ollama pull gemma2:9b,

and ollama run gemma2:9b works.

But the problem is that when I run the command below:

curl http://localhost:11434/v1/chat/completions

the result is 404 page not found,

so it seems the gemma2 endpoint is not recognized.

myyourgit avatar Jul 19 '24 13:07 myyourgit

To: Jumbo: in the ollama directory, I ran ollama pull gemma2:9b, and ollama run gemma2:9b works. But when I run curl http://localhost:11434/v1/chat/completions, the result is 404 page not found, so it seems the gemma2 endpoint is not recognized.

Please refer to the official API docs: https://github.com/ollama/ollama/blob/main/docs/api.md

For example: curl http://localhost:11434/api/chat -d '{ "model": "gemma2:latest", "messages": [ { "role": "user", "content": "hi" } ] }'
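A likely explanation for the 404, based on how Ollama's HTTP routes generally behave (an inference, not stated in the thread): /v1/chat/completions is Ollama's OpenAI-compatible route and only accepts POST requests, so a bare curl, which issues a GET, gets 404 page not found. A sketch of a POST that should succeed on recent Ollama versions:

```bash
# POST to Ollama's OpenAI-compatible endpoint; a bare GET to this path returns 404
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gemma2:latest",
    "messages": [{ "role": "user", "content": "hi" }]
  }'
```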

[Screenshot: IMG_7056]

Jumbo-zczlbj0 avatar Jul 19 '24 13:07 Jumbo-zczlbj0

Consolidating alternate model issues here: https://github.com/microsoft/graphrag/issues/657

natoverse avatar Jul 22 '24 20:07 natoverse