
[BUG] Private GPT has an infinite loop of responses

Open · SpkArtZen opened this issue 1 year ago · 5 comments

Question

I have an issue with Private GPT:

When I send a prompt or chat completion with a large context (file size > 5 KB or multiple context files), the chat takes a long time to generate a response but never sends it. It just keeps generating a response, and the delay gets worse. Eventually, it sends a timeout error.

I don’t know how to fix this. I need to get its initial response, but in the end I don’t receive anything.
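For context, a request like the one described above would look roughly like the pgpt_python snippet below. The actual script, file name, and prompt are not shown in the issue, so this is purely an illustrative sketch of the ingest-then-complete flow:

```python
from pgpt_python.client import PrivateGPTApi

client = PrivateGPTApi(base_url="http://localhost:8001")

# Ingest a large context file (hypothetical file name), then request a
# completion that uses the ingested context.
with open("large_context.txt", "rb") as f:
    ingested = client.ingestion.ingest_file(file=f)

result = client.contextual_completions.prompt_completion(
    prompt="Summarize the ingested document.",  # placeholder prompt
    use_context=True,
)
print(result.choices[0].message.content)
```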

SpkArtZen · Oct 28 '24 13:10

Can you give us more details about your environment? It's probably related to the GPU and VRAM.

jaluma · Oct 30 '24 08:10

Yes, I use the default model, Llama 3.1 7B. (screenshot attached: Знімок екрана 2024-10-30 102834)

SpkArtZen · Oct 30 '24 08:10

Full logs: logs.txt. I send a single request from the Python SDK. It behaves the same with Postman and curl.

SpkArtZen · Oct 30 '24 09:10

It should work equally with Postman and with requests. Can you increase the request timeout?

client = PrivateGPTApi(base_url="http://localhost:8001", client=...)
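A minimal sketch of what that could look like, assuming the SDK's client parameter accepts a pre-configured httpx.Client (the exact constructor signature may differ between pgpt_python versions):

```python
import httpx
from pgpt_python.client import PrivateGPTApi

# Assumption: the SDK forwards this httpx.Client, and therefore its timeout,
# to every API call it makes.
http_client = httpx.Client(timeout=httpx.Timeout(600.0))  # 10-minute client-side timeout

client = PrivateGPTApi(base_url="http://localhost:8001", client=http_client)
```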

And a few more things to take into account:

  1. When you use the whole context window, the reply will take more time; that's normal.
  2. Sending a large context instead of using RAG strategies is probably not the best way to approach this kind of problem.
  3. Consider increasing the Ollama timeout if you keep having problems like the ones in your log. You can do this by modifying the Ollama statement in LLMComponent (see the sketch after this list).
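A rough sketch of the idea behind point 3: private-gpt's LLMComponent builds a llama-index Ollama LLM, and that wrapper accepts a request_timeout in seconds. Module paths and defaults depend on your private-gpt and llama-index versions, so treat the values below as illustrative:

```python
from llama_index.llms.ollama import Ollama

# Inside LLMComponent, the Ollama statement could pass a larger request_timeout
# so long generations are not cut off by the default timeout.
llm = Ollama(
    model="llama3.1",
    base_url="http://localhost:11434",
    request_timeout=300.0,  # seconds; tune to your hardware
)
```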

jaluma · Nov 04 '24 08:11

The main problem is that when I send a request, even through Postman, the response is generated multiple times and degrades each time. It behaves the same with the SDK and with Postman. Also, it sends a request by itself:

2024-11-04 15:36:54 13:36:54.133 [INFO ] httpx - HTTP Request: POST http://localhost:11434/api/chat "HTTP/1.1 200 OK"
2024-11-04 15:36:59 [GIN] 2024/11/04 - 13:36:59 | 200 | 5.996617632s | 127.0.0.1 | POST "/api/chat"

After that, it generates the response again. I need to somehow accept only the first response.
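One way to rule out streaming or client-side retry behavior is to send a single, explicitly non-streaming request with a generous timeout straight at the API. This is only an illustrative sketch; the prompt is a placeholder and the exact response shape can vary between private-gpt versions:

```python
import httpx

resp = httpx.post(
    "http://localhost:8001/v1/completions",
    json={
        "prompt": "Summarize the ingested documents.",  # placeholder prompt
        "use_context": True,
        "stream": False,  # request one complete response instead of a stream
    },
    timeout=600.0,  # generous client-side timeout for slow generations
)
resp.raise_for_status()
print(resp.json())  # inspect the single returned completion
```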

SpkArtZen · Nov 04 '24 13:11