[Question]: The file parsing process is stuck and not progressing.
Self Checks
- [x] I have searched for existing issues, including closed ones.
- [x] I confirm that I am using English to submit this report (Language Policy).
- [x] Non-English title submissions will be closed directly (Language Policy).
- [x] Please do not modify this template :) and fill in all the required fields.
Describe your problem
I launched the service from source for development. Everything is working fine, but the document parsing process is stuck and not progressing. The log is as follows. What could be the reason? Is there an issue with this startup method?
Same here. Looks like a bug introduced recently (it worked fine in 0.17.0). Also, cancelling the job doesn't actually cancel it; it continues running in the background.
Yes, it continues running in the background and has to be killed manually: `pkill -f uwsgi` and `pkill -f task_executor.py`.
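If it helps, here is a small Python equivalent of those pkill commands. This is only a sketch; it assumes psutil is installed (`pip install psutil`), and the process patterns may need adjusting for your deployment:

```python
# Sketch: find and terminate leftover uwsgi / task_executor.py processes,
# mirroring `pkill -f uwsgi` and `pkill -f task_executor.py` above.
import psutil

TARGETS = ("task_executor.py", "uwsgi")

for proc in psutil.process_iter(attrs=["pid", "cmdline"]):
    cmdline = " ".join(proc.info["cmdline"] or [])
    if any(t in cmdline for t in TARGETS):
        print(f"terminating pid={proc.info['pid']}: {cmdline}")
        try:
            proc.terminate()  # use proc.kill() if it refuses to exit
        except psutil.Error as e:
            print(f"could not terminate: {e}")
```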
Progress stuck below 1% with no further movement can be caused by:
- A Redis connection error, including Redis being down (see the connectivity sketch below).
- task_executor failing to start properly, which will show up as errors in the backend logs.
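To rule out the first item, a quick connectivity check with redis-py can help. This is only a sketch; the host, port, and password are assumptions and should be replaced with the values from your own RAGFlow / docker compose configuration:

```python
# Sketch: verify the Redis instance used for the task queue is reachable.
# Connection details below are assumptions -- adapt them to your setup.
import redis

r = redis.Redis(host="localhost", port=6379, password=None,
                socket_connect_timeout=3)
try:
    r.ping()
    print("Redis is reachable")
except redis.exceptions.RedisError as e:
    print(f"Redis check failed: {e}")
```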
Same issue here; we did not have this bug with v0.17.0.
The task itself runs and finishes fine. The problem is that the progress is not being updated, and the cancel button has no effect: the job continues running. The server log also contains related messages:
2025-03-15 05:01:19,006 WARNING 15 set_progress(407323ca005d11f0b2bc0242ac120006) got exception DoesNotExist
Same issue here.
I have the same problem, and it stays at "Task has been received." Like this:
14:18:19 created task raptor
14:18:36 Task has been received.
16:01:32 Cluster one layer: 36 -> 6
17:08:43 Cluster one layer: 6 -> 4
17:55:36 Cluster one layer: 4 -> 3
18:28:39 Cluster one layer: 3 -> 2
18:47:55 Cluster one layer: 2 -> 1
18:47:58 Indexing done (2.33s). Task done (16162.47s)
18:48:03 created task graphrag
18:48:03 Task has been received.
Same issue here on v0.17.2 slim.
Redis seems fine from the system page view:
Same issue. The progress shown in the logs is not reflected in the front-end progress.
Here it seems that nothing is being done at all; it's not just a progress-update issue:
2025-03-18 15:18:58,861 INFO 31 task_consumer_0 reported heartbeat: {"name": "task_consumer_0", "now": "2025-03-18T15:18:58.860+01:00", "boot_at": "2025-03-18T15:05:26.248+01:00", "pending": 0, "lag": 0, "done": 1, "failed": 0, "current": {}}
Mine says 0.00s and is stuck at 0.84%. The logs, though, say that OCR is being done. Weird...
I was completely stuck because of this (even after restarting the whole stack), and I managed to make it work again by flushing Redis before restarting ragflow-server. (The Redis instance in the stack is persistent by default, so it is not flushed by a restart.)
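For anyone in the same situation, the flush can also be done from Python with redis-py. This is only a sketch; the connection details are assumptions and must match your stack:

```python
# Sketch: flush the persistent Redis used by the stack, then restart
# ragflow-server. Host/port/password are assumptions -- adapt them.
import redis

r = redis.Redis(host="localhost", port=6379, password=None)
r.flushall()  # clears all keys, including any stale task/progress state
print("Redis flushed; restart ragflow-server now")
```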
Same problem. Did you find a solution?
@dekonnection Search for `reported heartbeat` in your log to see what task_executor.py is doing.
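For example, something like this sketch can pull the heartbeat entries out of a log file (the log path is a placeholder; point it at your actual server / task_executor log):

```python
# Sketch: print the "reported heartbeat" entries from a log file so you can
# see whether the consumer is alive and how many tasks are pending/done.
import json
import re

LOG_PATH = "ragflow_server.log"  # placeholder -- use your real log path

pattern = re.compile(r"reported heartbeat:\s*(\{.*\})")
with open(LOG_PATH, encoding="utf-8") as f:
    for line in f:
        m = pattern.search(line)
        if m:
            hb = json.loads(m.group(1))
            print(hb["name"], "pending:", hb["pending"],
                  "done:", hb["done"], "failed:", hb["failed"])
```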
https://github.com/infiniflow/ragflow/pull/6340 should have fixed this issue. Please try the nightly image tomorrow.
@yuzhichang Is there a PR to fix this issue?
#6340 should have fixed this issue. Please try the nightly image tomorrow.
@yuzhichang I noticed that after clicking the parse button, the progress gets stuck. Upon checking the logs, I found that the task_executor is not printing any logs at all. I also discovered that there are 8 old task_executor processes still running. Could it be that the progress is stuck because the tasks were taken by other executors? Has this PR fixed this issue?