crawl4ai
[Bug]: Task stuck in processing state
crawl4ai version
v0.5.x
Expected Behavior
I am looping over 140 webpages that I want to crawl and expect each crawl task to complete. This works fine for the first couple of pages.
Current Behavior
At a random page, independent of that page's content, a task gets created and then remains stuck in the processing state indefinitely. I can call the GET endpoint to check its status. For every page I crawl after that, a new task is created and stays in the queued state. There is no way for me to either 1) kill the stuck task or 2) figure out why it is stuck.
Is this reproducible?
Yes
Inputs Causing the Bug
I can reproduce the bug by continuously crawling through a list of URLs. At a random point (somewhere between iterations 30 and 70) the behavior appears.
If I call the same webpages where the stuck-task behavior appears individually, as one-off requests, they work.
Steps to Reproduce
Code snippets
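A minimal sketch of the reproduction loop. This assumes the task-based HTTP API described above (a POST endpoint that returns a `task_id` and a GET endpoint that reports its status); the exact endpoint paths (`/crawl`, `/task/{task_id}`), the server address, and the URL list are placeholders, not the reporter's actual values.

```python
import json
import time
import urllib.request

BASE_URL = "http://localhost:11235"  # assumed address of the crawl4ai server


def poll_status(fetch, task_id, timeout=120.0, interval=2.0):
    """Poll fetch(task_id) -> status string until a terminal state or timeout.

    Returns the final status, or "timeout" if the task never leaves a
    "processing"/"queued" state -- the stuck behavior described above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch(task_id)
        if status not in ("processing", "queued", "pending"):
            return status
        time.sleep(interval)
    return "timeout"


def http_status(task_id):
    # Assumed endpoint shape: GET /task/{task_id} -> {"status": ...}
    with urllib.request.urlopen(f"{BASE_URL}/task/{task_id}") as resp:
        return json.load(resp)["status"]


def submit(url):
    # Assumed endpoint shape: POST /crawl -> {"task_id": ...}
    req = urllib.request.Request(
        f"{BASE_URL}/crawl",
        data=json.dumps({"urls": url}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["task_id"]


def crawl_all(urls):
    # Around iteration 30-70 one task reports "timeout" (stuck in
    # processing), and every later task stays queued behind it.
    for i, url in enumerate(urls):
        task_id = submit(url)
        print(i, url, poll_status(http_status, task_id))
```

Calling `crawl_all(["https://example.com/page/%d" % i for i in range(140)])` against a running server reproduces the hang; `poll_status` is separated out so the timeout logic can be exercised without a live server.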
OS
Linux
Python version
3.7
Browser
No response
Browser version
No response
Error logs & Screenshots (if applicable)
No response