crawl4ai
Page.goto: Timeout 60000ms exceeded.
Could you please add the possibility to change the timeout? In some places and containers, pages can take more than 60 seconds to load. The value is hardcoded in crawl4ai/crawl4ai/async_crawler_strategy.py, line 251:
response = await page.goto(url, wait_until="domcontentloaded", timeout=60000)
[ERROR] 🚫 Failed to crawl "link", error: Failed to crawl "link": Page.goto: Timeout 60000ms exceeded. Call log: navigating to "link", waiting until "domcontentloaded"
Hi @jmontoyavallejo, we've already added this feature. Thank you for the suggestion, it's a really good idea. We haven't released the new version yet, but you can pull from the "0.3.6" branch now; by Monday or Tuesday we will also update the library on PyPI.
How do I use it after updating to the new version?
Passing it in the call to the function worked for me: crawler.arun(..., page_timeout=120000)
@jmontoyavallejo We will update our website docs very soon to include details on the timeout setting.