UncleCode
@mjtechguy We've fixed this issue, and soon we'll be releasing v0.3.0, which is fully async and has fewer dependencies on other libraries. The Docker image has also been updated.
Thanks for reporting this, @Harinib-Kore. After reviewing the code based on your report, we can confirm this is indeed a bug related to how `max_pages` is handled within the deep...
@openainext Hi, could you explain this in more detail? Thanks!
@crelocks Thanks for using our library. I'm glad you requested offline crawling; it's been suggested by other users as well. You can pass a local file or HTML by using its path...
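A minimal sketch of the local-file idea described above: build a `file://` URL from a local path and hand that string to the crawler. The helper below is hypothetical (not part of crawl4ai); it only illustrates the URL shape, assuming crawl4ai accepts `file://` URLs for offline crawling.

```python
from pathlib import Path

def local_file_url(path: str) -> str:
    """Convert a local filesystem path into a file:// URL string.

    Hypothetical helper for illustration; assumes crawl4ai's arun()
    accepts file:// URLs for offline crawling.
    """
    return Path(path).resolve().as_uri()

# Sketch of how it would be used (requires `pip install crawl4ai`):
#
#   from crawl4ai import AsyncWebCrawler
#   async with AsyncWebCrawler() as crawler:
#       result = await crawler.arun(url=local_file_url("saved_page.html"))
```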
@crelocks In version 0.3.74, it crawled beautifully in my test. Stay tuned for the release within a day or two, then test it and let me know. ```python import os,...
@crelocks Try the code below. This should work, although I am using 0.4.24, which will be out in a day or two. I'll close the issue, but you are welcome...
Perfect. I'll close the issue, but you are welcome to follow up if you need any help.
@kamit-transient You can also pass raw HTML to the `arun()` function, so you don't need to call `aprocess_html()`. Please check this issue, where I explained it: https://github.com/unclecode/crawl4ai/issues/381
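A sketch of the raw-HTML pattern mentioned above. My assumption here is that inline HTML is passed to `arun()` as a URL string with a `raw:` prefix (the exact prefix convention is documented in the linked issue); the helper below is hypothetical and only shows the string shape.

```python
def as_raw_url(html: str) -> str:
    """Wrap an HTML string so arun() treats it as inline content.

    Hypothetical helper; the "raw:" prefix is an assumption, see the
    linked issue for the exact convention crawl4ai uses.
    """
    return "raw:" + html

# Sketch of how it would be used (requires `pip install crawl4ai`):
#
#   from crawl4ai import AsyncWebCrawler
#   async with AsyncWebCrawler() as crawler:
#       result = await crawler.arun(url=as_raw_url("<html><body>Hi</body></html>"))
#       print(result.markdown)
```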
I suggest you go to the root of the project and edit the main.py file. There, you will see that we apply the rate limit in different ways. If you...
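For context on the rate-limiting idea mentioned above, here is a minimal token-bucket sketch. This is an illustrative, self-contained example of one common rate-limiting strategy, not the actual implementation in crawl4ai's main.py.

```python
import time

class TokenBucket:
    """Illustrative token-bucket rate limiter (not crawl4ai's implementation).

    Tokens refill continuously at `rate` per second, up to `capacity`.
    Each allowed request consumes one token; requests are rejected when
    the bucket is empty.
    """

    def __init__(self, rate: float, capacity: int) -> None:
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

A bucket with capacity 2 allows two back-to-back requests, then rejects further ones until tokens refill.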
@monkeybiz55 Thank you so much for your kind words. I'm glad to know it was helpful. Let me know if you need any help with this rate limit....