UncleCode
@ManojBhamsagar-Draup We already added more granularity to control how you want to handle internal and external links for images as well as anchor tags. The following code demonstrates a better...
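That granularity boils down to classifying each link as internal or external relative to the page's host. As a standalone illustration of the idea (plain standard-library Python, not the crawl4ai API itself), a minimal filter might look like this:

```python
from urllib.parse import urlparse

def split_links(page_url, links):
    """Split links into internal and external relative to page_url's host."""
    host = urlparse(page_url).netloc
    internal, external = [], []
    for link in links:
        target = urlparse(link).netloc
        # Relative links (empty netloc) and same-host links count as internal.
        (internal if target in ("", host) else external).append(link)
    return internal, external

internal, external = split_links(
    "https://example.com/page",
    ["/about", "https://example.com/docs", "https://other.org/img.png"],
)
print(internal)  # ['/about', 'https://example.com/docs']
print(external)  # ['https://other.org/img.png']
```

The same split applies to image sources as to anchor tags, since both are just URLs resolved against the page.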
If you are asking whether we can download files within the process of crawling and scraping, the answer is yes. I will provide the code snippet right here. If you...
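As a library-agnostic sketch of the download step itself (plain standard-library Python, not crawl4ai's built-in download support), saving a file discovered during a crawl can be as simple as:

```python
import urllib.request
from urllib.parse import urlparse
from pathlib import Path

def download(url, dest_dir):
    """Fetch url and save it under dest_dir, returning the saved path."""
    # Fall back to a generic name when the URL has no path component.
    name = Path(urlparse(url).path).name or "download"
    dest = Path(dest_dir) / name
    with urllib.request.urlopen(url) as resp, open(dest, "wb") as out:
        out.write(resp.read())
    return dest
```

Crawl4AI's own mechanism handles this inside the browser session; this sketch only shows the bare idea for a URL you already extracted.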
@Olliejp Please try the following code:

```python
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler(
        headless=False,  # Set to False to see what is happening
        verbose=True,
    ) as crawler:
        result = await ...
```
@Olliejp You are right that you can use the delay before returning the final HTML; absolutely, it's a good way to avoid being dependent on specific HTML elements on...
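If a fixed delay feels too blunt, the general alternative is to poll until a condition holds or a timeout expires. A minimal asyncio sketch of that pattern (generic Python, not a crawl4ai API):

```python
import asyncio
import time

async def wait_until(predicate, timeout=5.0, interval=0.1):
    """Poll predicate() until it returns True or timeout seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        await asyncio.sleep(interval)
    return False

# Example: a flag that flips shortly after we start waiting,
# standing in for "the page has finished rendering".
state = {"loaded": False}

async def main():
    async def loader():
        await asyncio.sleep(0.2)
        state["loaded"] = True
    task = asyncio.create_task(loader())
    ok = await wait_until(lambda: state["loaded"], timeout=2.0)
    await task
    return ok

print(asyncio.run(main()))  # True
```

A fixed delay is simpler when you just need the page to settle; polling is better when you can cheaply check whether the content you need has appeared.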
@razorxl Thanks for your kind words. May I ask what "Docker features" you are referring to? And for your information, in the new version, 0.3.74, the scraping process became very...
@rafliekawangsha @elijahndege For the server, we set a standard API token: a token that only you hold, so only you can use the system and platform. Because when you put...
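As a sketch of that setup, you pass your own token to the container at startup. The image name, port, and environment variable here reflect the standard Crawl4AI Docker setup as I understand it, so please verify them against the project docs:

```shell
# Start the server with your own token; only requests carrying this
# token will be accepted.
docker run -d \
  -p 11235:11235 \
  -e CRAWL4AI_API_TOKEN=my-secret-token \
  unclecode/crawl4ai
```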
@sjhm131 @elijahndege The 403 error occurs because the crawl endpoint requires authentication. To avoid the 403 “Not authenticated” error, you need to provide a `CRAWL4AI_API_TOKEN`. This applies whether you’re running...
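On the client side, the token travels in an `Authorization` header. A standard-library sketch that builds (but does not send) such a request; the endpoint path, port, and bearer scheme are assumptions to check against the Crawl4AI Docker docs:

```python
import urllib.request

# Hypothetical local endpoint; the real path depends on your deployment.
URL = "http://localhost:11235/crawl"
# Must match the CRAWL4AI_API_TOKEN you configured on the server.
TOKEN = "my-secret-token"

req = urllib.request.Request(
    URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    method="POST",
)
print(req.get_header("Authorization"))  # Bearer my-secret-token
```

Without that header, the server has no way to tell your requests from anyone else's, which is exactly what the 403 is protecting against.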
@sjhm131 My apologies for the confusion! There's no paid API involved; this is entirely open-source. When you run the Docker container, you're creating your own Crawl4AI server. The...
@vikaskookna @bigbrother666sh Thanks for sharing, let me check. @aravindkarnam, can you work on it? I'm still not clear on what "it returns only homepage" means.
@Dev4011 Thanks for the kind words, you're right: not just the redirected URL but also the status code should be updated. @aravindkarnam, please add these to the list.