UncleCode
@zhixiongtan sent :)
Sending the links @wakaka6
@wakaka6 Already sent, and thank you so much for your suggestion. It has been in our plans for a while. I wanted to engage with everything, like the raw material,...
You are most welcome @berkaygkv
@monkey-wenjun @Shadow062309 @duolaOmeng Thank you for trying Crawl4ai. In such situations, the first step is to run the crawler with `headless` set to `False` so you can see what is happening. If...
@Aravind1Kumar That's a very odd error; I cannot replicate it. You can't reuse the `css_selector` that I provided as an example for the other domain on this domain...
@matijaparavac We're building our scraper engine, which will soon be available in the Crawl4ai library. We started by focusing on a robust, fast, and asynchronous approach to crawl a single...
@ejkitchen Could you please share the URL you're trying? It works on my side, as seen in this image, and shouldn't require any flag by default; maybe it's dirt....
Sure, for example:

```python
import asyncio

from crawl4ai import AsyncWebCrawler


async def main():
    async with AsyncWebCrawler(headless=True) as crawler:
        result = await crawler.arun(
            url="https://en.wikipedia.org/wiki/apple",
            bypass_cache=True,
        )
        print(result.response_headers)


if __name__ == "__main__":
    asyncio.run(main())
```
@ejkitchen I've figured out the issue. You were right - it occurs when `bypass_cache` isn't set to true. I noticed that in this case, the code reads the cached version...