[Bug]: unable to perform operation on <WriteUnixTransport closed=True reading=False>; the handler is closed
crawl4ai version
0.5.0.post8
Expected Behavior
- The crawl4ai crawler should work consistently across multiple API calls in the FastAPI application.
- The browser context should remain active and usable throughout the application's lifespan, without raising `BrowserContext.new_page` or transport-related errors.
Current Behavior
When crawl4ai runs inside a FastAPI backend, the first API call to the scraper sometimes succeeds, but subsequent calls fail with:

```
Error: BrowserContext.new_page: unable to perform operation on <WriteUnixTransport closed=True reading=False 0x...>; the handler is closed
```
This issue does not occur when running the crawler as a standalone Python script — only when it's integrated into a FastAPI application.
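For reference, the standalone case looks roughly like this and completes without the error (a minimal sketch with one URL; the real run used 50+ URLs and the configuration shown under Settings Used):

```python
# standalone.py -- minimal sketch of the standalone case that works.
import asyncio
from crawl4ai import AsyncWebCrawler

async def main():
    async with AsyncWebCrawler() as crawler:
        result = await crawler.arun("https://example.com")
        if result.success:
            print(result.markdown[:200])

asyncio.run(main())
```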
Is this reproducible?
Yes
Inputs Causing the Bug
* **URL(s)** – Any website (in my case, 50+ URLs were scraped)
* **Settings Used** – the configuration below:
```python
from crawl4ai import CrawlerRunConfig, DefaultMarkdownGenerator

md_generator = DefaultMarkdownGenerator(
    options={
        "body_width": 100,
        "escape_html": False,
        "ignore_images": True,
    }
)

config = CrawlerRunConfig(
    markdown_generator=md_generator,
    word_count_threshold=10,
    exclude_external_links=True,
    exclude_internal_links=True,
    exclude_external_images=True,
)
```
Steps to Reproduce
1. Set up a FastAPI application.
2. Initialize AsyncWebCrawler in FastAPI's lifespan event.
3. Expose a scraping route that calls crawler.arun_many(...).
4. Call the API endpoint multiple times (works once or intermittently; see the client sketch after this list).
5. Observe the crash on subsequent calls.
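A small client loop for steps 4–5 (the route path and port are assumptions; adjust to your app):

```python
# repro_client.py -- sketch; assumes the app is served on 127.0.0.1:8000
# and exposes the scraper at GET /scrape (illustrative path).
import httpx

for i in range(5):
    resp = httpx.get("http://127.0.0.1:8000/scrape", timeout=120.0)
    print(f"call {i}: {resp.status_code}")
```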
Code snippets
```python
# main.py
from fastapi import FastAPI
from contextlib import asynccontextmanager
from crawl4ai import AsyncWebCrawler

crawler = None

@asynccontextmanager
async def lifespan(app: FastAPI):
    global crawler
    crawler = AsyncWebCrawler()
    await crawler.__aenter__()  # Also tried crawler.start()
    yield
    await crawler.__aexit__(None, None, None)  # Also tried crawler.close()

app = FastAPI(lifespan=lifespan)
```
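Note that `from main import crawler` (next snippet) binds the module-level name at import time, while lifespan only reassigns it later; a variant that keeps the crawler on `app.state` avoids that wrinkle. A sketch, using the same start()/close() calls mentioned above:

```python
# main_state.py -- sketch: keep the crawler on app.state instead of a
# module-level global; handlers then read request.app.state.crawler.
from fastapi import FastAPI
from contextlib import asynccontextmanager
from crawl4ai import AsyncWebCrawler

@asynccontextmanager
async def lifespan(app: FastAPI):
    app.state.crawler = AsyncWebCrawler()
    await app.state.crawler.start()   # same start() the issue mentions
    yield
    await app.state.crawler.close()   # same close() the issue mentions

app = FastAPI(lifespan=lifespan)
```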
```python
# router.py or service.py
from crawl4ai import CrawlerRunConfig
from main import crawler

async def scrap_website():
    urls = ["https://example.com"]  # Any valid URL triggers this issue
    config = CrawlerRunConfig()  # Add your options if needed
    unique_contents = set()
    unique_markdowns = []
    # Crawl all URLs and keep only unique markdown outputs.
    results = await crawler.arun_many(urls=urls, config=config)
    for result in results:
        if result.success:
            text = result.markdown
            content_hash = hash(text)
            if content_hash not in unique_contents:
                unique_contents.add(content_hash)
                unique_markdowns.append(text)
    return "\n\n".join(unique_markdowns)
```
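For completeness, the endpoint that triggers this is wired up roughly as follows (route and module names are illustrative, not my exact code):

```python
# router.py (continued) -- sketch; names are illustrative.
# main.py also needs app.include_router(router).
from fastapi import APIRouter

router = APIRouter()

@router.get("/scrape")
async def scrape():
    # Reuses the crawler opened in main.py's lifespan; the second or
    # third call here is where the transport error surfaces.
    return {"markdown": await scrap_website()}
```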
OS
Ubuntu 24.04.1 LTS
Python version
3.11
Browser
Chrome (via Playwright)
Browser version
No response
Error logs & Screenshots (if applicable)
```
× Unexpected error in _crawl_web at line 528 in wrap_api_call
Error: BrowserContext.new_page: unable to perform operation on <WriteUnixTransport closed=True reading=False 0x...>; the handler is closed
```
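For triage purposes: a per-request variant like the sketch below keeps each call on its own browser instance, so I would expect it to sidestep the shared-transport error, at the cost of launching a browser per call (untested, not a confirmed workaround):

```python
# Per-request variant (untested sketch): open and close the crawler
# inside the handler so no browser transport is shared across requests.
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def scrap_website_per_request() -> str:
    async with AsyncWebCrawler() as crawler:
        results = await crawler.arun_many(
            urls=["https://example.com"],
            config=CrawlerRunConfig(),
        )
        return "\n\n".join(r.markdown for r in results if r.success)
```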