
feat(crawler): add content change detection and timestamp management

Open yesidc opened this issue 6 months ago • 1 comment

Summary

This PR introduces a new cache-management feature: content change detection. Users can now specify whether to check if web content has changed before using a cached version.

When `check_content_changed` is enabled in `CrawlerRunConfig`, the crawler uses a multi-tiered approach to determine whether content has changed (sketched in code after this list):

  1. First, it checks cache-control headers for max-age directives
  2. Then, it makes a conditional HEAD request using ETags and Last-Modified headers
  3. Finally, it falls back to a configurable default TTL
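
For concreteness, here is a minimal sketch of that tiered check using aiohttp. The helper name and the cached-metadata fields (headers, etag, last_modified, timestamp) are illustrative assumptions, not the exact names used in this PR:

```python
import time
import aiohttp

async def is_cache_fresh(url: str, cached: dict,
                         head_timeout: float = 3.0,
                         default_ttl: int | None = None) -> bool:
    """Return True if the cached entry can be reused without recrawling."""
    age = time.time() - cached["timestamp"]

    # Tier 1: an unexpired Cache-Control max-age means the cache is fresh.
    cache_control = cached.get("headers", {}).get("Cache-Control", "")
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age=") and age < int(directive.split("=", 1)[1]):
            return True

    # Tier 2: conditional HEAD request using ETag / Last-Modified validators.
    cond_headers = {}
    if cached.get("etag"):
        cond_headers["If-None-Match"] = cached["etag"]
    if cached.get("last_modified"):
        cond_headers["If-Modified-Since"] = cached["last_modified"]
    if cond_headers:
        timeout = aiohttp.ClientTimeout(total=head_timeout)
        async with aiohttp.ClientSession(timeout=timeout) as session:
            async with session.head(url, headers=cond_headers) as resp:
                return resp.status == 304  # 304 Not Modified => still fresh

    # Tier 3: fall back to a configurable default TTL.
    return default_ttl is not None and age < default_ttl
```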

Previously, the caching system would simply retrieve cached content without checking if the underlying web content had changed. This meant that users might receive stale data without knowing it.

With this new feature, users can now ensure they're always working with fresh content while still benefiting from caching performance. If the content has changed, the crawler automatically invalidates the cache, fetches fresh content, and updates the database.

The implementation is efficient: it uses low-latency HEAD requests and respects standard HTTP caching mechanisms, minimizing unnecessary full-page retrievals.

This enhancement provides significant value for applications requiring both performance and data freshness, and gives users fine-grained control over their caching strategy.

List of files changed and why

  1. async_configs.py: added the following parameters to enable content change detection (illustrated in the sketch after this list):
  • check_content_changed (bool): If True, check whether the content has changed before processing. Default: False.

  • head_request_timeout (float): Timeout in seconds for the initial HEAD request. Default: 3.0.

  • default_cache_ttl_seconds (int): Default time-to-live for cached responses in seconds. Default: None.

  2. async_webcrawler.py: added a _check_content_changed method to check whether the cached content has changed. Also modified arun accordingly.
  3. async_database.py: added a new timestamp column and an adelete_data_point method to delete URLs from the database.
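
For illustration, the new options are passed like any other `CrawlerRunConfig` parameter. Only the parameter names come from this PR; the values below are arbitrary:

```python
from crawl4ai import CrawlerRunConfig

config = CrawlerRunConfig(
    check_content_changed=True,      # validate cached content before reuse
    head_request_timeout=3.0,        # seconds allowed for the conditional HEAD request
    default_cache_ttl_seconds=3600,  # fallback TTL when no caching headers are present
)
```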

How Has This Been Tested?

Basic Testing:

  1. Crawl a URL with check_content_changed=False (default behavior)
  • Crawl the same URL again - it should use the cached version
  • Modify the website content (or wait for it to change naturally)
  • Crawl with check_content_changed=True - it should detect changes and fetch fresh content
  2. Find a URL that returns proper caching headers (Cache-Control, ETag, Last-Modified); a runnable sketch of this flow follows the list
  • Crawl with check_content_changed=True
  • Crawl again immediately - it should use the cached version (still fresh according to headers)
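
A sketch of the second flow, assuming this PR's config flags are available; https://example.com stands in for a URL that serves proper caching headers:

```python
import asyncio
from crawl4ai import AsyncWebCrawler, CrawlerRunConfig

async def main():
    url = "https://example.com"  # ideally a page that sends ETag/Cache-Control
    config = CrawlerRunConfig(
        check_content_changed=True,
        head_request_timeout=3.0,
        default_cache_ttl_seconds=3600,
    )
    async with AsyncWebCrawler() as crawler:
        first = await crawler.arun(url=url, config=config)   # populates the cache
        second = await crawler.arun(url=url, config=config)  # should reuse the cache while fresh
        print(first.success, second.success)

asyncio.run(main())
```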

Summary by CodeRabbit

  • New Features

    • Added advanced content change detection before reusing cached crawl data, improving cache accuracy and freshness.
    • Introduced new configuration options for controlling content change checks, request timeouts, and cache TTL.
    • Added support for deleting specific cached entries.
  • Bug Fixes

    • Enhanced handling of missing cached media (screenshots or PDFs) to ensure proper recrawling.
  • Chores

    • Updated database schema to include a timestamp for cached data, enabling more precise cache management.

yesidc · May 26 '25 15:05

Walkthrough

The changes extend the web crawler's configuration, database schema, and crawling logic to support cache freshness validation. New configuration options and a timestamp column are introduced. The crawler now checks if cached content is still fresh using HTTP headers, HEAD requests, or TTL before deciding to reuse or recrawl data. A method for deleting specific cache entries is also added.

Changes

| File(s) | Change Summary |
| --- | --- |
| crawl4ai/async_configs.py | Added new config options check_content_changed, head_request_timeout, default_cache_ttl_seconds to CrawlerRunConfig. |
| crawl4ai/async_database.py | Added timestamp column to the cache schema, included the timestamp in cache logic, and added the adelete_data_point async method. |
| crawl4ai/async_webcrawler.py | Added the _check_content_changed async method, integrated content change detection into the crawl logic, and updated the cache usage strategy. |
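
As a sketch of the async_database.py side, the deletion might look like the following with aiosqlite. The table name crawled_data and the standalone function shape are assumptions; in the PR it is a method on AsyncDatabaseManager:

```python
import aiosqlite

# Assumed table name; the actual schema lives in crawl4ai/async_database.py.
# The PR's schema change would amount to something like:
#   ALTER TABLE crawled_data ADD COLUMN timestamp REAL;
TABLE = "crawled_data"

async def adelete_data_point(db_path: str, url: str) -> None:
    """Remove a single cached entry so the next arun() recrawls it."""
    async with aiosqlite.connect(db_path) as db:
        await db.execute(f"DELETE FROM {TABLE} WHERE url = ?", (url,))
        await db.commit()
```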

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant AsyncWebCrawler
    participant AsyncDatabaseManager
    participant WebServer

    User->>AsyncWebCrawler: arun(url, config)
    AsyncWebCrawler->>AsyncDatabaseManager: get_cached_result(url)
    AsyncDatabaseManager-->>AsyncWebCrawler: cached_result (with timestamp)
    alt Cached result exists and config.check_content_changed
        AsyncWebCrawler->>AsyncWebCrawler: _check_content_changed(cached_result, url, config)
        alt Content unchanged
            AsyncWebCrawler-->>User: Return cached content
        else Content changed
            AsyncWebCrawler->>AsyncDatabaseManager: adelete_data_point(url)
            AsyncWebCrawler->>WebServer: Fetch fresh content
            AsyncWebCrawler->>AsyncDatabaseManager: Store new result with timestamp
            AsyncWebCrawler-->>User: Return fresh content
        end
    else No cached result or check disabled
        AsyncWebCrawler->>WebServer: Fetch fresh content
        AsyncWebCrawler->>AsyncDatabaseManager: Store new result with timestamp
        AsyncWebCrawler-->>User: Return fresh content
    end
```

Poem

In the warren of code, a cache now can see
If content’s still fresh, or changed as can be.
With timestamps and headers, we sniff out the truth—
To recrawl or reuse, we now have the proof.
Hooray for the bunny, whose logic is keen,
The freshest of data, and caches kept clean!
🐇✨

coderabbitai[bot] · May 26 '25 15:05