crawl4ai
feat: Add prompt-driven recursive crawler script
This commit introduces a new example script, prompt_driven_crawler.py, located in docs/examples/.
The script enables you to perform a recursive crawl starting from a given URL. It uses an LLM (OpenAI GPT model) to extract content relevant to your provided prompt.
Key features:
- Takes a start URL and a natural language prompt as input.
- Recursively crawls pages up to a specified depth.
- Uses `KeywordRelevanceScorer` to guide the crawler toward links relevant to your prompt.
- Employs `LLMContentFilter` to extract information pertinent to the prompt from each crawled page.
- Saves the extracted content for each page as a separate Markdown file in your specified output directory (`output_dir/markdown/`).
- Generates a `scraped_data.json` file in `output_dir/` summarizing the crawl, including URLs, prompts, paths to markdown files, page titles, and relevance scores.
- Requires an `OPENAI_API_KEY` environment variable.
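The recursive, relevance-guided crawl described above can be sketched in plain Python. This is a minimal illustration of the idea, not the script's actual code: the `fetch` callback, the keyword scorer, and the threshold are all hypothetical stand-ins for what crawl4ai's `KeywordRelevanceScorer` and deep-crawl strategy do internally.

```python
from collections import deque
from urllib.parse import urljoin

def score_relevance(text: str, keywords: list[str]) -> float:
    """Fraction of prompt keywords found in the text (toy scorer)."""
    text = text.lower()
    hits = sum(1 for kw in keywords if kw.lower() in text)
    return hits / len(keywords) if keywords else 0.0

def crawl(start_url: str, keywords: list[str], fetch,
          max_depth: int = 2, threshold: float = 0.5) -> list[tuple[str, float]]:
    """Breadth-first crawl up to max_depth, following only links whose
    anchor text scores above the relevance threshold.

    `fetch(url)` is a caller-supplied function returning
    (page_text, [(link_href, anchor_text), ...]).
    """
    seen = {start_url}
    queue = deque([(start_url, 0)])
    results = []
    while queue:
        url, depth = queue.popleft()
        text, links = fetch(url)
        results.append((url, score_relevance(text, keywords)))
        if depth >= max_depth:
            continue
        for href, anchor in links:
            link = urljoin(url, href)
            # Prune irrelevant branches before enqueueing them.
            if link not in seen and score_relevance(anchor, keywords) >= threshold:
                seen.add(link)
                queue.append((link, depth + 1))
    return results
```

In the real script the scoring and depth limiting are delegated to crawl4ai, and the LLM filter runs on each fetched page; the sketch only shows the traversal-and-pruning shape.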
A prompt_driven_crawler_README.md has been added to explain setup and usage.
The openai>=1.0.0 dependency has also been added to the main requirements.txt file.
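The output layout the commit describes (one Markdown file per page under `output_dir/markdown/`, plus a `scraped_data.json` summary) can be sketched as follows. The helper names and the exact JSON field names are assumptions for illustration; only the file locations and the recorded attributes (URL, prompt, markdown path, title, relevance score) come from the description above.

```python
import json
from pathlib import Path

def save_page(output_dir: str, url: str, title: str, markdown: str,
              prompt: str, score: float, summary: list) -> Path:
    """Write one page's extracted markdown and record it in the crawl summary."""
    md_dir = Path(output_dir) / "markdown"
    md_dir.mkdir(parents=True, exist_ok=True)
    # Derive a filesystem-safe filename from the URL (toy slugging).
    slug = "".join(c if c.isalnum() else "_" for c in url).strip("_")
    md_path = md_dir / f"{slug}.md"
    md_path.write_text(markdown, encoding="utf-8")
    summary.append({
        "url": url,
        "prompt": prompt,
        "markdown_file": str(md_path),
        "title": title,
        "relevance_score": score,
    })
    return md_path

def write_summary(output_dir: str, summary: list) -> Path:
    """Dump the accumulated crawl records to scraped_data.json."""
    out = Path(output_dir) / "scraped_data.json"
    out.write_text(json.dumps(summary, indent=2), encoding="utf-8")
    return out
```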
Checklist:
- [x] My code follows the style guidelines of this project
- [x] I have performed a self-review of my own code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have made corresponding changes to the documentation
- [x] I have added/updated unit tests that prove my fix is effective or that my feature works
- [x] New and existing unit tests pass locally with my changes
Summary by CodeRabbit
- **New Features**
  - Introduced a prompt-driven web crawler script that uses AI to extract relevant information from websites based on a user-provided prompt.
  - Supports command-line configuration for start URL, prompt, output directory, and crawl depth.
  - Saves extracted content as Markdown files and generates a JSON summary of results.
- **Documentation**
  - Added a README with usage instructions, prerequisites, and output details for the new crawler script.
- **Chores**
  - Added the OpenAI library as a required dependency.
> [!CAUTION]
> Review failed. The pull request is closed.
Walkthrough
A new example script, prompt_driven_crawler.py, and its accompanying README have been added, demonstrating a prompt-driven asynchronous web crawler using the crawl4ai library and OpenAI LLMs. The crawler extracts relevant content based on user prompts and saves results as Markdown and JSON files. The OpenAI Python package is now listed as a dependency.
Changes
| File(s) | Change Summary |
|---|---|
| docs/examples/prompt_driven_crawler.py | Added an asynchronous script for prompt-driven web crawling, content extraction, and markdown/JSON output. |
| docs/examples/prompt_driven_crawler_README.md | Added a README explaining usage, prerequisites, output structure, and instructions for the new crawler example script. |
| requirements.txt | Added openai>=1.0.0 as a new dependency. |
Assessment against linked issues
| Objective (Issue #) | Addressed | Explanation |
|---|---|---|
| Timeout setting (#123) | ❌ | No timeout configuration or handling is present. |
Poem
A rabbit with prompts, a crawler at hand,
Hops through the web, across digital land.
Markdown and JSON, neatly in tow,
With OpenAI’s help, the insights will flow.
New scripts and docs, dependencies too—
This bunny’s web quest is ready for you! 🐇✨
📜 Recent review details
Configuration used: CodeRabbit UI · Review profile: CHILL · Plan: Pro · Cache: Disabled due to data retention organization setting · Knowledge Base: Disabled due to data retention organization setting
📥 Commits
Reviewing files that changed from the base of the PR and between 897e0173618d20fea5d8952ccdbcdad0febc0fee and c5a0f330114919e6d1d0071dc31d6399105e9c3f.
📒 Files selected for processing (3)
- docs/examples/prompt_driven_crawler.py (1 hunks)
- docs/examples/prompt_driven_crawler_README.md (1 hunks)
- requirements.txt (1 hunks)