CyberStrikeAI
CyberStrikeAI is an AI-native security testing platform built in Go. It integrates 100+ security tools, an intelligent orchestration engine, and comprehensive lifecycle management capabilities. Through native MCP protocol and AI agents, it enables end-to-end automation from conversational commands to vulnerability discovery, attack-chain analysis, knowledge retrieval, and result visualization—delivering an auditable, traceable, and collaborative testing environment for security teams.
Interface & Integration Preview
Screenshots (see the img/ directory): Web Console, MCP Integration (stdio mode and MCP management), Attack Chain Visualization, Vulnerability Management, Task Management, and Role Management.
Highlights
- 🤖 AI decision engine with OpenAI-compatible models (GPT, Claude, DeepSeek, etc.)
- 🔌 Native MCP implementation with HTTP/stdio/SSE transports and external MCP federation
- 🧰 100+ prebuilt tool recipes + YAML-based extension system
- 📄 Large-result pagination, compression, and searchable archives
- 🔗 Attack-chain graph, risk scoring, and step-by-step replay
- 🔒 Password-protected web UI, audit logs, and SQLite persistence
- 📚 Knowledge base with vector search and hybrid retrieval for security expertise
- 📁 Conversation grouping with pinning, rename, and batch management
- 🛡️ Vulnerability management with CRUD operations, severity tracking, status workflow, and statistics
- 📋 Batch task management: create task queues, add multiple tasks, and execute them sequentially
- 🎭 Role-based testing: predefined security testing roles (Penetration Testing, CTF, Web App Scanning, etc.) with custom prompts and tool restrictions
Tool Overview
CyberStrikeAI ships with 100+ curated tools covering the whole kill chain:
- Network Scanners – nmap, masscan, rustscan, arp-scan, nbtscan
- Web & App Scanners – sqlmap, nikto, dirb, gobuster, feroxbuster, ffuf, httpx
- Vulnerability Scanners – nuclei, wpscan, wafw00f, dalfox, xsser
- Subdomain Enumeration – subfinder, amass, findomain, dnsenum, fierce
- Network Space Search Engines – fofa_search, zoomeye_search
- API Security – graphql-scanner, arjun, api-fuzzer, api-schema-analyzer
- Container Security – trivy, clair, docker-bench-security, kube-bench, kube-hunter
- Cloud Security – prowler, scout-suite, cloudmapper, pacu, terrascan, checkov
- Binary Analysis – gdb, radare2, ghidra, objdump, strings, binwalk
- Exploitation – metasploit, msfvenom, pwntools, ropper, ropgadget
- Password Cracking – hashcat, john, hashpump
- Forensics – volatility, volatility3, foremost, steghide, exiftool
- Post-Exploitation – linpeas, winpeas, mimikatz, bloodhound, impacket, responder
- CTF Utilities – stegsolve, zsteg, hash-identifier, fcrackzip, pdfcrack, cyberchef
- System Helpers – exec, create-file, delete-file, list-files, modify-file
Basic Usage
Quick Start (One-Command Deployment)
Prerequisites: Go and Python installed (run.sh checks and validates both), plus git to clone the repository.
One-Command Deployment:
git clone https://github.com/Ed1s0nZ/CyberStrikeAI.git
cd CyberStrikeAI
chmod +x run.sh && ./run.sh
The run.sh script will automatically:
- ✅ Check and validate Go & Python environments
- ✅ Create Python virtual environment
- ✅ Install Python dependencies
- ✅ Download Go dependencies
- ✅ Build the project
- ✅ Start the server
First-Time Configuration:
- Configure OpenAI-compatible API (required before first use):
  - Open http://localhost:8080 after launch.
  - Go to Settings and fill in your API credentials:
      openai:
        api_key: "sk-your-key"
        base_url: "https://api.openai.com/v1"   # or https://api.deepseek.com/v1
        model: "gpt-4o"                          # or deepseek-chat, claude-3-opus, etc.
  - Or edit config.yaml directly before launching.
- Login – Use the auto-generated password shown in the console (or set auth.password in config.yaml).
- Install security tools (optional) – Install tools as needed; AI automatically falls back to alternatives when a tool is missing.
      # macOS
      brew install nmap sqlmap nuclei httpx gobuster feroxbuster subfinder amass
      # Ubuntu/Debian
      sudo apt-get install nmap sqlmap nuclei httpx gobuster feroxbuster
Alternative Launch Methods:
# Direct Go run (requires manual setup)
go run cmd/server/main.go
# Manual build
go build -o cyberstrike-ai cmd/server/main.go
./cyberstrike-ai
Note: The Python virtual environment (venv/) is automatically created and managed by run.sh. Tools that require Python (like api-fuzzer, http-framework-test, etc.) will automatically use this environment.
Core Workflows
- Conversation testing – Natural-language prompts trigger toolchains with streaming SSE output.
- Role-based testing – Select from predefined security testing roles (Penetration Testing, CTF, Web App Scanning, API Security Testing, etc.) to customize AI behavior and tool availability. Each role applies custom system prompts and can restrict available tools for focused testing scenarios.
- Tool monitor – Inspect running jobs, execution logs, and large-result attachments.
- History & audit – Every conversation and tool invocation is stored in SQLite with replay.
- Conversation groups – Organize conversations into groups, pin important groups, rename or delete groups via context menu.
- Vulnerability management – Create, update, and track vulnerabilities discovered during testing. Filter by severity (critical/high/medium/low/info), status (open/confirmed/fixed/false_positive), and conversation. View statistics and export findings.
- Batch task management – Create task queues with multiple tasks, add or edit tasks before execution, and run them sequentially. Each task executes as a separate conversation, with status tracking (pending/running/completed/failed/cancelled) and full execution history.
- Settings – Tweak provider keys, MCP enablement, tool toggles, and agent iteration limits.
Built-in Safeguards
- Required-field validation prevents accidental blank API credentials.
- Auto-generated strong passwords when auth.password is empty.
- Unified auth middleware for every web/API call (Bearer token flow).
- Timeout and sandbox guards per tool, plus structured logging for triage.
Advanced Usage
Role-Based Testing
- Predefined roles – the system includes 12+ predefined security testing roles (Penetration Testing, CTF, Web App Scanning, API Security Testing, Binary Analysis, Cloud Security Audit, etc.) in the roles/ directory.
- Custom prompts – each role can define a user_prompt that is prepended to user messages, guiding the AI to adopt specialized testing methodologies and focus areas.
- Tool restrictions – roles can specify a tools list to limit available tools, ensuring focused testing workflows (e.g., the CTF role restricts to CTF-specific utilities).
- Easy role creation – create custom roles by adding YAML files to the roles/ directory. Each role defines name, description, user_prompt, icon, tools, and enabled fields.
- Web UI integration – select roles from a dropdown in the chat interface. Role selection affects both AI behavior and available tool suggestions.
Creating a custom role (example):
- Create a YAML file in roles/ (e.g., roles/custom-role.yaml):
      name: Custom Role
      description: Specialized testing scenario
      user_prompt: You are a specialized security tester focusing on API security...
      icon: "\U0001F4E1"
      tools:
        - api-fuzzer
        - arjun
        - graphql-scanner
      enabled: true
- Restart the server or reload configuration; the role appears in the role selector dropdown.
Tool Orchestration & Extensions
- YAML recipes in tools/*.yaml describe commands, arguments, prompts, and metadata.
- Directory hot-reload – pointing security.tools_dir to a folder is usually enough; inline definitions in config.yaml remain supported for quick experiments.
- Large-result pagination – outputs beyond 200 KB are stored as artifacts retrievable through the query_execution_result tool with paging, filters, and regex search.
- Result compression – multi-megabyte logs can be summarized or losslessly compressed before persisting to keep SQLite lean.
Creating a custom tool (typical flow)
- Copy an existing YAML file from tools/ (for example tools/sample.yaml).
- Update name, command, args, and short_description.
- Describe positional or flag parameters in parameters[] so the agent knows how to build CLI arguments (a sketch of this mapping follows the list).
- Provide a longer description/notes block if the agent needs extra context or post-processing tips.
- Restart the server or reload configuration; the new tool becomes available immediately and can be enabled/disabled from the Settings panel.
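As a rough sketch of how a parameters[] entry can translate into CLI arguments (positional entries fill argv slots by position, flag entries emit a -flag value pair), consider the Go snippet below. The Param struct and buildArgs helper are hypothetical illustrations based on the fields shown in the tools/nmap.yaml example later in this README, not the project's internal types.
package main

import "fmt"

// Param mirrors the fields used in tools/*.yaml parameters[] (hypothetical struct).
type Param struct {
	Name     string
	Flag     string // e.g. "-p"; empty means positional
	Position int    // argv slot, used when Flag is empty
	Required bool
	Value    string // value supplied by the agent at call time
}

// buildArgs appends parameter values to the recipe's base args:
// flags as "-flag value" pairs, positionals in ascending Position order.
func buildArgs(base []string, params []Param) []string {
	args := append([]string{}, base...)
	positional := map[int]string{}
	for _, p := range params {
		if p.Value == "" {
			continue // optional parameter left unset
		}
		if p.Flag != "" {
			args = append(args, p.Flag, p.Value)
		} else {
			positional[p.Position] = p.Value
		}
	}
	for i := 0; i < len(positional); i++ {
		args = append(args, positional[i])
	}
	return args
}

func main() {
	// Example: an nmap-style recipe with a positional target and a -p ports flag.
	params := []Param{
		{Name: "target", Position: 0, Required: true, Value: "192.168.1.1"},
		{Name: "ports", Flag: "-p", Value: "1-1000"},
	}
	fmt.Println(buildArgs([]string{"-sT", "-sV", "-sC"}, params))
	// Output: [-sT -sV -sC -p 1-1000 192.168.1.1]
}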
Attack-Chain Intelligence
- AI parses each conversation to assemble targets, tools, vulnerabilities, and relationships.
- The web UI renders the chain as an interactive graph with severity scoring and step replay.
- Export the chain or raw findings to external reporting pipelines.
MCP Everywhere
- Web mode – ships with HTTP MCP server automatically consumed by the UI.
- MCP stdio mode – go run cmd/mcp-stdio/main.go exposes the agent to Cursor/CLI.
- External MCP federation – register third-party MCP servers (HTTP, stdio, or SSE) from the UI, toggle them per engagement, and monitor their health and call volume in real time.
MCP stdio quick start
- Build the binary (run from the project root):
    go build -o cyberstrike-ai-mcp cmd/mcp-stdio/main.go
- Wire it up in Cursor: open Settings → Tools & MCP → Add Custom MCP, pick Command, then point to the compiled binary and your config. Replace the paths with your local locations; Cursor will launch the stdio server automatically.
    {
      "mcpServers": {
        "cyberstrike-ai": {
          "command": "/absolute/path/to/cyberstrike-ai-mcp",
          "args": ["--config", "/absolute/path/to/config.yaml"]
        }
      }
    }
MCP HTTP quick start
- Ensure config.yaml has mcp.enabled: true and adjust mcp.host/mcp.port if you need a non-default binding (localhost:8081 works well for local Cursor usage).
- Start the main service (./run.sh or go run cmd/server/main.go); the MCP endpoint lives at http://<host>:<port>/mcp (a minimal request sketch follows this list).
- In Cursor, choose Add Custom MCP → HTTP and set Base URL to http://127.0.0.1:8081/mcp.
- Prefer committing the setup via .cursor/mcp.json so teammates can reuse it:
    {
      "mcpServers": {
        "cyberstrike-ai-http": {
          "transport": "http",
          "url": "http://127.0.0.1:8081/mcp"
        }
      }
    }
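If you want to sanity-check the HTTP MCP endpoint outside Cursor, a plain JSON-RPC POST is usually enough. The sketch below assumes the endpoint accepts a standard MCP tools/list request; the exact headers and initialization handshake may differ, so treat it as a quick probe rather than a reference client.
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Standard MCP JSON-RPC 2.0 request asking the server to enumerate its tools.
	body := []byte(`{"jsonrpc":"2.0","id":1,"method":"tools/list","params":{}}`)

	resp, err := http.Post("http://127.0.0.1:8081/mcp", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(out)) // expect a JSON-RPC result listing tool names and schemas
}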
External MCP federation (HTTP/stdio/SSE)
CyberStrikeAI supports connecting to external MCP servers via three transport modes:
- HTTP mode – traditional request/response over HTTP POST
- stdio mode – process-based communication via standard input/output
- SSE mode – Server-Sent Events for real-time streaming communication
To add an external MCP server:
- Open the Web UI and navigate to Settings → External MCP.
- Click Add External MCP and provide the configuration in JSON format.
  HTTP mode example:
    {
      "my-http-mcp": {
        "transport": "http",
        "url": "http://127.0.0.1:8081/mcp",
        "description": "HTTP MCP server",
        "timeout": 30
      }
    }
  stdio mode example:
    {
      "my-stdio-mcp": {
        "command": "python3",
        "args": ["/path/to/mcp-server.py"],
        "description": "stdio MCP server",
        "timeout": 30
      }
    }
  SSE mode example:
    {
      "my-sse-mcp": {
        "transport": "sse",
        "url": "http://127.0.0.1:8082/sse",
        "description": "SSE MCP server",
        "timeout": 30
      }
    }
- Click Save and then Start to connect to the server.
- Monitor the connection status, tool count, and health in real time.
SSE mode benefits:
- Real-time server-to-client streaming via Server-Sent Events
- Suitable for scenarios requiring continuous data streaming
- Lower latency for push-based notifications
A test SSE MCP server is available at cmd/test-sse-mcp-server/ for validation purposes.
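For a feel of what the SSE transport looks like on the wire, the minimal Go sketch below simply opens the stream and prints incoming data: lines. It assumes the SSE endpoint URL from the example above and ignores event:/id: framing, so it is a viewer, not an MCP client.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Open the event stream; SSE is plain HTTP with Content-Type: text/event-stream.
	req, _ := http.NewRequest("GET", "http://127.0.0.1:8082/sse", nil)
	req.Header.Set("Accept", "text/event-stream")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print each data: payload as it arrives; real clients also track event: and id: fields.
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "data:") {
			fmt.Println(strings.TrimSpace(strings.TrimPrefix(line, "data:")))
		}
	}
}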
Knowledge Base
- Vector search – the AI agent can automatically search the knowledge base for relevant security knowledge during conversations using the search_knowledge_base tool.
- Hybrid retrieval – combines vector similarity search with keyword matching for better accuracy (a scoring sketch follows this list).
- Auto-indexing – scans the knowledge_base/ directory for Markdown files and automatically indexes them with embeddings.
- Web management – create, update, and delete knowledge items through the web UI, with category-based organization.
- Retrieval logs – tracks all knowledge retrieval operations for audit and debugging.
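One plausible reading of the hybrid_weight knob (see the Configuration Reference below) is a linear blend of the vector and keyword scores; the Go sketch below illustrates that interpretation only and is not the project's actual ranking code.
package main

import "fmt"

// hybridScore blends vector similarity and keyword-match scores (both assumed in [0,1])
// the way the hybrid_weight option suggests: 1.0 = pure vector, 0.0 = pure keyword.
func hybridScore(vectorSim, keywordScore, hybridWeight float64) float64 {
	return hybridWeight*vectorSim + (1-hybridWeight)*keywordScore
}

func main() {
	// With the default hybrid_weight of 0.7, a strong keyword hit can still
	// lift a document, but vector similarity dominates the ranking.
	fmt.Println(hybridScore(0.82, 0.40, 0.7)) // ≈ 0.694
	fmt.Println(hybridScore(0.55, 0.95, 0.7)) // ≈ 0.670
}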
Quick Start (Using Pre-built Knowledge Base):
- Download the knowledge database – Download the pre-built knowledge database file from GitHub Releases.
- Extract and place – Extract the downloaded knowledge database file (knowledge.db) and place it in the project's data/ directory.
- Restart the service – Restart the CyberStrikeAI service, and the knowledge base will be ready to use immediately without rebuilding the index.
Setting up the knowledge base:
- Enable in config – set knowledge.enabled: true in config.yaml:
      knowledge:
        enabled: true
        base_path: knowledge_base
        embedding:
          provider: openai
          model: text-embedding-v4
          base_url: "https://api.openai.com/v1"   # or your embedding API
          api_key: "sk-xxx"
        retrieval:
          top_k: 5
          similarity_threshold: 0.7
          hybrid_weight: 0.7
- Add knowledge files – place Markdown files in the knowledge_base/ directory, organized by category (e.g., knowledge_base/SQL Injection/README.md).
- Scan and index – use the web UI to scan the knowledge base directory, which automatically imports files and builds vector embeddings.
- Use in conversations – the AI agent will automatically use search_knowledge_base when it needs security knowledge. You can also explicitly ask: "Search the knowledge base for SQL injection techniques".
Knowledge base structure:
- Files are organized by category (directory name becomes the category).
- Each Markdown file becomes a knowledge item with automatic chunking for vector search.
- The system supports incremental updates – modified files are re-indexed automatically.
Automation Hooks
- REST APIs – everything the UI uses (auth, conversations, tool runs, monitor, vulnerabilities, roles) is available over JSON.
- Role APIs – manage security testing roles via /api/roles endpoints: GET /api/roles (list all roles), GET /api/roles/:name (get role), POST /api/roles (create role), PUT /api/roles/:name (update role), DELETE /api/roles/:name (delete role). Roles are stored as YAML files in the roles/ directory and support hot-reload.
- Vulnerability APIs – manage vulnerabilities via /api/vulnerabilities endpoints: GET /api/vulnerabilities (list with filters), POST /api/vulnerabilities (create), GET /api/vulnerabilities/:id (get), PUT /api/vulnerabilities/:id (update), DELETE /api/vulnerabilities/:id (delete), GET /api/vulnerabilities/stats (statistics). A client sketch follows this list.
- Batch Task APIs – manage batch task queues via /api/batch-tasks endpoints: POST /api/batch-tasks (create queue), GET /api/batch-tasks (list queues), GET /api/batch-tasks/:queueId (get queue), POST /api/batch-tasks/:queueId/start (start execution), POST /api/batch-tasks/:queueId/cancel (cancel), DELETE /api/batch-tasks/:queueId (delete), POST /api/batch-tasks/:queueId/tasks (add task), PUT /api/batch-tasks/:queueId/tasks/:taskId (update task), DELETE /api/batch-tasks/:queueId/tasks/:taskId (delete task). Tasks execute sequentially, each creating a separate conversation with full status tracking.
- Task control – pause/resume/stop long scans, re-run steps with new params, or stream transcripts.
- Audit & security – rotate passwords via /api/auth/change-password, enforce short-lived sessions, and restrict MCP ports at the network layer when exposing the service.
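As a starting point for scripting against those endpoints, the Go sketch below lists vulnerabilities over the documented GET /api/vulnerabilities route with a Bearer token. The severity query parameter name and the response shape are assumptions; check the handlers or the browser's network tab for the exact contract.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Bearer token obtained after authenticating to the web UI / auth API.
	token := "YOUR-SESSION-TOKEN"

	// The endpoint is documented; the "severity" filter parameter is an assumption.
	req, _ := http.NewRequest("GET", "http://localhost:8080/api/vulnerabilities?severity=high", nil)
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body)) // JSON list of vulnerability records
}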
Configuration Reference
auth:
  password: "change-me"
  session_duration_hours: 12
server:
  host: "0.0.0.0"
  port: 8080
log:
  level: "info"
  output: "stdout"
mcp:
  enabled: true
  host: "0.0.0.0"
  port: 8081
openai:
  api_key: "sk-xxx"
  base_url: "https://api.deepseek.com/v1"
  model: "deepseek-chat"
database:
  path: "data/conversations.db"
  knowledge_db_path: "data/knowledge.db"   # Optional: separate DB for knowledge base
security:
  tools_dir: "tools"
knowledge:
  enabled: false                  # Enable knowledge base feature
  base_path: "knowledge_base"     # Path to knowledge base directory
  embedding:
    provider: "openai"            # Embedding provider (currently only "openai")
    model: "text-embedding-v4"    # Embedding model name
    base_url: ""                  # Leave empty to use OpenAI base_url
    api_key: ""                   # Leave empty to use OpenAI api_key
  retrieval:
    top_k: 5                      # Number of top results to return
    similarity_threshold: 0.7     # Minimum similarity score (0-1)
    hybrid_weight: 0.7            # Weight for vector search (1.0 = pure vector, 0.0 = pure keyword)
roles_dir: "roles"                # Role configuration directory (relative to config file)
Tool Definition Example (tools/nmap.yaml)
name: "nmap"
command: "nmap"
args: ["-sT", "-sV", "-sC"]
enabled: true
short_description: "Network mapping & service fingerprinting"
parameters:
  - name: "target"
    type: "string"
    description: "IP or domain"
    required: true
    position: 0
  - name: "ports"
    type: "string"
    flag: "-p"
    description: "Range, e.g. 1-1000"
Role Definition Example (roles/penetration-testing.yaml)
name: Penetration Testing
description: Professional penetration testing expert for comprehensive security testing
user_prompt: You are a professional cybersecurity penetration testing expert. Please use professional penetration testing methods and tools to conduct comprehensive security testing on targets, including but not limited to SQL injection, XSS, CSRF, file inclusion, command execution and other common vulnerabilities.
icon: "\U0001F3AF"
tools:
- nmap
- sqlmap
- nuclei
- burpsuite
- metasploit
- httpx
- record_vulnerability
- list_knowledge_risk_types
- search_knowledge_base
enabled: true
Project Layout
CyberStrikeAI/
├── cmd/ # Server, MCP stdio entrypoints, tooling
├── internal/ # Agent, MCP core, handlers, security executor
├── web/ # Static SPA + templates
├── tools/ # YAML tool recipes (100+ examples provided)
├── roles/ # Role configurations (12+ predefined security testing roles)
├── img/ # Docs screenshots & diagrams
├── config.yaml # Runtime configuration
├── run.sh # Convenience launcher
└── README*.md
Basic Usage Examples
Scan open ports on 192.168.1.1
Perform a comprehensive port scan on 192.168.1.1 focusing on 80,443,22
Check if https://example.com/page?id=1 is vulnerable to SQL injection
Scan https://example.com for hidden directories and outdated software
Enumerate subdomains for example.com, then run nuclei against the results
Advanced Playbooks
Load the recon-engagement template, run amass/subfinder, then brute-force dirs on every live host.
Use external Burp-based MCP server for authenticated traffic replay, then pass findings back for graphing.
Compress the 5 MB nuclei report, summarize critical CVEs, and attach the artifact to the conversation.
Build an attack chain for the latest engagement and export the node list with severity >= high.
Changelog
See CHANGELOG.md for detailed version history and all changes.
Recent Highlights
- 2026-01-11 – Role-based testing with predefined security testing roles
- 2026-01-08 – SSE transport mode support for external MCP servers
- 2026-01-01 – Batch task management with queue-based execution
- 2025-12-25 – Vulnerability management and conversation grouping features
- 2025-12-20 – Knowledge base with vector search and hybrid retrieval
Star History
404Starlink
CyberStrikeAI has joined 404Starlink
TCH Top-Ranked Intelligent Pentest Project
Need help or want to contribute? Open an issue or PR—community tooling additions are welcome!