New CS proposal: AI Agent Security Cheat Sheet
What is the proposed Cheat Sheet about?
The cheat sheet will provide security guidance for AI agents. Unlike simple chatbots or basic LLM applications, agents operate with increased autonomy: executing code, calling APIs, browsing the web, managing files, and interacting with external services. Agent security therefore differs from what the AI Ops and LLM security Cheat Sheets cover.
What security issues are commonly encountered related to this area?
- Prompt Injection (Direct/Indirect)
- Tool Abuse
- Sensitive Data Leak
- Memory Poisoning
- Denial of Wallet (DoW)
- ...
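To make "Tool Abuse" concrete, here is a minimal sketch of one common mitigation: gating every agent tool call through an allowlist with per-tool argument validation, so an injected instruction can't invoke unregistered tools or pass dangerous arguments. All names here (`ToolCall`, `ALLOWED_TOOLS`, the example tools) are illustrative assumptions, not part of any real agent framework.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    """A tool invocation requested by the agent (illustrative)."""
    name: str
    args: dict

# Only explicitly registered tools may run, each paired with a validator
# that rejects dangerous arguments before execution.
ALLOWED_TOOLS = {
    "read_file": lambda args: not args.get("path", "").startswith("/etc"),
    "http_get": lambda args: args.get("url", "").startswith("https://"),
}

def is_permitted(call: ToolCall) -> bool:
    """Deny by default: unknown tools and failed validation are both rejected."""
    validator = ALLOWED_TOOLS.get(call.name)
    return validator is not None and validator(call.args)
```

A deny-by-default check like this runs outside the model, so a successful prompt injection can at most request a call that the guard then refuses.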
What is the objective of the Cheat Sheet?
The objective is to provide developers with:

- A clear understanding of the unique attack surface that AI agents present compared to traditional applications or basic LLM chatbots
- Actionable security items
- A practical do's and don'ts checklist
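As one example of the kind of actionable item the sheet could include, here is a sketch of a spend-cap guard against Denial of Wallet: the agent loop charges each model/API call against a per-task budget and aborts when it is exhausted. The class, method names, and dollar figures are assumptions for illustration only.

```python
class BudgetExceeded(Exception):
    """Raised when a task's cumulative cost passes its cap."""

class CostGuard:
    """Tracks cumulative spend for one agent task (illustrative sketch)."""

    def __init__(self, max_usd: float):
        self.max_usd = max_usd
        self.spent = 0.0

    def charge(self, usd: float) -> None:
        """Record a cost; abort the task once the budget is exceeded."""
        self.spent += usd
        if self.spent > self.max_usd:
            raise BudgetExceeded(
                f"spent ${self.spent:.2f} > budget ${self.max_usd:.2f}"
            )
```

Capping spend per task (rather than per call) matters because DoW attacks typically trigger many individually cheap calls in a loop.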
What other resources exist in this area?
- OWASP Top 10 for LLM Applications (https://owasp.org/www-project-top-10-for-large-language-model-applications/): Covers general LLM risks but lacks agent-specific guidance
- LLM Prompt Injection Prevention Cheat Sheet (https://cheatsheetseries.owasp.org/cheatsheets/LLM_Prompt_Injection_Prevention_Cheat_Sheet.html): Focuses on prompt injection but doesn't cover tool security, memory, or multi-agent scenarios
- Secure AI Model Ops Cheat Sheet (https://cheatsheetseries.owasp.org/cheatsheets/Secure_AI_Model_Ops_Cheat_Sheet.html): Covers ML model operations, not agent runtime security
None of these resources provide actionable security guidance specifically for agentic AI systems with tool use, persistent memory, and autonomous action capabilities.
I think this topic is definitely worthy of a cheat sheet.
Then let's do this! https://github.com/OWASP/CheatSheetSeries/pull/1926 :)
Great idea, it would be nice to collaborate on this one.