llm-security topic
TaaC-AI
AI-driven Threat modeling-as-a-Code (TaaC-AI)
raga-llm-hub
Framework for LLM evaluation, guardrails and security
Open-Prompt-Injection
This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses
last_layer
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
pint-benchmark
A benchmark for prompt injection detection systems.
fast-llm-security-guardrails
The fastest && easiest LLM security and privacy guardrails for GenAI apps.
LLM-security-and-privacy
LLM security and privacy
chatgpt-plugin-eval
LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins
agentic_security
Agentic LLM Vulnerability Scanner / AI red teaming kit