ai-security topic
giskard
🐢 Open-Source Evaluation & Testing for ML & LLM systems
AI-Security-and-Privacy-Events
A curated list of academic events on AI Security & Privacy
VulnScan
Performing website vulnerability scanning using OpenAI technologie
sdk-javascript
The official JavaScript SDK for the Modzy Machine Learning Operations (MLOps) Platform.
MIA
Unofficial pytorch implementation of paper: Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures
atlas-data
ATLAS tactics, techniques, and case studies data
llm_rules
RuLES: a benchmark for evaluating rule-following in language models
VideoRLCS
Learning to Identify Critical States for Reinforcement Learning from Videos (Accepted to ICCV'23)
CVPR_2019_PNI
pytorch implementation of Parametric Noise Injection for adversarial defense
Prompt-Injection-Testing-Tool
The Prompt Injection Testing Tool is a Python script designed to assess the security of your AI system's prompt handling against a predefined list of user prompts commonly used for injection attacks....