LLM-Prompt-Vulnerabilities
LLM-Prompt-Vulnerabilities copied to clipboard
Prompts Methods to find the vulnerabilities in Generative Models

LLM & Prompt Vulnerabilities
Finding and documentating vulnerabilities in Generative Models based on prompt-engineering
Name | Description | proof |
---|---|---|
Prompt In the Middle (PITM)? | Injecting prompt to access other's output | [Proof] |
Nested Prompt Attack (Need a better name :D) | While Providing nested prompts, the model ignores the initial instructions | [Proof] |