prompt-injection topic
llm-confidentiality
Whispers in the Machine: Confidentiality in LLM-integrated Systems
lakera-gandalf-solutions
My inputs for the LLM Gandalf made by Lakera
tensor-trust
A prompt injection game to collect data for robust ML research
Prompt-Injection-Testing-Tool
The Prompt Injection Testing Tool is a Python script designed to assess the security of your AI system's prompt handling against a predefined list of user prompts commonly used for injection attacks....
llm-security
Dropbox LLM Security research code and results
Awesome_GPT_Super_Prompting
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
Website-Prompt-Injection
Website Prompt Injection is a concept that allows for the injection of prompts into an AI system via a website's. This technique exploits the interaction between users, websites, and AI systems to exe...
Image-Prompt-Injection
Image Prompt Injection is a Python script that demonstrates how to embed a secret prompt within an image using steganography techniques. This hidden prompt can be later extracted by an AI system for a...
pytector
A Python package designed to detect prompt injection in text inputs utilizing state-of-the-art machine learning models from Hugging Face. The main focus is on ease of use, enabling developers to integ...
Open-Prompt-Injection
Prompt injection attacks and defenses in LLM-integrated applications