SecLists
feat (LLM-testing): Add directories and files related to LLM security testing.
These prompts are crafted to challenge the models in various ways, including but not limited to their ability to follow ethical guidelines, maintain data privacy, resist generating harmful or sensitive content, and avoid being exploited to perform unauthorized tasks.
- Please feel free to change the directory location or name to something more appropriate!
- Scalable Extraction of Training Data from (Production) Language Models.pdf
- LLM Hacker Handbook
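Since these wordlists are meant to be replayed against a model, a minimal harness might look like the sketch below. The wordlist path, endpoint URL, request payload shape, and refusal markers are all illustrative assumptions and are not defined by this PR.

```python
# Hypothetical harness: replay each prompt from an LLM-testing wordlist
# against a chat endpoint and flag responses that contain no refusal.
# Path, endpoint, payload shape, and refusal markers are assumptions.
import requests

WORDLIST = "Ai/LLM_Testing/prompt-injection.txt"  # hypothetical path in the repo
ENDPOINT = "http://localhost:8080/v1/chat"        # hypothetical model endpoint
REFUSAL_MARKERS = ("i can't", "i cannot", "i'm sorry")


def load_prompts(path: str) -> list[str]:
    """Read one prompt per line, skipping blanks and comment lines."""
    with open(path, encoding="utf-8") as handle:
        return [line.strip() for line in handle
                if line.strip() and not line.startswith("#")]


def main() -> None:
    for prompt in load_prompts(WORDLIST):
        reply = requests.post(ENDPOINT, json={"prompt": prompt}, timeout=30)
        text = reply.json().get("response", "").lower()
        # A missing refusal marker is only a hint, not proof of a bypass;
        # flagged prompts still need manual review.
        if not any(marker in text for marker in REFUSAL_MARKERS):
            print(f"[review] {prompt!r}")


if __name__ == "__main__":
    main()
```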
Thanks for making a pull request! Some of these prompts look really interesting. They will certainly be of use to AI security engineers.
There's now a merge conflict @emmanuel-londono
@ItsIgnacioPortal Thanks for your suggestions; I've applied them!
@g0tmi1k Conflicts should be resolved!
I've opened a final pull request in your fork of SecLists. Once that PR is merged, I believe this PR will be ready to merge. Again, thank you for contributing @emmanuel-londono!
@ItsIgnacioPortal Merged!