FEAT Add LibrAI/do-not-answer Dataset
Name: LibrAI/do-not-answer
Link: https://huggingface.co/datasets/LibrAI/do-not-answer
Relevant Columns: "risk_area", "types_of_harm", "specific_harms", "question"
Originally posted by @divyaamin9825 in #429
Describe the solution you'd like
This dataset should be available within PyRIT: https://huggingface.co/datasets/LibrAI/do-not-answer
It is also available on GitHub: https://github.com/libr-ai/do-not-answer
Associated paper: https://arxiv.org/abs/2308.13387
Additional context
There are examples of how PyRIT interacts with other datasets here: https://github.com/search?q=repo%3AAzure%2FPyRIT%20%23%20The%20dataset%20sources%20can%20be%20found%20at%3A&type=code
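Below is a minimal sketch of what fetching this dataset could look like, assuming the Hugging Face `datasets` library is installed (`pip install datasets`). The function name `fetch_do_not_answer_dataset` is hypothetical; an actual PyRIT integration would follow the existing fetch_* conventions linked above rather than this exact shape.

```python
from datasets import load_dataset


def fetch_do_not_answer_dataset() -> list[dict]:
    """Load LibrAI/do-not-answer and keep only the columns relevant here."""
    # The dataset is published with a single "train" split on Hugging Face.
    data = load_dataset("LibrAI/do-not-answer", split="train")

    prompts = []
    for row in data:
        prompts.append(
            {
                "prompt": row["question"],
                "risk_area": row["risk_area"],
                "types_of_harm": row["types_of_harm"],
                "specific_harms": row["specific_harms"],
            }
        )
    return prompts


if __name__ == "__main__":
    prompts = fetch_do_not_answer_dataset()
    print(f"Loaded {len(prompts)} prompts; first risk area: {prompts[0]['risk_area']}")
```

Keeping the harm-taxonomy columns alongside each question would let PyRIT users filter or group prompts by risk area rather than sending the full set.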
[[Content Warning: Prompts are aimed at provoking the model and may contain offensive content.]]
Additional Disclaimer: Given the content of these prompts, you may want to check with your relevant legal department before trying them against LLMs.