LLM Security and Privacy
A curated list of papers and tools covering LLM threats and vulnerabilities, from both a security and a privacy standpoint. Summaries, key takeaways, and additional details for each paper can be found in the paper-summaries folder.
The main.bib file contains the latest citations for the papers listed here.
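As a quick sketch, entries from main.bib can be pulled into a LaTeX document in the usual BibTeX way (the citation key below is hypothetical, not an actual key from the file):

```latex
\documentclass{article}
\begin{document}
% Cite an entry from the repository's main.bib (key shown is illustrative)
LLM applications face prompt-injection risks~\cite{owasp2023llmtop10}.

\bibliographystyle{plain}
\bibliography{main} % resolves to main.bib placed alongside this document
\end{document}
```

Compile with `pdflatex`, then `bibtex`, then `pdflatex` twice to resolve the citation.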
Overview Figure: A taxonomy of current security and privacy threats against deep learning models and, by extension, Large Language Models (LLMs).
Table of Contents
- LLM Security and Privacy
- Table of Contents
- Papers
- Frameworks & Taxonomies
- Tools
- News Articles, Blog Posts, and Talks
- Contributing
- Contact
Papers
Frameworks & Taxonomies
- OWASP Top 10 for Large Language Model Applications
- MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems)
- NIST AI 100-2 E2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations
Tools
News Articles, Blog Posts, and Talks
- Is Generative AI Dangerous?
- Adversarial examples in the age of ChatGPT
- LLMs in Security: Demos vs Deployment?
- Free AI Programs Prone to Security Risks, Researchers Say
- Why 'Good AI' Is Likely The Antidote To The New Era Of AI Cybercrime
- Meet PassGPT, the AI Trained on Millions of Leaked Passwords
Contributing
If you are interested in contributing to this repository, please see CONTRIBUTING.md for the contribution guidelines.
A list of current contributors is found HERE.
Contact
For any questions regarding this repository and/or potential (research) collaborations, please contact Briland Hitaj.