generative-ai-for-beginners icon indicating copy to clipboard operation
generative-ai-for-beginners copied to clipboard

Talk about prompt injection and LLM security

Open simonw opened this issue 1 year ago • 4 comments

This tutorial doesn't yet talk about the security implications of building software on top of LLMs - in particular the prompt injection class of vulnerabilities.

I think this is a problem. Prompt injection is a particularly nasty vulnerability, because if people don't understand it they are almost doomed to build systems that are vulnerable to it.

It also means that a lot of the obvious applications of generative AI are not safe to build. A personal assistant that can summarize and reply to your email for example - that's not safe, because there might be a prompt injection attack in one of the emails that it reads.

I wrote more about this here: https://simonwillison.net/2023/Apr/14/worst-that-can-happen/ (and in this series of posts).

simonw avatar Nov 24 '23 17:11 simonw

👋 Thanks for contributing @simonw! We will review the issue and get back to you soon.

github-actions[bot] avatar Nov 24 '23 17:11 github-actions[bot]

Hey @simonw , great callout! We are working on an additional 4 lessons which includes one one prompt injection / security. Will make sure to point to some of the great content you have given to this community. Keeping this open until we deliver on this.

koreyspace avatar Nov 24 '23 18:11 koreyspace

Great to hear!

simonw avatar Nov 24 '23 18:11 simonw

我就认可

PAN740623 avatar Nov 24 '23 18:11 PAN740623

This issue has not seen any action for a while! Closing for now, but it can be reopened at a later date.

github-actions[bot] avatar Jan 24 '24 08:01 github-actions[bot]