openai-cookbook
Cookbook for insecure code detection
Summary
This PR adds a cookbook and dataset for the vulnerability detection use case proposed in #1100.
Motivation
LLMs like GPT-4 have shown proficiency in classifying code as secure or insecure. This notebook demonstrates prompt techniques that improve classification accuracy from 67% to 80%, potentially helping developers strengthen their secure coding practices. For more details, refer to this blog post. This use case demonstrates:
- Using LLMs to identify and/or correct software vulnerabilities
- Experimenting with multiple prompt techniques (zero-shot, few-shot, KNN few-shot) and measuring performance impacts
- Using the OpenAI API to perform binary classification
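The binary-classification step above can be sketched roughly as follows. This is a minimal zero-shot illustration, not the notebook's exact code: the prompt wording, the `classify_snippet` helper, and the model name are assumptions. Constraining the model to a one-word answer makes the reply easy to parse into a binary label.

```python
SYSTEM_PROMPT = (
    "You are a security expert. Classify the following code snippet as "
    "'insecure' if it contains a vulnerability, otherwise as 'secure'. "
    "Answer with exactly one word."
)

def parse_label(reply: str) -> bool:
    """Map the model's one-word reply to True (insecure) / False (secure)."""
    return reply.strip().lower().startswith("insecure")

def classify_snippet(code: str, model: str = "gpt-4") -> bool:
    """Zero-shot binary classification of a code snippet via the OpenAI API."""
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,  # assumed model name; swap in whichever model you use
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": code},
        ],
        temperature=0,  # deterministic output helps reproducible evaluation
    )
    return parse_label(resp.choices[0].message.content)
```

Few-shot and KNN few-shot variants follow the same shape: they prepend labeled example snippets (for KNN, the nearest neighbors of the query snippet by embedding similarity) to the message list before the snippet under test.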
For new content
- [x] I have added a new entry in registry.yaml (and, optionally, in authors.yaml) so that my content renders on the cookbook website.
- [x] I have conducted a self-review of my content based on the contribution guidelines:
- [x] Relevance: This content is related to building with OpenAI technologies and is useful to others.
- [x] Uniqueness: I have searched for related examples in the OpenAI Cookbook, and verified that my content offers new insights or unique information compared to existing documentation.
- [x] Spelling and Grammar: I have checked for spelling or grammatical mistakes.
- [x] Clarity: I have done a final read-through and verified that my submission is well-organized and easy to understand.
- [x] Correctness: The information I include is correct and all of my code executes successfully.
- [x] Completeness: I have explained everything fully, including all necessary references and citations.