private-ai-resources
A better structure?
I'm a bit confused by the subsections Secure Deep Learning and General Research - I think it would make sense to split the awesome-list subsections into current research directions. I propose splitting it into the following directions (partly inspired by Section 4.2 of the Berkeley view):
- Secure Enclaves / Trusted Hardware
- Differential Privacy
- Adversarial Learning
- Cryptography
- Shared Learning on Confidential Data
What do you guys think?
I think restructuring could make the list easier to navigate. The reason secure deep learning is separated out from the rest of the research is that it is the main focus of this list, and having easy access to those papers is important. Splitting up the research section would make it easier to find papers by topic, but we should make sure to keep the secure DL material front and center.
Thanks for the response! I am curious about the difference between Secure Deep Learning and papers in General Research such as The Secret Sharer: Measuring Unintended Neural Network Memorization & Extracting Secrets and Differentially Private Generative Adversarial Network. I might be mistaken, but are you referring to Secure Deep Learning as used by OpenMined?
So the Secret Sharer paper is more on a theoretical level, describing how neural networks, encrypted or not, leak data; it is not a secure deep learning implementation. The DPGAN paper is also about model memorization, not about training on secure data. Typically, secure deep learning refers to running deep learning models on data which the model cannot see in plaintext.
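To make that concrete, here's a minimal sketch of additive secret sharing, one common building block for secure deep learning. The modulus and function names are illustrative, not any particular library's API - in practice you'd use something like PySyft or TF Encrypted rather than rolling this yourself:

```python
import random

Q = 2**31 - 1  # large modulus for share arithmetic (illustrative choice)

def share(x):
    """Split a plaintext integer into two additive shares mod Q.
    Each share on its own is uniformly random and reveals nothing."""
    s0 = random.randrange(Q)
    s1 = (x - s0) % Q
    return s0, s1

def reconstruct(s0, s1):
    """Only a party holding both shares can recover the plaintext."""
    return (s0 + s1) % Q

def add_shared(a, b):
    """Each party adds its own shares locally; no plaintext is revealed
    to either party at any point during the computation."""
    return (a[0] + b[0]) % Q, (a[1] + b[1]) % Q

# A model operating on shares can compute, e.g., the addition step of a
# linear layer without ever seeing the underlying values:
x = share(42)  # private input
w = share(7)   # private value held by another party
y = add_shared(x, w)
assert reconstruct(*y) == 49
```

The point is that the computation runs entirely on the shares, so the party executing the model never sees the data in plaintext - which is the property the secure deep learning section is about, as opposed to papers measuring what a plaintext-trained model memorizes.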
I see! Would it make sense to rename Secure Deep Learning to Encrypted Deep Learning and have it at the same level as all the other subsections? I am concerned that it's not a good idea to emphasize one research direction (secure deep learning) over the others (e.g. privacy-preserving deep learning).