
Insecure noise primitives should be marked as such and/or removed entirely

Open · TedTed opened this issue on Aug 15, 2024 · 2 comments

Hi folks,

As you already know, the floating-point noise mechanisms provided by diffprivlib are all vulnerable to precision-based attacks, as described in this blog post or this paper. This also affects the Laplace and Gaussian primitives based on the EuroS&P '21 paper, which attempts, unsuccessfully, to mitigate floating-point vulnerabilities by combining multiple noise samples.
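For readers who haven't seen the attack, here is a toy illustration of the underlying problem, using numpy's Laplace sampler as a stand-in for a textbook floating-point implementation. It only hints at the issue: the real attack (Mironov, CCS 2012) inverts the sampling procedure to test whether an observed output is reachable from each candidate input.

```python
import numpy as np

# Draw floating-point Laplace samples around two adjacent inputs and
# compare the exact 64-bit outputs that land in a shared interval.
rng = np.random.default_rng(0)
n = 200_000
out0 = {y for y in rng.laplace(loc=0.0, scale=1.0, size=n) if 0.4 < y < 0.6}
out1 = {y for y in rng.laplace(loc=1.0, scale=1.0, size=n) if 0.4 < y < 0.6}

# The sets of exact output values barely overlap: the bit pattern of a
# single observed output is strongly correlated with the input it was
# computed from, which is what precision-based attacks exploit.
print(len(out0), len(out1), len(out0 & out1))
```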

I assumed this was broadly common knowledge by now, but I was wrong: a DP tool recently published by Oasis Labs reuses the same method, and so falls to the attack published two years ago. It would be nice to warn people more proactively so this kind of thing doesn't happen again. There are several ways this could be done.

  • Removing insecure noise addition primitives altogether from diffprivlib, or replacing them with provably robust approaches (a sketch of one such approach follows this list).
  • Clearly documenting that these primitives are insecure in their docstrings.
  • Documenting known vulnerabilities in a specific page of your documentation and/or on the README file.
  • Retracting or otherwise annotating the EuroS&P paper to warn people about the vulnerability.
  • Adding a note on arXiv and/or updating the paper itself with this warning.
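
To make the first option concrete, here is a minimal sketch of one provably robust direction: a discrete Laplace sampler built as the difference of two geometric samples, which outputs integers and so leaves no floating-point output granularity to attack. The helper names are mine, and the Bernoulli sampler is deliberately coarse; a production implementation should sample its Bernoullis exactly, as in Canonne, Kapralov & Steinke (NeurIPS 2020).

```python
import math
import secrets

def bernoulli(p: float) -> bool:
    # Coarse Bernoulli(p) from 53 uniform bits; good enough for a sketch,
    # but an exact sampler is needed for a real privacy proof.
    return secrets.randbits(53) < int(p * (1 << 53))

def geometric(p: float) -> int:
    # Number of failures before the first success: P(k) = (1 - p)^k * p.
    k = 0
    while not bernoulli(p):
        k += 1
    return k

def discrete_laplace(scale: float) -> int:
    # Difference of two i.i.d. geometrics with 1 - p = exp(-1/scale) gives
    # P(k) proportional to exp(-|k| / scale): a discrete Laplace over the
    # integers, with no floating-point outputs to attack.
    p = 1.0 - math.exp(-1.0 / scale)
    return geometric(p) - geometric(p)
```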

The first three suggestions could also apply to other issues, like the handling of NaN, infinity, or other extreme values.
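
On the NaN/infinity point, a hypothetical guard along these lines (randomise_checked is illustrative, not part of diffprivlib's API) shows the kind of input validation I mean: noise cannot mask a non-finite value, since nan + noise is nan and inf + noise is inf.

```python
import math

def randomise_checked(mechanism, value: float) -> float:
    # Hypothetical wrapper: reject non-finite inputs up front instead of
    # silently propagating them through the noise-addition step.
    if not math.isfinite(value):
        raise ValueError("randomise() requires a finite input value")
    return mechanism.randomise(value)
```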
