
Privacy Meter: An open-source library to audit data privacy in statistical and machine learning algorithms.

Results: 36 ml_privacy_meter issues (sorted by recently updated)

First of all, thanks a lot for your open-source contributions. I'm having some trouble following the code: you set num_datapoints (default 5,000) as the number of training data points for...

Will you develop versions based on federated learning and unsupervised learning in the future? I am very interested in your papers.

Opening a PR for feedback.

Bumps [numpy](https://github.com/numpy/numpy) from 1.21.0 to 1.22.0. Release notes Sourced from numpy's releases. v1.22.0 NumPy 1.22.0 Release Notes NumPy 1.22.0 is a big release featuring the work of 153 contributors spread...

dependencies

Bumps [numpy](https://github.com/numpy/numpy) from 1.18.1 to 1.22.0. Release notes Sourced from numpy's releases. v1.22.0 NumPy 1.22.0 Release Notes NumPy 1.22.0 is a big release featuring the work of 153 contributors spread...

dependencies

The fix has been demonstrated in the reference metric tutorial.

Hi all, I'm trying to attack my pre-trained [ResNet20.zip](https://github.com/privacytrustlab/ml_privacy_meter/files/6685533/ResNet20.zip) model with the following model architecture: [ResNet20_architecture.txt](https://github.com/privacytrustlab/ml_privacy_meter/files/6685433/ResNet20_architecture.txt) For training, I used the same procedure as suggested in the tutorial. To...

Hi, I would like to know whether there is a PyTorch implementation of this, or whether any future work on it in PyTorch is planned.

I am implementing a black-box attack against the basic binary TensorFlow classifier with tabular data below. Here is the notebook: [credit_default.ipynb.zip](https://github.com/privacytrustlab/ml_privacy_meter/files/8495835/credit_default.ipynb.zip) It errors out due to a size incompatibility during the...

Hi, I'm reading the paper [Enhanced Membership Inference Attacks against Machine Learning Models](https://arxiv.org/pdf/2111.09679.pdf). It is very well written! I'm wondering whether you have plans to open-source the code of "MIA...