ml_privacy_meter
MIA blackbox attack accuracy repeats same value
Hi, when it seems that the MIA can't be successful, I often get an attack accuracy of exactly 0.500200092792511. Did you see this value in your experiments, too? If so, is there a known reason why the implementation returns exactly this value? I'm using the AlexNet tutorial Python file with the following settings:
import tensorflow as tf
import ml_privacy_meter

input_shape = (32, 32, 3)

cmodelA = tf.keras.models.load_model(cprefix)

saved_path = "datasets/cifar100_train.txt.npy"
dataset_path = "datasets/cifar100.txt"

datahandlerA = ml_privacy_meter.utils.attack_data.attack_data(
    dataset_path=dataset_path,
    member_dataset_path=saved_path,
    batch_size=100,
    attack_percentage=10,
    input_shape=input_shape,
    normalization=True)

attackobj = ml_privacy_meter.attack.meminf.initialize(
    target_train_model=cmodelA,
    target_attack_model=cmodelA,
    train_datahandler=datahandlerA,
    attack_datahandler=datahandlerA,
    layers_to_exploit=[72],  # last layer of my ResNet20
    device=None,
    epochs=3,
    model_name=cprefix)
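For reference, since layers_to_exploit takes a layer index, it may be worth double-checking that 72 really points at the last layer of the loaded model. A minimal sketch (assuming the standard Keras API, with cprefix as in the snippet above) to list the layer indices:

import tensorflow as tf

cmodelA = tf.keras.models.load_model(cprefix)  # cprefix as above
print("total layers:", len(cmodelA.layers))
for idx, layer in enumerate(cmodelA.layers):
    print(idx, layer.name)
# Compare this count against the value passed to layers_to_exploit
# (check whether the tool expects 0-based or 1-based indices).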
Hi @chris-prenode, just to confirm: when you say
"if it seems that the MIA can't be successful"
are you speaking about the attack accuracy?
Yes, I am.
In my experiment, if the attack only uses the output of the last layer and the ohe_label (one-hot encoded label), i.e. the black-box setting, the attack accuracy of the attack model on the verification set is very low (around 50%). This is confusing to me, because the black-box attack accuracy reported in the paper is 74.6%.
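A quick diagnostic, independent of ml_privacy_meter, is to compare the target model's average confidence on the true class for members versus non-members. In the sketch below, member_x/member_y and nonmember_x/nonmember_y are hypothetical numpy arrays of training-member and held-out samples with integer class labels, and true_class_confidence is a helper introduced just for this check. If the two values come out nearly identical, a black-box attack that only sees the last-layer output would be expected to hover around 50%:

import numpy as np

def true_class_confidence(model, x, y):
    # Average softmax probability the model assigns to the true class
    # (y is assumed to be a 1-D array of integer class labels).
    probs = model.predict(x, batch_size=100)
    return float(np.mean(probs[np.arange(len(y)), y]))

# Hypothetical member / non-member splits drawn from the same dataset as above.
print("member confidence:    ", true_class_confidence(cmodelA, member_x, member_y))
print("non-member confidence:", true_class_confidence(cmodelA, nonmember_x, nonmember_y))
# If these two numbers are close, the last-layer output carries little
# membership signal and an attack accuracy near 0.5 (random guessing) is plausible.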