adversarial debiasing - how to set statistical parity or equality of odds

Open. ngshya opened this issue 3 years ago • 1 comment

Hi there!

In the paper, the authors mention a strategy for reaching demographic parity or equality of odds by slightly changing the architecture of the network. How can this option be set in this repository's implementation? Does the network optimize for demographic parity or for equality of odds by default?

This option is not mentioned in the documentation: https://aif360.readthedocs.io/en/latest/modules/generated/aif360.sklearn.inprocessing.AdversarialDebiasing.html#aif360.sklearn.inprocessing.AdversarialDebiasing
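
For context, this is roughly how I am constructing the estimator (a minimal sketch; the parameter names are my reading of the linked docs page, and none of them seems to select the fairness notion):

```python
from aif360.sklearn.inprocessing import AdversarialDebiasing

# Minimal sketch, assuming the constructor documented at the link above.
# 'sex' is a placeholder protected attribute; no parameter here appears to
# choose between demographic parity and equality of odds.
clf = AdversarialDebiasing(
    prot_attr='sex',
    adversary_loss_weight=0.1,
    num_epochs=50,
    batch_size=128,
    classifier_num_hidden_units=200,
    debias=True,
    random_state=1234,
)
# clf.fit(X_train, y_train)  # X_train: DataFrame with 'sex' in its index
```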

Thank you in advance for your answer!

ngshya commented on Sep 14 '21 at 07:09

Hi everyone!

The original paper says: "For Demographic Parity, the adversary gets the predicted label Ŷ... For Equality of Odds, the adversary gets Ŷ and the true label Y. For Equality of Opportunity on a given class y, we can restrict the training set of the adversary to training examples where Y = y".

The paper evaluates only the Equality of Odds implementation for a classification task.

Reading the AIF360 `_adversary_model` definition, we can see that it takes as input `pred_logits` (Ŷ) and `true_labels` (Y), so the AIF360 implementation matches the Equality of Odds setup described in the original work. This line of code implements the adversary described in the paper: `pred_protected_attribute_logit = tf.matmul(tf.concat([s, s * true_labels, s * (1.0 - true_labels)], axis=1), W2) + b2`.
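
To make the difference concrete, here is a small self-contained sketch (not the AIF360 source, just an illustration built around the quoted line) of the adversary head for the Equality of Odds case, plus what a Demographic Parity variant would look like if the true labels were dropped; the names `W`, `b` and `c` are illustrative:

```python
import tensorflow as tf

def adversary_eq_odds(pred_logits, true_labels, W, b, c):
    """Equality of Odds adversary: sees both the classifier logits (Y_hat)
    and the true labels (Y), mirroring the quoted line above. W: (3, 1)."""
    s = tf.sigmoid((1.0 + tf.abs(c)) * pred_logits)  # shape (batch, 1)
    feats = tf.concat([s, s * true_labels, s * (1.0 - true_labels)], axis=1)
    return tf.matmul(feats, W) + b  # logit for the protected attribute

def adversary_dem_parity(pred_logits, W, b, c):
    """Demographic Parity adversary: sees only Y_hat, so the two
    label-conditioned terms are dropped. W: (1, 1)."""
    s = tf.sigmoid((1.0 + tf.abs(c)) * pred_logits)
    return tf.matmul(s, W) + b
```

So, as far as I can tell, the released implementation hard-codes the Equality of Odds adversary, and getting the Demographic Parity behaviour would require modifying `_adversary_model` (or subclassing it) so that the adversary no longer receives `true_labels`.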

limafernando commented on Jul 12 '22 at 20:07