AIF360
Learning Fair Representations Issue
The Learning Fair Representations class (LFR) has the signature LFR(unprivileged_groups, privileged_groups, k=5, Ax=0.01, Ay=1.0, Az=50.0, print_interval=250, verbose=0, seed=None). Since the only required parameters are unprivileged_groups and privileged_groups, I initialized it with LFR = preprocessing.LFR(unprivileged_groups, privileged_groups).
However, when I run LFR's fit_transform and then retrain and test the model (sklearn's Random Forest) on the transformed dataset, it predicts the positive outcome for every single data point. I have experimented with the LFR initialization parameters, but no change has made a difference.
Do you have any suggestions on how to fix this so that the downstream model doesn't predict only positive outcomes?
Got the same problem too. Following closely...
Hi, was this issue resolved? I'm facing the same problem
Hi,
I ran into a similar problem with the German dataset. It is likely because the LFR code applies a binary threshold (0.5) to map transformed labels to 0.0 or 1.0. This causes the inconsistency: the German dataset's target labels are 1.0 and 2.0, so both classes exceed the threshold and everything is mapped to the positive outcome.
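A minimal, dependency-free sketch of why this happens (the binarize helper is my own illustration of the 0.5-threshold behavior, not the actual LFR source):

```python
def binarize(yhat, threshold=0.5):
    # LFR-style post-processing: anything above the threshold becomes 1.0
    return [1.0 if y > threshold else 0.0 for y in yhat]

# German-credit-style labels: 1.0 = good credit, 2.0 = bad credit.
# Both values exceed 0.5, so every label collapses to the positive class.
labels = [1.0, 2.0, 2.0, 1.0]
print(binarize(labels))  # [1.0, 1.0, 1.0, 1.0]
```

With labels already encoded as {0.0, 1.0}, the same threshold would preserve both classes.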
After recoding the dataset labels and initializing the LFR model with appropriate parameters (I used k=3, Ax=0.001, Ay=0.1, Az=0.01), this is my output from the compute_metrics function:
([('Balanced accuracy', 1.0), ('Statistical parity difference', 0.003532182103610615), ('Disparate impact', 1.00376254180602), ('Average odds difference', 0.0), ('Equal opportunity difference', 0.0), ('Theil index', 0.0)])
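The recoding step above can be sketched as follows (the helper name is mine; in practice you would apply this to the labels array of the aif360 dataset object before fitting LFR):

```python
def recode_german_labels(y):
    # German credit labels: 1.0 = good credit (favorable), 2.0 = bad credit.
    # Recode to the {0.0, 1.0} encoding that LFR's 0.5 threshold expects.
    return [1.0 if v == 1.0 else 0.0 for v in y]

print(recode_german_labels([1.0, 2.0, 2.0, 1.0]))  # [1.0, 0.0, 0.0, 1.0]
```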
Hope this helps