
Learning Fair Representations Issue

Open · MihirMishra23 opened this issue on Jul 15, 2021

The Learning Fair Representations class's signature is LFR(unprivileged_groups, privileged_groups, k=5, Ax=0.01, Ay=1.0, Az=50.0, print_interval=250, verbose=0, seed=None). The only required parameters are unprivileged_groups and privileged_groups, so I initialized it with LFR = preprocessing.LFR(unprivileged_groups, privileged_groups).

However, when I run LFR's fit_transform and then retrain and test my model (sklearn's Random Forest) on the transformed dataset, it predicts the positive outcome for every single datapoint. I have experimented with LFR's initialization parameters, but no change has made a difference.
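For reference, this is roughly the pipeline (a minimal sketch: the Adult dataset, the 'sex' group definitions, and the 70/30 split below are placeholders, not necessarily the exact setup from this issue):

```python
from sklearn.ensemble import RandomForestClassifier

from aif360.algorithms import preprocessing
from aif360.datasets import AdultDataset

# Placeholder group definitions; substitute your own protected attribute.
privileged_groups = [{'sex': 1}]
unprivileged_groups = [{'sex': 0}]

dataset = AdultDataset()
train, test = dataset.split([0.7], shuffle=True)

# Learn fair representations on the training split, then transform both splits.
lfr = preprocessing.LFR(unprivileged_groups, privileged_groups)
train_transf = lfr.fit_transform(train)
test_transf = lfr.transform(test)

# Retrain and evaluate a Random Forest on the transformed data.
clf = RandomForestClassifier(random_state=0)
clf.fit(train_transf.features, train_transf.labels.ravel())
preds = clf.predict(test_transf.features)
print(set(preds))  # observed behaviour: only the favorable label ever appears
```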

Do you have any suggestions on how to fix this so that the model doesn't predict only positive outcomes?

MihirMishra23 · Jul 15, 2021

Got the same problem too. Following closely...

Brianjoroge · Aug 11, 2021

Hi, was this issue resolved? I'm facing the same problem

REncona · May 8, 2022

Hi,

I ran into a similar problem with the German dataset. It is likely because the LFR code uses a binary threshold (0.5) to map its transformed labels to 0. or 1., which is inconsistent with the German dataset's target labels of 1. and 2.

After recoding the dataset labels to 0./1. and initializing the LFR model with appropriate parameters (I used k=3, Ax=0.001, Ay=0.1, Az=0.01), this is my output from the compute_metrics function:

([('Balanced accuracy', 1.0), ('Statistical parity difference', 0.003532182103610615), ('Disparate impact', 1.00376254180602), ('Average odds difference', 0.0), ('Equal opportunity difference', 0.0), ('Theil index', 0.0)])
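For concreteness, a minimal sketch of that workaround (the 'sex' group definitions and the seed are illustrative; the core point is recoding the 1./2. labels to 1./0. before fitting):

```python
from aif360.algorithms.preprocessing import LFR
from aif360.datasets import GermanDataset

# Illustrative group definitions; adjust to your protected attribute.
privileged_groups = [{'sex': 1}]
unprivileged_groups = [{'sex': 0}]

dataset = GermanDataset()  # target loads as 1. (good credit) / 2. (bad credit)

# Recode the labels to 1./0. so that LFR's 0.5 threshold is meaningful.
dataset.labels[dataset.labels == 2.0] = 0.0
dataset.favorable_label = 1.0
dataset.unfavorable_label = 0.0

lfr = LFR(unprivileged_groups=unprivileged_groups,
          privileged_groups=privileged_groups,
          k=3, Ax=0.001, Ay=0.1, Az=0.01, seed=0)
dataset_transf = lfr.fit_transform(dataset)
```

Recent AIF360 releases also expose a threshold argument on LFR.transform, which may be worth tuning if recoding the labels is not an option.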

Hope this helps

NiloyPurkait · May 8, 2024