Consider letting metrics define how their values are compared
See how algofairness does it:
```python
import math

class DIBinary(Metric):
    def __init__(self):
        # ...

    def calc(self, actual, predicted, dict_of_sensitive_lists, single_sensitive_name,
             unprotected_vals, positive_pred):
        # ...
        return DI

    def is_better_than(self, val1, val2):
        # A DI value is better the closer it is to the ideal ratio of 1.0.
        dist1 = math.fabs(1.0 - val1)
        dist2 = math.fabs(1.0 - val2)
        return dist1 <= dist2
```
Each metric exposes an `is_better_than()` method that compares two of its own results and tells you which one is better. For `DIBinary`, the value closer to the ideal disparate-impact ratio of 1.0 wins, so a caller never needs to know whether higher or lower is better for a given metric.
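For illustration, here is a minimal sketch of how a harness could use such a method to pick the best of several results. The standalone `DIBinary` is simplified (no base class, no `calc()`), and the `best()` helper and the example scores are hypothetical, not part of either library:

```python
import math


class DIBinary:
    """Simplified disparate-impact metric whose ideal value is 1.0 (parity)."""

    def is_better_than(self, val1, val2):
        # The value closer to the ideal DI of 1.0 wins.
        return math.fabs(1.0 - val1) <= math.fabs(1.0 - val2)


def best(metric, results):
    """Return the result the metric itself judges best (hypothetical helper)."""
    winner = results[0]
    for candidate in results[1:]:
        if metric.is_better_than(candidate, winner):
            winner = candidate
    return winner


# DI scores from three hypothetical models: 0.95 is closest to parity.
print(best(DIBinary(), [0.7, 0.95, 1.2]))  # -> 0.95
```

The same `best()` call would work unchanged for a metric where lower is better (e.g., an error rate), since each metric carries its own comparison logic.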