
Possible reversed subtraction in the `.ATE()` method

Open AlxndrMlk opened this issue 1 year ago • 0 comments

Hi @rpryzant

Thank you for sharing your implementation of CausalBERT!

I tried it recently on a number of synthetic datasets, and I found the results from the `.ATE()` method surprising.

The estimated effect was systematically reversed compared to the expectation.

I went through the code, and it seems to me that the subtraction in the return statement of the `.ATE()` method may be reversed:

    def ATE(self, C, W, Y=None, platt_scaling=False):
        Q_probs, _, Ys = self.inference(W, C, outcome=Y)
        if platt_scaling and Y is not None:
            Q0 = platt_scale(Ys, Q_probs[:, 0])[:, 0]
            Q1 = platt_scale(Ys, Q_probs[:, 1])[:, 1]
        else:
            Q0 = Q_probs[:, 0]
            Q1 = Q_probs[:, 1]

        return np.mean(Q0 - Q1)

According to the original paper, `Q1` represents the outcome under treatment (Y | do(T=1)), while `Q0` represents the outcome under no treatment (Y | do(T=0)).

We usually define the ATE as E[Y | do(T=1)] - E[Y | do(T=0)] (for a binary outcome, E[Y] = P(Y = 1)).

The `.ATE()` method, however, returns `np.mean(Q0 - Q1)`, which appears to reverse the subtraction and therefore flips the sign of the estimate.
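To make the sign issue concrete, here is a minimal sketch with made-up `Q0`/`Q1` values standing in for the model's predicted potential-outcome probabilities (these numbers are hypothetical, not output from CausalBERT):

```python
import numpy as np

# Hypothetical predicted potential outcomes:
# Q0 ~ E[Y | do(T=0)], Q1 ~ E[Y | do(T=1)].
# Here the treatment raises the outcome probability by ~0.4 for every unit.
Q0 = np.array([0.20, 0.30, 0.25])
Q1 = np.array([0.60, 0.70, 0.65])

ate_current = np.mean(Q0 - Q1)   # what .ATE() returns today
ate_expected = np.mean(Q1 - Q0)  # ATE under the usual convention

# The two differ only in sign, so a positive true effect is reported as negative.
assert np.isclose(ate_current, -ate_expected)
print(ate_current, ate_expected)  # -0.4 0.4
```

So for a treatment with a clearly positive effect, the current return statement reports a negative ATE, which matches the systematically reversed results I saw on synthetic data.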

What are your thoughts on this?

AlxndrMlk — Apr 26, 2023