temperature_scaling

ECE Increasing

Open austinmw opened this issue 3 years ago • 8 comments

Hi,

I ran this with a very simple 10 layer CNN model I trained on MNIST using pytorch lightning.

orig_model = pl_module.model
val_loader = trainer.datamodule.val_dataloader()
scaled_model = ModelWithTemperature(orig_model)
scaled_model.set_temperature(val_loader)

But the ECE ends up increasing instead of decreasing:

Before temperature - NLL: 0.645, ECE: 0.271
Optimal temperature: 1.229
After temperature - NLL: 0.779, ECE: 0.351

Any idea why this could be?

austinmw avatar Mar 08 '22 17:03 austinmw

Same for me:

Before temperature - NLL: 0.058, ECE: 0.002
Optimal temperature: 1.316
After temperature - NLL: 0.061, ECE: 0.010

Liel-leman avatar Apr 27 '22 18:04 Liel-leman

Check whether your model output is a logits vector or softmax probabilities. @NoSleepDeveloper @austinmw
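
A quick way to check on one validation batch (a rough sketch, assuming val_loader yields (inputs, labels) pairs; ModelWithTemperature expects raw logits, not probabilities):

import torch

orig_model.eval()
with torch.no_grad():
    inputs, _ = next(iter(val_loader))
    inputs = inputs.to(next(orig_model.parameters()).device)
    out = orig_model(inputs)

# Softmax probabilities are non-negative and each row sums to 1;
# raw logits are usually unbounded and contain negative values.
rows_sum_to_one = torch.allclose(out.sum(dim=1), torch.ones_like(out.sum(dim=1)), atol=1e-4)
print("rows sum to 1:", rows_sum_to_one, "| negative values present:", bool((out < 0).any()))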

dwil2444 avatar Sep 01 '22 17:09 dwil2444

Same applies for me; the model outputs a logit vector, not softmax probabilities.

RobbenRibery avatar Oct 26 '22 16:10 RobbenRibery

I'm wondering if I could use ECE as the optimization objective rather than NLL, if the overhead is not large? (Because of the problem described above.)

zhangyx0417 avatar Apr 24 '23 05:04 zhangyx0417

I don't think ECE is differentiable, bro
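
For reference, here is a minimal sketch of how ECE is usually computed (equal-width confidence bins; not the repo's exact _ECELoss, but the same definition). The argmax and the hard bin masks are piecewise-constant, which is why ECE can't be used directly as a gradient-based objective:

import torch

def expected_calibration_error(logits, labels, n_bins=15):
    probs = torch.softmax(logits, dim=1)
    confidences, predictions = probs.max(dim=1)
    accuracies = predictions.eq(labels).float()

    bin_edges = torch.linspace(0, 1, n_bins + 1)
    ece = torch.zeros(1)
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)  # hard, non-differentiable mask
        prop_in_bin = in_bin.float().mean()
        if prop_in_bin > 0:
            avg_confidence = confidences[in_bin].mean()
            avg_accuracy = accuracies[in_bin].mean()
            ece += (avg_confidence - avg_accuracy).abs() * prop_in_bin
    return ece.item()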

RobbenRibery avatar Apr 24 '23 08:04 RobbenRibery

But that being said, NLL is the metric we should minimise in order to make P(Y = ŷ | f(x)) = f(x) [a perfectly calibrated model; you can think of the output probabilities as following a categorical distribution parametrised by f(x)].

RobbenRibery avatar Apr 24 '23 08:04 RobbenRibery

Try increasing the learning rate or increasing max_iter. Your optimisation needs to converge. In the __init__ method of ModelWithTemperature, create an empty list to store the loss values, i.e.

self.loss = []

then, just before return loss in the eval function, append the loss to the list:

self.loss.append(loss.item())

After your call to set_temperature, plot the values in the self.loss list and check whether the loss was actually minimised. The loss curve should taper off to a roughly constant value once the optimisation has converged.
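
For concreteness, here is a standalone sketch of the same idea on synthetic logits (names are illustrative, not from the repo; in the real class the two marked lines go into __init__ and into the eval closure of set_temperature, and matplotlib is assumed to be available):

import torch
from torch import nn, optim
import matplotlib.pyplot as plt

torch.manual_seed(0)
logits = torch.randn(1000, 10) * 3          # stand-in for validation logits
labels = torch.randint(0, 10, (1000,))      # stand-in for validation labels

temperature = nn.Parameter(torch.ones(1) * 1.5)
nll_criterion = nn.CrossEntropyLoss()
optimizer = optim.LBFGS([temperature], lr=0.01, max_iter=50)

loss_history = []                            # corresponds to self.loss = []

def eval():
    optimizer.zero_grad()
    loss = nll_criterion(logits / temperature, labels)
    loss.backward()
    loss_history.append(loss.item())         # corresponds to self.loss.append(loss.item())
    return loss

optimizer.step(eval)

# The curve should flatten; if it is still dropping at the end, raise lr or max_iter.
plt.plot(loss_history)
plt.xlabel("closure evaluation")
plt.ylabel("validation NLL")
plt.show()
print("fitted temperature:", temperature.item())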

tomgwasira avatar Jun 20 '23 20:06 tomgwasira

Even after the optimization has converged, I still fail to get a decreasing ECE.

I also wonder: is it reasonable to obtain the optimal temperature by optimizing the NLL loss on the validation set? It seems a little strange to me.

MengyuanChen21 avatar Nov 14 '23 05:11 MengyuanChen21