EluCD

ECE computation

Open • Felix-Petersen opened this issue 1 year ago • 1 comment

How do you compute the ECE in the code? I looked through the entire EDM code, but I couldn't find it.

Also, a typo in the paper: the caption of Table 5 mentions precision / recall, but you do not report them, so you could remove that from the caption.

Felix-Petersen, Mar 17 '24

Thank you so much for pointing out the typo! I originally added precision and recall, but the columns were too wide to fit properly in the table. Below are the full results:

| ImageNet 128x128 | Classifier | IS | FID | Precision | Recall |
| --- | --- | --- | --- | --- | --- |
| Diffusion baseline | - | - | 5.91 | 0.70 | 0.65 |
| Diffusion Finetune guided | Fine-tune | 182.69 | 2.97 | 0.78 | 0.59 |
| Classifier-free Diffusion | - | 158.47 | 2.43 | - | - |
| Diffusion ResNet50 guided (ours) | Off-the-Shelf | 183.51 | 2.36 | 0.77 | 0.60 |
| Diffusion ResNet101 guided (ours) | Off-the-Shelf | 187.83 | 2.19 | 0.79 | 0.58 |

I am sorry that I currently do not have access to the code, since I left the company where I did the research internship. But I can share the function for computing ECE: basically, we apply this ECE function during the diffusion backward process to estimate the ECE of the classifier's predicted probabilities.
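For reference, the function below is the standard binned ECE estimator,

$$
\mathrm{ECE} = \sum_{m=1}^{M} \frac{|B_m|}{n} \left| \mathrm{acc}(B_m) - \mathrm{conf}(B_m) \right|,
$$

where the $n$ predictions are split into $M$ equal-width confidence bins $B_m$ according to their maximum softmax probability, and $\mathrm{acc}(B_m)$ and $\mathrm{conf}(B_m)$ are the average accuracy and average confidence inside bin $m$.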

```python
import numpy as np

def get_ece_score(py, y_test, n_bins=10):
    """Expected Calibration Error over equal-width confidence bins."""
    py = np.asarray(py)
    y_test = np.asarray(y_test)
    # accept one-hot labels and convert them to class indices
    if y_test.ndim > 1:
        y_test = np.argmax(y_test, axis=1)
    # predicted class and its confidence (max softmax probability) per sample
    py_index = np.argmax(py, axis=1)
    py_value = py[np.arange(py.shape[0]), py_index]
    acc, conf = np.zeros(n_bins), np.zeros(n_bins)
    Bm = np.zeros(n_bins)
    for m in range(n_bins):
        # bin m covers confidences in (m/n_bins, (m+1)/n_bins]
        a, b = m / n_bins, (m + 1) / n_bins
        for i in range(py.shape[0]):
            if a < py_value[i] <= b:
                Bm[m] += 1
                if py_index[i] == y_test[i]:
                    acc[m] += 1
                conf[m] += py_value[i]
        if Bm[m] != 0:
            acc[m] /= Bm[m]   # per-bin accuracy
            conf[m] /= Bm[m]  # per-bin mean confidence
    # weighted average of |accuracy - confidence| over the bins
    ece = 0.0
    for m in range(n_bins):
        ece += Bm[m] * np.abs(acc[m] - conf[m])
    return ece / Bm.sum()
```

```python
# total_pre_probs_array: predicted softmax probability matrix (n_samples x n_classes)
# total_onehot_labels_array: ground-truth one-hot label matrix (n_samples x n_classes)
ece_score = get_ece_score(py=total_pre_probs_array, y_test=total_onehot_labels_array, n_bins=10)
```
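
If it helps, here is a small self-contained sanity check. The data below is synthetic and only illustrates the expected input shapes; it is not from the actual experiments:

```python
import numpy as np

# Synthetic sanity check: random logits and random labels, only to show the
# expected input shapes for get_ece_score (the values are made up).
rng = np.random.default_rng(0)
n_samples, n_classes = 512, 10

logits = rng.normal(size=(n_samples, n_classes))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)  # row-wise softmax

labels = rng.integers(0, n_classes, size=n_samples)
onehot_labels = np.eye(n_classes)[labels]  # one-hot ground-truth matrix

print(get_ece_score(py=probs, y_test=onehot_labels, n_bins=10))
```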

AlexMaOLS, Mar 24 '24