
Questions on loss calculation

Open · data-science-lover opened this issue 2 years ago · 1 comment

I have several questions about this. First, I noticed that the loss calculation is handled differently for a two-class model. Since the model I am using has two classes, errors appeared (including memory-limit errors). So I removed that condition and compute the loss only with cross-entropy, F.cross_entropy(output, target), since a binary classification problem can be treated as a multi-class problem:

[screenshot: modified loss calculation]
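The claim that a binary task can go through the generic multi-class path can be checked directly: with two logits, softmax of the pair reduces to a sigmoid of their difference, so `F.cross_entropy` on shape `(N, 2)` logits agrees with binary cross-entropy on the logit gap. A minimal sketch (the tensor shapes and values here are illustrative, not from the repo):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical batch: 4 samples, 2 classes.
logits = torch.randn(4, 2)
target = torch.tensor([0, 1, 1, 0])

# Two-class cross-entropy (binary task treated as multi-class).
ce = F.cross_entropy(logits, target)

# Equivalent binary cross-entropy on the logit difference,
# since softmax([z0, z1])[1] == sigmoid(z1 - z0).
bce = F.binary_cross_entropy_with_logits(
    logits[:, 1] - logits[:, 0], target.float()
)

print(float(ce), float(bce))  # the two values match up to float precision
```

So dropping the special-cased binary branch in favor of `F.cross_entropy` is mathematically sound; whether it also fixes the memory errors depends on what that branch was doing.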

I made the same change everywhere the loss is calculated.

I also have doubts about the loss values computed in the train() function of train_distilled_images: the loss increases at each step instead of decreasing.

[screenshots: training loss output]

data-science-lover avatar Jun 10 '22 09:06 data-science-lover

I also don't understand why the loss computed by the test_runner of the main script gives VERY different values from those obtained in train_distilled_images. [screenshot: test_runner loss output]

So, to compare the loss obtained with the distilled images against the loss obtained with the real dataset, should we rely on the values from the main.py script or from train_distilled_images?
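One way to make the two numbers comparable, whichever script reports them, is to evaluate both models with the same criterion on the same held-out set. A minimal sketch (the model and loader names are hypothetical, not from the repo):

```python
import torch
import torch.nn.functional as F

def mean_loss(model, loader):
    """Average per-sample cross-entropy of `model` over `loader` (no gradients)."""
    model.eval()
    total, n = 0.0, 0
    with torch.no_grad():
        for x, y in loader:
            total += F.cross_entropy(model(x), y, reduction="sum").item()
            n += y.numel()
    return total / n

# Hypothetical usage: `model_distilled` was trained on the distilled images,
# `model_real` on the real dataset; `test_loader` is the common test set.
#   print(mean_loss(model_distilled, test_loader))
#   print(mean_loss(model_real, test_loader))
```

With a shared criterion, reduction, and data, any remaining gap reflects the models themselves rather than differences in how the two scripts compute or average the loss.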

Thank you in advance.

data-science-lover avatar Jun 10 '22 09:06 data-science-lover