eirli
CIFAR-10 evaluation
Adds run_cifar.py, which runs representation learning with a ResNet-18 on CIFAR-10, fine-tunes a linear layer on top of the learned representation, and evaluates the accuracy of the resulting classifier. Hyperparameters are set to mimic SimCLR: https://github.com/google-research/simclr/
The current implementation still uses an incorrect loss function (to be fixed in #10 ) and augments examples at the batch level rather than the dataset level (to be fixed in a different PR).
The code looks good to me!
One note: when I plot the LR curve, the linear-warmup part's minimum LR is eta_min, while the cosine part's minimum LR is 0 (I've seen schedules that end the cosine part at eta_min instead). I'm not sure whether this is your intended behavior; if it is, feel free to ignore this comment :)

The line is produced by:

import torch
import torchvision.models as models

resnet18 = models.resnet18()
optimizer = torch.optim.Adam(resnet18.parameters(), lr=0.03)
# LinearWarmupCosine is the scheduler under review: 5 warmup steps, 30 total
scheduler = LinearWarmupCosine(optimizer, 5, 30)

lr = []
for _ in range(30):
    optimizer.step()
    lr.append(optimizer.param_groups[0]['lr'])
    scheduler.step()
plot([lr], ['lr'], 'lr', 'test_scheduler')  # local plotting helper
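
For reference, the variant I had in mind (where both phases bottom out at eta_min) could be sketched roughly like this; the names base_lr, eta_min, warmup_steps, and total_steps are illustrative, not taken from the PR:

```python
import math

# Hypothetical warmup + cosine schedule in which BOTH the linear-warmup
# phase and the cosine phase bottom out at eta_min rather than 0.
base_lr = 0.03
eta_min = 1e-4
warmup_steps = 5
total_steps = 30

def lr_at(step):
    if step < warmup_steps:
        # Linear warmup: eta_min -> base_lr over warmup_steps steps.
        return eta_min + (step / warmup_steps) * (base_lr - eta_min)
    # Cosine decay: base_lr -> eta_min over the remaining steps.
    progress = (step - warmup_steps) / (total_steps - 1 - warmup_steps)
    return eta_min + 0.5 * (base_lr - eta_min) * (1 + math.cos(math.pi * progress))

lrs = [lr_at(step) for step in range(total_steps)]
```

With this shape the curve never dips below eta_min: it starts at eta_min, peaks at base_lr after warmup, and decays back to eta_min at the final step.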