
Imagewoof - ResNet-18/34 Training

AhmedHussKhalifa opened this issue · 1 comment

Hey, I am trying to produce a baseline model for Imagewoof, since it is hard to find a pretrained one. I trained both ResNet-18 and ResNet-34 with the Adam optimizer and reached 81.55%/82.62% top-1 accuracy.
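For anyone who wants to reproduce a similar baseline, here is a minimal sketch of the kind of setup I mean. It is not my exact script: the crop size, epoch count, and learning rate are placeholders, and it assumes the imagewoof2-160 tarball from fast.ai has already been extracted to `./imagewoof2-160` with `train/` and `val/` subfolders.

```python
# Rough baseline sketch: ResNet-18 trained from scratch on Imagewoof with Adam.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
norm = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])

train_tf = transforms.Compose([
    transforms.RandomResizedCrop(128),          # placeholder image size
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    norm,
])
val_tf = transforms.Compose([
    transforms.Resize(146),
    transforms.CenterCrop(128),
    transforms.ToTensor(),
    norm,
])

train_ds = datasets.ImageFolder("imagewoof2-160/train", train_tf)
val_ds = datasets.ImageFolder("imagewoof2-160/val", val_tf)
train_dl = DataLoader(train_ds, batch_size=64, shuffle=True, num_workers=4)
val_dl = DataLoader(val_ds, batch_size=64, num_workers=4)

model = models.resnet18(num_classes=10).to(device)   # 10 Imagewoof classes, no pretraining
opt = torch.optim.Adam(model.parameters(), lr=1e-3)   # placeholder learning rate
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):                               # placeholder epoch count
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    # Top-1 accuracy on the validation split.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_dl:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
    print(f"epoch {epoch}: top-1 {correct / total:.4f}")
```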

I found some papers reporting that Imagenette and Imagewoof have about 13k training samples each, which I expect corresponds to the first version of the datasets.

  1. Could anyone post a link to the first version of these datasets? I want to reproduce their results.

  2. Could we use both datasets to tune the hyperparameters of a training algorithm and expect the result to generalize to ImageNet?

  3. Do the Imagenette and Imagewoof training sets contain samples from the ImageNet training set? I checked the file contents and found some (a rough sketch of the check is below), but I need someone to confirm this.
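This is roughly the kind of filename check I mean. It assumes Imagewoof keeps the original ImageNet file names (e.g. `n02086240_1234.JPEG`) and that `imagenet_train_files.txt` is a hypothetical local file listing one ImageNet-1k train file name per line.

```python
# Sketch: count Imagewoof images whose file names also appear in the ImageNet-1k train set.
from pathlib import Path

# Hypothetical list of ImageNet-1k train file names, one per line.
imagenet_train = {line.strip() for line in open("imagenet_train_files.txt")}

woof_root = Path("imagewoof2-160")
for split in ("train", "val"):
    names = {p.name for p in (woof_root / split).rglob("*.JPEG")}
    overlap = names & imagenet_train
    print(f"{split}: {len(overlap)}/{len(names)} files also in ImageNet train")
```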

[Image] Ref: Data Efficient Stagewise Knowledge Distillation

AhmedHussKhalifa · Aug 18 '21

I know this is outdated, but as for Q2: I'm actually investigating training algorithms on ImageNet-1k and Imagenette and how their accuracies compare. So far I haven't found a good correlation between results on the two datasets; e.g., VGG-11 performs essentially on par with ResNeXt50-32x4d when benchmarked on Imagenette with the same fixed training parameters, ConvNeXt performs pretty poorly on Imagenette, etc. I imagine the bias-variance trade-off plays a huge role.

Obviously I'm looking into this more, but it seems like a correlation between the two just doesn't exist. I'm also looking at ImageWoof going forward; maybe the fine-grained classes provide more indicative performance.
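For context, the comparison I'm describing is roughly the following: the same from-scratch recipe for every architecture, with only the model swapped out. This is an illustrative sketch rather than my actual benchmark code; the model list, epoch count, and learning rate are placeholders, and it assumes an extracted `imagenette2-160` folder with `train/` and `val/` subfolders.

```python
# Sketch: benchmark several torchvision models on Imagenette with one fixed recipe.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
norm = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
train_tf = transforms.Compose([transforms.RandomResizedCrop(128),
                               transforms.RandomHorizontalFlip(),
                               transforms.ToTensor(), norm])
val_tf = transforms.Compose([transforms.Resize(146), transforms.CenterCrop(128),
                             transforms.ToTensor(), norm])
train_dl = DataLoader(datasets.ImageFolder("imagenette2-160/train", train_tf),
                      batch_size=64, shuffle=True, num_workers=4)
val_dl = DataLoader(datasets.ImageFolder("imagenette2-160/val", val_tf),
                    batch_size=64, num_workers=4)

def train_and_eval(model, epochs=20, lr=1e-3):
    """Train from scratch with one fixed recipe and return top-1 val accuracy."""
    model = model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        for x, y in train_dl:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_dl:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
    return correct / total

# Identical settings for every architecture; only the model changes.
for name, build in [("vgg11", models.vgg11),
                    ("resnext50_32x4d", models.resnext50_32x4d),
                    ("convnext_tiny", models.convnext_tiny)]:
    acc = train_and_eval(build(num_classes=10))
    print(f"{name}: top-1 {acc:.4f}")
```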

MaxVanDijck · May 26 '22