Imagewoof - Resnet18/34 Training
Hey, I'm trying to produce a baseline model for Imagewoof, since it is hard to find a pretrained one. I trained both ResNet-18 and ResNet-34 with the Adam optimizer and reached 81.55%/82.62% top-1 accuracy.
I found some papers reporting that Imagenette and Imagewoof have ~13k training samples each, which I expect corresponds to the first version of the datasets.
- Could anyone post a link to the first version of them? I want to reproduce their results.
- Could we use both datasets to tune the hyperparameters of a training algorithm so that they generalize to ImageNet?
- Do the Imagenette and Imagewoof training sets contain samples from the ImageNet training set? I checked the file contents and found some, but I need someone to confirm this.
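Regarding the overlap check in the last question, one way to verify it is to compare the ImageNet-style base filenames between the two split directories. A small sketch (the directory layout is the usual `split/wnid/*.JPEG` ImageFolder structure; the paths are hypothetical):

```python
from pathlib import Path

def split_filenames(root):
    """Collect the base filenames under an ImageFolder-style split directory."""
    return {p.name for p in Path(root).rglob("*.JPEG")}

def overlap(split_a, split_b):
    """Filenames present in both splits, e.g. imagewoof2/train vs an ImageNet split."""
    return split_filenames(split_a) & split_filenames(split_b)
```

ImageNet filenames are unique across the dataset, so a non-empty intersection (e.g. `overlap("imagewoof2/train", "imagenet/train")`) would confirm shared samples.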
Ref: Data Efficient Stagewise Knowledge Distillation
I know this is outdated, but regarding Q2: I'm actually investigating training algorithms on ImageNet-1k and Imagenette and how their accuracies compare. So far I haven't found a good correlation between the results on the two datasets; e.g., VGG-11 performs essentially on par with ResNeXt-50 (32x4d) when benchmarked on Imagenette with the same fixed training parameters, ConvNeXt performs pretty poorly on Imagenette, etc. I imagine the bias-variance trade-off plays a huge role.
Obviously I'm looking more into this, but it seems like a correlation between the two just doesn't exist. I'm also looking at Imagewoof moving forward; maybe its fine-grained classes give a more indicative signal.