
CIFAR-10 FLOPs higher than CIFAR-100 on DenseNet-40 (40% pruned)

Open · Sirius083 opened this issue 6 years ago · 1 comment

Thanks for your great work. I have a small question about calculating FLOPs. In the paper's Table 1, the CIFAR-10 DenseNet-40 (40% pruned) model has 3.81×10^8 FLOPs, while the CIFAR-100 DenseNet-40 (40% pruned) model has 3.71×10^8 FLOPs. Since CIFAR-100 has 100 classes and CIFAR-10 has only 10, why are the FLOPs for CIFAR-10 higher than for CIFAR-100 with the same architecture? Thanks in advance.

Sirius083 · May 28 '19

Because these are two different models, and the algorithm prunes different parts of the network. Even if you prune a fixed proportion of channels (40% in this case), the resulting FLOPs depend on where you prune. For example, if you prune early layers more heavily, you reduce more FLOPs, since those layers operate on larger activation maps.
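
To illustrate, here is a minimal sketch (not the paper's actual FLOPs counter) using the standard multiply-accumulate estimate for a convolution, FLOPs ≈ 2 × C_in × C_out × K² × H_out × W_out. The layer sizes below are hypothetical, chosen only to contrast an early layer with a 32×32 activation map against a late layer with an 8×8 map:

```python
def conv_flops(c_in, c_out, k, h_out, w_out):
    """MAC-based FLOPs estimate for one conv layer (hypothetical sizes)."""
    return 2 * c_in * c_out * k * k * h_out * w_out

# Same channel counts, different spatial resolutions:
early = conv_flops(c_in=24, c_out=12, k=3, h_out=32, w_out=32)  # large map
late  = conv_flops(c_in=24, c_out=12, k=3, h_out=8,  w_out=8)   # small map

# Prune ~40% of output channels (12 -> 7) in each layer:
early_pruned = conv_flops(c_in=24, c_out=7, k=3, h_out=32, w_out=32)
late_pruned  = conv_flops(c_in=24, c_out=7, k=3, h_out=8,  w_out=8)

print(f"early-layer savings: {early - early_pruned:,} FLOPs")  # 2,211,840
print(f"late-layer savings:  {late - late_pruned:,} FLOPs")    # 138,240
```

The same 40% pruning removes 16× more FLOPs from the early layer, simply because its 32×32 activation map is 16× larger than the 8×8 one. So two pruned models with identical overall channel budgets can easily differ in total FLOPs, which dwarfs the tiny FLOPs difference between a 10-class and a 100-class final classifier.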

liuzhuang13 · Oct 25 '19