DenseNet on ImageNet
I've just read your paper, which is really interesting. I was wondering whether you tried training a DenseNet on ImageNet? Thank you
Thanks for your interest in DenseNet.
We are experimenting on ImageNet with different model sizes. Right now we have some preliminary results (relatively small models), which are shown in the figures below.
As shown in the figures, a DenseNet with the same number of parameters or the same computation cost (measured in #FLOPs) as a ResNet has lower validation error. The DenseNets in the figures have growth rate = 32. The ResNet errors are copied from the results reported by fb.resnet.torch, and all the hyperparameters are kept the same as theirs. When all the models are finished, we'll update the paper and README with the ImageNet results.
The DenseNet architecture used on ImageNet is different from what we used on the CIFAR and SVHN datasets. The differences are listed below:
- The major difference is that we use a "bottleneck structure", inspired by the ResNet paper: in each layer, before producing new feature maps through a 3×3 convolution on the previous layers' feature maps, a 1×1 convolution with 4*growthRate output channels is performed (see the sketch after this list).
- In transition layers, we halve the number of feature maps.
- Following the design strategy of ResNet on ImageNet, we use 4 dense blocks with different depths.
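A minimal Lua/Torch sketch of the bottleneck and transition layers described above, assuming the standard `nn` package; the function names (`bottleneckLayer`, `transitionLayer`) are illustrative and not taken from the repo:

```lua
require 'nn'

-- Bottleneck layer (DenseNet-BC): BN-ReLU-Conv(1x1)-BN-ReLU-Conv(3x3),
-- then concatenation with the layer's input.
local function bottleneckLayer(nChannels, growthRate)
   local inner = nn.Sequential()
   inner:add(nn.SpatialBatchNormalization(nChannels))
   inner:add(nn.ReLU(true))
   -- 1x1 convolution reduces the input to 4*growthRate channels
   inner:add(nn.SpatialConvolution(nChannels, 4 * growthRate, 1, 1, 1, 1, 0, 0))
   inner:add(nn.SpatialBatchNormalization(4 * growthRate))
   inner:add(nn.ReLU(true))
   -- 3x3 convolution produces growthRate new feature maps
   inner:add(nn.SpatialConvolution(4 * growthRate, growthRate, 3, 3, 1, 1, 1, 1))

   -- concatenate the new feature maps with the input (dense connectivity)
   local concat = nn.Concat(2)  -- dim 2 = channel dimension for batched NCHW tensors
   concat:add(nn.Identity())
   concat:add(inner)
   return concat
end

-- Transition layer: a 1x1 convolution that halves the channel count,
-- followed by 2x2 average pooling.
local function transitionLayer(nChannels)
   local trans = nn.Sequential()
   trans:add(nn.SpatialBatchNormalization(nChannels))
   trans:add(nn.ReLU(true))
   trans:add(nn.SpatialConvolution(nChannels, math.floor(nChannels / 2), 1, 1, 1, 1, 0, 0))
   trans:add(nn.SpatialAveragePooling(2, 2, 2, 2))
   return trans
end
```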
Thanks for your answer. That sounds very promising!
Great results. When will you release the prototxt file for ImageNet?
@baiyancheng20 Sorry, this is trained using Torch. If you want to use them, we can give you the Torch model definitions first (or pre-trained models later).
Model definition here: densenet-imagenet.txt
After we get the full results, we'll include the ImageNet models in both the Torch and Caffe repos.
@liuzhuang13 Thank you for sharing the code. DenseNet is a very interesting work. I will try to use the CIFAR code to train on the ImageNet dataset. Thanks a lot.
@baiyancheng20 Thanks for your interest. To get better performance, you may want to adapt the Caffe code for CIFAR a little according to the differences I listed above. For more detail, you can refer to the Torch code.
The network model in 'densenet-imagenet.txt' seems to be different from the paper. In the paper, DenseNet-169 has four dense blocks of sizes {6, 12, 32, 32}, but the file has {6, 12, 48, 16}. Does that make a big difference? I am trying to train the network for ImageNet, but the convergence curve after the first 32 epochs does not look great (I am using the fb.resnet.torch setup and just specified this network type via -netType).
Thanks, Ganesh
Sorry, the file was wrong; it was probably an older version. I'll correct it. Sorry, but I can't remember whether this would make a big difference.
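For reference, these are the dense-block configurations of the ImageNet models as given in the paper; the Lua table below is just an illustrative lookup, not code from the repo:

```lua
-- Dense-block sizes for the ImageNet DenseNet-BC models, from the paper.
local blockConfig = {
   ['DenseNet-121'] = { 6, 12, 24, 16 },
   ['DenseNet-169'] = { 6, 12, 32, 32 },
   ['DenseNet-201'] = { 6, 12, 48, 32 },
   ['DenseNet-264'] = { 6, 12, 64, 48 },
}
```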
Also, there are pre-trained models available on the README page, in case your purpose is just to use a pre-trained model.
Thanks for the reply. No problem at all -- just wanted to confirm.
Thank you for uploading the pre-trained models. They have been very helpful but I did want to train a model for a different study I was doing.
Hi! I'm trying to train DenseNet-121-BC on ImageNet (my own implementation) and I am just wondering whether the training curves I'm getting are anywhere close to what they looked like for you. It would be great if you could share some of them for comparison or give me your opinion on my results.
In this setup one epoch lasts roughly 25.6k iterations, so above you can see around 10 epochs (I'm using just one GPU for training). These are the params that I'm using:
Thanks!
Yes, it would be great if the authors could post their convergence curves. I tried to train with the fb.resnet.torch repo, where I just set -netType to DenseNet, but my initial training curve looked weird. It would be helpful to have a curve to compare to, so that I know whether this is expected or I am doing something wrong. Thanks in advance for the help!
Could you post the prototxt files used for training DenseNets in Caffe?
It would be great to check them and make some changes to experiment.
Hi, @nihalgoalla please check https://github.com/liuzhuang13/DenseNetCaffe (for CIFAR, without BC structure) and https://github.com/shicai/DenseNet-Caffe (for ImageNet).
Hey,
Thanks for the information.
But what I was looking for was the last layers you added (the loss and accuracy layers), including the solver prototxts.
If possible, could you share those with me?
Thanks, Nihal Goalla, IIT Delhi.
@nihalgoalla At https://github.com/liuzhuang13/DenseNetCaffe, we have a solver prototxt file (for CIFAR training) and an example prototxt file that contains the last layers. Thanks
@liuzhuang13 Did you scale the ImageNet images to [0,1] before feeding them to DenseNet?
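For context: fb.resnet.torch, whose hyperparameters the experiments above follow, loads images into [0,1] and then applies per-channel mean/std normalization. A minimal sketch, assuming the standard Torch `image` package; the resize step is simplified here (the real pipeline scales the shorter side and uses random crops for training):

```lua
require 'image'

-- Sketch of fb.resnet.torch-style preprocessing: image.load already returns
-- pixels in [0,1]; per-channel mean/std normalization follows.
local meanstd = {
   mean = { 0.485, 0.456, 0.406 },
   std  = { 0.229, 0.224, 0.225 },
}

local img = image.load('example.jpg', 3, 'float')  -- 3xHxW, values in [0,1]
img = image.scale(img, 256, 256)      -- simplified; see note above
img = image.crop(img, 'c', 224, 224)  -- center crop for evaluation
for c = 1, 3 do
   img[c]:add(-meanstd.mean[c]):div(meanstd.std[c])
end
```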
Hi, I tried to extract image features using DenseNet-121 pre-trained on ImageNet. What would be the shape of the output features?
Hi @liuzhuang13, thanks for your great work on DenseNet. Compared to ResNet, I wonder why you chose concatenation rather than the addition used in the original ResNet. Hope to hear from you soon.
Hi @JieMEI1994, I think this is discussed in Section 5 of the paper.
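To make the contrast concrete, a minimal Torch sketch of the two connection styles (illustrative names, not the repo's code): a residual connection adds the branch output to its input, so the channel count stays fixed, while a dense connection concatenates them, keeping every earlier layer's feature maps directly accessible to all later layers:

```lua
require 'nn'

-- ResNet-style: the branch output is ADDED to the input, so earlier
-- features are mixed into later ones and the channel count is unchanged.
local function residualConnect(branch)
   return nn.Sequential()
      :add(nn.ConcatTable():add(nn.Identity()):add(branch))
      :add(nn.CAddTable())
end

-- DenseNet-style: the branch output is CONCATENATED with the input, so
-- earlier feature maps are preserved unchanged for all later layers
-- (see Section 5 of the paper for the discussion).
local function denseConnect(branch)
   local concat = nn.Concat(2)  -- concatenate along the channel dimension
   concat:add(nn.Identity())
   concat:add(branch)
   return concat
end
```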
Thank you.