Zhuang Liu

71 comments of Zhuang Liu

@taki0112 1. Most people train a network with fewer than 5 layers and achieve very high accuracy on MNIST because it is such a simple dataset. If you train a...
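As a rough illustration of the kind of small model this comment refers to, here is a minimal PyTorch sketch of a network with fewer than 5 layers of the sort that typically reaches very high accuracy on MNIST. The architecture and names are hypothetical, not from the DenseNet repository:

```python
import torch.nn as nn

# A hypothetical MNIST classifier with fewer than 5 layers:
# flatten -> linear -> ReLU -> linear. Small networks like this
# commonly score very highly on MNIST because the dataset is simple.
small_mnist_net = nn.Sequential(
    nn.Flatten(),            # 28x28 image -> 784-dim vector
    nn.Linear(28 * 28, 256), # single hidden layer
    nn.ReLU(),
    nn.Linear(256, 10),      # 10 digit classes
)
```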

@John1231983 Because ImageNet is large and because we use heavy data augmentation, we don't use dropout. This also follows our base code framework, fb.resnet.torch. For CIFAR10, when...

Thanks for the question. Actually, the feature maps in the first dense block are pooled at the end of the block, after a convolution, and act as the input to the...
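To make the block-to-block handoff concrete, here is a minimal PyTorch sketch of a DenseNet-style transition layer, in which the feature maps are convolved and then pooled before serving as input to the next dense block. The class name `TransitionLayer` is an illustrative assumption, not the repository's code:

```python
import torch.nn as nn

class TransitionLayer(nn.Module):
    """Illustrative transition between dense blocks: batch norm and ReLU,
    then a 1x1 convolution followed by 2x2 average pooling. The pooled
    feature maps become the input to the next dense block."""
    def __init__(self, in_channels, out_channels):
        super().__init__()
        self.bn = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        self.conv = nn.Conv2d(in_channels, out_channels,
                              kernel_size=1, bias=False)
        self.pool = nn.AvgPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        # Convolution first, then pooling: the pooled output feeds
        # the next dense block.
        return self.pool(self.conv(self.relu(self.bn(x))))
```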

Hi, the 0.8M model is only for the CIFAR dataset... All ImageNet models are available on the README page.

https://github.com/shicai/DenseNet-Caffe

Hi @yuffon I think we just normalize with the training means and stds in all testing; this is standard in machine learning.

I don't think one should preprocess the images by their own means and stds; that causes different inputs to be transformed by different amounts. CIFAR is not a good dataset...

I think the common practice is to normalize the data the same way it was normalized during training (i.e., using the training statistics).
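A minimal sketch of that practice, assuming NumPy arrays for the data (the array names `x_train` and `x_test` are hypothetical):

```python
import numpy as np

# Hypothetical training and test sets (NHWC layout, e.g. CIFAR-like).
x_train = np.random.rand(1000, 32, 32, 3).astype(np.float32)
x_test = np.random.rand(200, 32, 32, 3).astype(np.float32)

# Per-channel mean and std computed on the *training* set only.
mean = x_train.mean(axis=(0, 1, 2))
std = x_train.std(axis=(0, 1, 2))

# Apply the same training statistics at test time, so every input
# is transformed by the same amounts.
x_train_norm = (x_train - mean) / std
x_test_norm = (x_test - mean) / std
```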

Thanks, and we meant "each GPU uses 32; the sum is 128".

Actually the "batch_size" in the code means the total batch size. So if you want in total batch size 128 just set batch_size = 128.