Sirius083
I have implemented your code on CIFAR-100. When halving the learning rate, the test loss suddenly decreases, then increases, while the validation error stays unchanged. Did tensorpack maintain a...
I want to implement memory-efficient DenseNet, following the code in [https://github.com/joeyearsley/efficient_densenet_tensorflow/blob/master/models/densenet_creator.py](url), but the training process is stuck at the first epoch. I have only changed the add_layer part ```python def add_layer(l):...
In Cifar10-densenet.py, Line 116: ds = dataset.Cifar10(train_or_test) Line 117: pp_mean = ds.get_per_pixel_mean() Can the validation set use statistics computed over all the test data, such as per_pixel_mean? Thanks in advance
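For reference, a minimal numpy sketch of what a per-pixel mean is and the usual convention (compute it on the training split only, then reuse it for validation/test, so no test statistics leak into preprocessing). The arrays here are random stand-ins for the CIFAR-10 images, not the tensorpack API:

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.random((100, 32, 32, 3))  # stand-in for the CIFAR-10 training images
val = rng.random((20, 32, 32, 3))     # stand-in for validation/test images

# Per-pixel mean: one mean value per (row, col, channel) position,
# computed on the TRAINING split only.
pp_mean = train.mean(axis=0)          # shape (32, 32, 3)

train_centered = train - pp_mean
val_centered = val - pp_mean          # reuse the training-set statistic unchanged

print(pp_mean.shape)
```

The point of the convention is that the validation/test data should be transformed with statistics the model could have seen at training time, never with statistics of the evaluation split itself.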
Thanks for your great work. I have a small question related to calculating FLOPs. In the paper, Table 1: CIFAR-10 DenseNet-40 (40% pruned), model FLOPs is 3.81*10^8; CIFAR-100 DenseNet-40 (40% pruned),...
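FLOP counts like these are usually derived per convolutional layer and summed over the network. A minimal sketch of the standard per-layer formula; note that papers differ on whether a multiply-add counts as one FLOP or two, which alone can explain a 2x discrepancy (the counting convention here is an assumption, not taken from the paper):

```python
def conv2d_flops(h_out, w_out, c_in, c_out, k, mult_add_as_one=True):
    """FLOPs of a k x k convolution producing an (h_out, w_out, c_out) map.

    Each output element needs k*k*c_in multiply-adds; whether a
    multiply-add counts as 1 FLOP or 2 varies between papers.
    """
    mult_adds = h_out * w_out * c_out * k * k * c_in
    return mult_adds if mult_add_as_one else 2 * mult_adds

# e.g. one 3x3 conv on a 32x32 map, 12 -> 12 channels (DenseNet-40, k=12 scale):
print(conv2d_flops(32, 32, 12, 12, 3))
```

When comparing your own count against a table, also check whether the reference count includes the 1x1 bottleneck/transition convs and the final classifier.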
Hello, I want to reproduce the results on DenseNet CIFAR-10/CIFAR-100, but got lower accuracy with my TensorFlow implementation. I have one question about the model architecture. In the paper's Implementation Details section: "Before...
Hi, thanks for your great work. In Section 3 of the paper: "However, the identity function and the output of H_l are combined by summation, which may impede the information flow"...
Hello, I have one question when training DenseNet: the validation loss shows a sharp decrease, then an increase, after the learning rate changed from 0.1 to 0.01. I trained the DenseNet (depth_40_k_12)...
Hi, I notice there is a "layer_per_stage" parameter in the cifar100 PyTorch implementation: https://github.com/Lyken17/SparseNet/blob/master/src/pytorch/denselink.py#L154 model1-BC (depth=100, k1=16, k2=32, k3=64) model2-BC (depth=100, k1=32, k2=64, k3=128) What are the exact parameters in these two models?...
In cifar10.py, Line 105: transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)). From what I computed and found online, the standard deviation on CIFAR-10 is [0.24703223, 0.24348513, 0.26158784]. How did you calculate the previous standard...
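A minimal sketch of the straightforward way to compute per-channel statistics: take the mean/std over every pixel of every training image at once. Computed this way on the real CIFAR-10 training set, the std comes out near (0.247, 0.243, 0.262); a different result usually means the statistics were computed in a different order (e.g. per image first, then averaged). Random data is used below only so the snippet is self-contained:

```python
import numpy as np

# `images` stands in for the CIFAR-10 training set, shape (N, H, W, C),
# scaled to [0, 1]; with the real data you would stack all 50,000 images.
rng = np.random.default_rng(0)
images = rng.random((128, 32, 32, 3))

# Per-channel mean/std over ALL pixels of ALL images (reduce axes 0, 1, 2).
channel_mean = images.mean(axis=(0, 1, 2))
channel_std = images.std(axis=(0, 1, 2))

print("mean:", channel_mean)
print("std: ", channel_std)
```

Averaging per-image standard deviations instead of pooling all pixels gives a systematically different (usually smaller) number, which is one common source of mismatched normalization constants.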
I have one question on cifar/mixnet.py, Lines 73-79: the convolution operation order is BN --> conv(1,1) --> relu. Usually in pre-activation convolutions (ResNet v2), the order is bn-relu-conv...
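The two orderings are genuinely different functions, not a cosmetic choice: putting ReLU before the conv discards negative activations, so the conv sees a different input. A self-contained numpy sketch (a 1x1 conv is just a matmul over the channel axis; the batch norm here is a simplified batch-statistics version, not the repo's implementation):

```python
import numpy as np

def bn(x, gamma=1.0, beta=0.0, eps=1e-5):
    # Simplified batch norm: normalize with batch statistics, then scale/shift.
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def conv1x1(x, w):
    # A 1x1 convolution is a matmul over channels: (N, H, W, Cin) @ (Cin, Cout).
    return x @ w

relu = lambda x: np.maximum(x, 0.0)

rng = np.random.default_rng(0)
x = rng.standard_normal((2, 8, 8, 4))
w = rng.standard_normal((4, 6))

y_a = relu(conv1x1(bn(x), w))   # BN -> conv(1,1) -> relu (as asked about)
y_b = conv1x1(relu(bn(x)), w)   # BN -> relu -> conv (pre-activation, ResNet v2)

print(np.allclose(y_a, y_b))    # -> False: the orderings are not equivalent
```

Note also that the outputs differ in sign structure: with ReLU last the output is nonnegative everywhere, while the pre-activation ordering can produce negative outputs.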