
3.8% and 18.3% test error on CIFAR-10 and CIFAR-100

24 wide-residual-networks issues

This is happening at `require 'image'` in `train.lua`. I'm not very familiar with Lua, but does it have to do with the scoping of the imports? Can you also specify the...
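
For context on the scoping question: Lua's `require` runs a module file once, caches the result in `package.loaded`, and returns it, so binding the return value to a local is the usual way to avoid relying on globals. A minimal sketch (assuming an older torch/image build where `image.lena()` is still bundled):

```lua
-- Lua caches modules: the first require runs the module file,
-- later requires just return the cached table from package.loaded.
local image = require 'image'

-- image.lena() returns a sample image bundled with the package,
-- handy for checking that the module actually loaded.
local img = image.lena()
print(img:size())
```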

**I used a copy of this model code in fb.resnet.torch:** https://github.com/szagoruyko/wide-residual-networks/blob/master/pretrained/wide-resnet.lua **Train script:** `th main.lua -netType wideresnet -depth 18 -width 2 -batchSize 64 -nGPU 1 -nThreads 8 -data /home/ml/chakkrit/Fold1/ -nClasses...`

My dataset contains a train and a val directory, which each contain sub-directories for every label. For example:
```
train/<label1>/
train/<label2>/
...
val/<label1>/
val/<label2>/
...
```
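
For anyone checking the same layout: a minimal sketch of enumerating the per-label subdirectories with the `paths` package (the dataset root `/home/ml/data` is hypothetical):

```lua
-- Sketch: list the label subdirectories a folder-style loader expects.
local paths = require 'paths'

local function listClasses(dir)
  local classes = {}
  for entry in paths.iterdirs(dir) do  -- iterate immediate subdirectories
    table.insert(classes, entry)
  end
  table.sort(classes)                  -- stable, alphabetical class order
  return classes
end

print(listClasses('/home/ml/data/train'))
print(listClasses('/home/ml/data/val'))
```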

Dear Friends, I believe we are using the wrong mean/std preprocessing for CIFAR-100, since the parameters in the code are for the CIFAR-10 dataset. David
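
One way to sidestep hard-coded CIFAR-10 constants is to compute the per-channel statistics from the training set itself; a minimal sketch, assuming `trainData` (a hypothetical variable name) is an `N x 3 x 32 x 32` FloatTensor:

```lua
-- Compute per-channel mean/std from the training images, then
-- standardize each channel in place. trainData: [N x 3 x 32 x 32].
local mean, std = {}, {}
for c = 1, 3 do
  mean[c] = trainData[{ {}, c, {}, {} }]:mean()
  std[c]  = trainData[{ {}, c, {}, {} }]:std()
end
for c = 1, 3 do
  trainData[{ {}, c, {}, {} }]:add(-mean[c]):div(std[c])
end
```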

Can you please explain why the fully connected layers' weights are not initialized with MSRinit? How are they initialized?
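
For reference, the MSR (He) init targets convolutional modules; as I read the code, `nn.Linear` layers are simply left with Torch's default `reset()`, which samples uniformly from ±1/sqrt(fanIn). A sketch of the conv-only init:

```lua
require 'nn'

-- He-style init for conv layers only; nn.Linear keeps Torch's default
-- uniform(-1/sqrt(fanIn), 1/sqrt(fanIn)) initialization from reset().
local function MSRinit(model)
  for _, v in pairs(model:findModules('nn.SpatialConvolution')) do
    local n = v.kW * v.kH * v.nOutputPlane   -- kernel fan-out
    v.weight:normal(0, math.sqrt(2 / n))
    if v.bias then v.bias:zero() end
  end
end
```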

Hi, I want to know exactly what the test time in the training log measures: is it the time for one epoch, and what is its unit (ms or seconds)?

Just took another look at https://arxiv.org/pdf/1605.07146v1.pdf

> To summarize:
> • widening consistently improves performance across residual networks of different depth;
> • increasing both depth and width helps...

Hi, I've tried your code, which runs very well. I only have one small, naive question: I'm confused by the u and v in the dataset. Why are there 2...

Convolving over whitened, or really any preprocessed data (if the original data is not also fed in), seemed like a bad idea to me, so I tried using the standard...
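
For anyone unsure what whitened inputs look like: a minimal ZCA whitening sketch, not necessarily the exact pipeline used to produce the provided whitened .t7 files:

```lua
-- ZCA whitening sketch: decorrelate pixels while staying close to
-- the original image space. X: [N x D] FloatTensor of flattened images.
local function zcaWhiten(X, eps)
  eps = eps or 1e-5
  local mean = X:mean(1)
  local Xc = X - mean:expandAs(X)            -- center the data
  local sigma = Xc:t() * Xc / X:size(1)      -- D x D covariance
  local e, V = torch.symeig(sigma, 'V')      -- eigendecomposition
  local scale = torch.diag((e + eps):pow(-0.5))
  local W = V * scale * V:t()                -- ZCA transform
  return Xc * W
end
```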

Hi, my question seems a bit unrelated, but I am really curious, so sorry for interrupting. WRN uses a quite different weightDecay and learning-rate schedule from the one fb.resnet.torch uses....
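
For comparison, a sketch of the CIFAR schedule as described in the WRN paper (SGD with Nesterov momentum 0.9, weight decay 5e-4, learning rate 0.1 multiplied by 0.2 at epochs 60, 120, and 160); the repo's train.lua should be treated as authoritative:

```lua
-- WRN-style CIFAR schedule sketch for optim.sgd.
local function learningRate(epoch)
  local lr = 0.1
  if epoch >= 60  then lr = lr * 0.2 end
  if epoch >= 120 then lr = lr * 0.2 end
  if epoch >= 160 then lr = lr * 0.2 end
  return lr
end

local optimState = {
  learningRate = learningRate(1),
  momentum     = 0.9,
  nesterov     = true,
  dampening    = 0,       -- required by optim.sgd when nesterov = true
  weightDecay  = 5e-4,    -- vs 1e-4 in fb.resnet.torch defaults
}
```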