intrinsic-dimension

Training on ImageNet yields either exploding or constant loss

Open · LoryPack opened this issue 7 years ago · 1 comment

I am trying to train the squeezenet or alexnet architecture on a subset of the ImageNet dataset (in particular, using just a small number of classes). I have tried many choices of learning rate and all of the available optimizers; in all cases, even with regularization added, the network does not seem to be able to learn. With some combinations the loss diverges, while with others it remains roughly constant. I am training on a machine with 4 GPUs.

Do you know of any possible reasons for this problem?

LoryPack · May 17 '18 15:05
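(A generic diagnostic for the "roughly constant" loss described above, not specific to this codebase: compare the plateau value against chance-level cross-entropy. A softmax classifier that predicts uniformly over N classes has cross-entropy ln N, so a loss stuck near that value means the network is predicting at chance rather than learning. A minimal sketch:)

```python
import math

def chance_level_xent(num_classes: int) -> float:
    """Cross-entropy of a uniform prediction over num_classes classes.

    If the training loss plateaus near this value, the network is
    predicting at chance and has not learned anything yet.
    """
    return math.log(num_classes)

# For the full 1000-class ImageNet label space this is ~6.91; for a
# small subset of classes it is correspondingly lower.
print(chance_level_xent(1000))
```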

Hi,

First of all, I realized the build_squeezenet_fastfood function was unfortunately left out when we ported the repo; it has now been added.

The observed loss goes down with the following command, on a machine with 4 GPUs (note the unusually large minibatch size of 900 due to the data-parallel scheme used by horovod):

mpirun -np 4 ./train_distributed.py /data_local/imagenet/train.h5 /data_local/imagenet/val.h5 --arch squeeze --fastfoodproj --mb 900 --vsize 800000 -E 200 -L 0.001
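For context on the flags, a minimal sketch of the usual data-parallel arithmetic. It is not stated here whether --mb 900 is per worker or global; the sketch assumes the common horovod convention that each worker processes its own minibatch (so the global batch is np × mb), and illustrates the common linear learning-rate scaling heuristic. Neither assumption is confirmed to be what the training script actually does:

```python
def effective_batch(per_worker_mb: int, num_workers: int) -> int:
    # Under the usual horovod data-parallel convention, each worker
    # processes its own minibatch and gradients are averaged, so the
    # effective (global) batch is the per-worker size times the workers.
    return per_worker_mb * num_workers

def scaled_lr(base_lr: float, num_workers: int) -> float:
    # Linear learning-rate scaling heuristic: grow the rate with the
    # number of workers. Hypothetical here; the training script may or
    # may not apply such scaling internally.
    return base_lr * num_workers

print(effective_batch(900, 4))  # 3600
print(scaled_lr(0.001, 4))      # 0.004
```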

Screen printout of the first 512 iterations:

time: 14.091972. after training for 0 epochs: 0 (worker 1) val: l: 11.1579, l_xe: 11.1579, acc: 0.0011 (0.256s/i)
time: 14.093954. after training for 0 epochs: 0 (worker 3) val: l: 11.1966, l_xe: 11.1966, acc: 0.0011 (0.256s/i)
time: 14.370462. after training for 0 epochs: 0 (worker 0) val: l: 11.1552, l_xe: 11.1552, acc: 0.0013 (0.261s/i)
time: 14.577921. after training for 0 epochs: 0 (worker 2) val: l: 11.1439, l_xe: 11.1439, acc: 0.0015 (0.265s/i)
0 (worker 2) train: l: 11.0991, l_xe: 11.0991, acc: 0.0044 (2.74s/i) [4/713]
0 (worker 0) train: l: 11.4162, l_xe: 11.4162, acc: 0.0000 (2.94s/i)
0 (worker 3) train: l: 10.8558, l_xe: 10.8558, acc: 0.0044 (3.22s/i)
0 (worker 1) train: l: 11.3371, l_xe: 11.3371, acc: 0.0044 (3.22s/i)
1 (worker 0) train: l: 10.6719, l_xe: 10.6719, acc: 0.0000 (0.532s/i)
1 (worker 2) train: l: 10.6809, l_xe: 10.6809, acc: 0.0022 (0.532s/i)
1 (worker 3) train: l: 10.6065, l_xe: 10.6065, acc: 0.0022 (0.531s/i)
1 (worker 1) train: l: 10.8011, l_xe: 10.8011, acc: 0.0022 (0.531s/i)
2 (worker 0) train: l: 9.9489, l_xe: 9.9489, acc: 0.0000 (0.589s/i)
2 (worker 2) train: l: 9.9798, l_xe: 9.9798, acc: 0.0015 (0.589s/i)
2 (worker 3) train: l: 9.9178, l_xe: 9.9178, acc: 0.0015 (0.59s/i)
2 (worker 1) train: l: 9.9542, l_xe: 9.9542, acc: 0.0030 (0.59s/i)
4 (worker 0) train: l: 9.0065, l_xe: 9.0065, acc: 0.0000 (0.691s/i)
4 (worker 3) train: l: 8.9829, l_xe: 8.9829, acc: 0.0009 (0.692s/i)
4 (worker 1) train: l: 9.0255, l_xe: 9.0255, acc: 0.0018 (0.692s/i)
4 (worker 2) train: l: 9.0202, l_xe: 9.0202, acc: 0.0009 (0.692s/i)
8 (worker 2) train: l: 8.2770, l_xe: 8.2770, acc: 0.0005 (0.776s/i)
8 (worker 1) train: l: 8.2749, l_xe: 8.2749, acc: 0.0010 (0.776s/i)
8 (worker 3) train: l: 8.2479, l_xe: 8.2479, acc: 0.0005 (0.777s/i)
8 (worker 0) train: l: 8.2313, l_xe: 8.2313, acc: 0.0000 (0.778s/i)
16 (worker 3) train: l: 7.7050, l_xe: 7.7050, acc: 0.0003 (0.696s/i)
16 (worker 1) train: l: 7.7404, l_xe: 7.7404, acc: 0.0005 (0.695s/i)
16 (worker 2) train: l: 7.7294, l_xe: 7.7294, acc: 0.0005 (0.693s/i)
16 (worker 0) train: l: 7.7000, l_xe: 7.7000, acc: 0.0005 (0.696s/i)
32 (worker 1) train: l: 7.3546, l_xe: 7.3546, acc: 0.0009 (0.578s/i)
32 (worker 3) train: l: 7.3356, l_xe: 7.3356, acc: 0.0011 (0.578s/i)
32 (worker 0) train: l: 7.3341, l_xe: 7.3341, acc: 0.0007 (0.579s/i)
32 (worker 2) train: l: 7.3474, l_xe: 7.3474, acc: 0.0012 (0.579s/i)
64 (worker 0) train: l: 7.1258, l_xe: 7.1258, acc: 0.0011 (0.776s/i)
64 (worker 2) train: l: 7.1322, l_xe: 7.1322, acc: 0.0012 (0.776s/i)
64 (worker 3) train: l: 7.1273, l_xe: 7.1273, acc: 0.0011 (0.777s/i)
64 (worker 1) train: l: 7.1361, l_xe: 7.1361, acc: 0.0011 (0.777s/i)
100: Average iteration time over last 100 train iters: 0.716s
100: Average iteration time over last 100 train iters: 0.721s
100: Average iteration time over last 100 train iters: 0.721s
100: Average iteration time over last 100 train iters: 0.719s
128 (worker 1) train: l: 7.0186, l_xe: 7.0186, acc: 0.0019 (0.661s/i)
128 (worker 3) train: l: 7.0150, l_xe: 7.0150, acc: 0.0014 (0.662s/i)
128 (worker 2) train: l: 7.0163, l_xe: 7.0163, acc: 0.0016 (0.662s/i)
128 (worker 0) train: l: 7.0143, l_xe: 7.0143, acc: 0.0014 (0.662s/i)
200: Average iteration time over last 100 train iters: 0.682s (reported identically by all 4 workers)
256 (worker 0) train: l: 6.9505, l_xe: 6.9505, acc: 0.0018 (0.776s/i)
256 (worker 3) train: l: 6.9526, l_xe: 6.9526, acc: 0.0019 (0.776s/i)
256 (worker 2) train: l: 6.9527, l_xe: 6.9527, acc: 0.0019 (0.776s/i)
256 (worker 1) train: l: 6.9546, l_xe: 6.9546, acc: 0.0022 (0.777s/i)
300: Average iteration time over last 100 train iters: 0.701s (reported identically by all 4 workers)
400: Average iteration time over last 100 train iters: 0.669s (reported identically by all 4 workers)
500: Average iteration time over last 100 train iters: 0.688s (reported identically by all 4 workers)
512 (worker 2) train: l: 6.9017, l_xe: 6.9017, acc: 0.0027 (0.611s/i)
512 (worker 0) train: l: 6.9003, l_xe: 6.9003, acc: 0.0023 (0.613s/i)
512 (worker 1) train: l: 6.9045, l_xe: 6.9045, acc: 0.0027 (0.612s/i)
512 (worker 3) train: l: 6.9027, l_xe: 6.9027, acc: 0.0025 (0.611s/i)
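To follow the loss curve without scanning the raw output, lines like the printout above can be parsed with a short script. A minimal sketch; the regex assumes exactly the line format shown here, which may drift across versions of the training script:

```python
import re

# Matches train lines such as:
#   "64 (worker 0) train: l: 7.1258, l_xe: 7.1258, acc: 0.0011 (0.776s/i)"
TRAIN_RE = re.compile(
    r"(\d+) \(worker (\d+)\) train: l: ([\d.]+), l_xe: ([\d.]+), "
    r"acc: ([\d.]+) \(([\d.]+)s/i\)"
)

def parse_train_lines(text):
    """Return (iteration, worker, loss, accuracy) tuples from a training log."""
    return [
        (int(it), int(w), float(l), float(acc))
        for it, w, l, _lxe, acc, _t in TRAIN_RE.findall(text)
    ]

sample = "64 (worker 0) train: l: 7.1258, l_xe: 7.1258, acc: 0.0011 (0.776s/i)"
print(parse_train_lines(sample))  # [(64, 0, 7.1258, 0.0011)]
```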

mimosavvy · May 23 '18 04:05