BinaryNet
Out of memory
Hello, I saw in the issues section that someone else had the same problem, and I tried the referenced responses, but I still get the same error. My GPU is a GeForce 425M; I know it's an old GPU, but is there any way I can make this work? Can I use my CPU instead? Any suggestion is appreciated.
```
azadeh@azadeh:~/Downloads/BinaryNet$ th Main_BinaryNet_MNIST.lua -network BinaryNet_MNIST_Model
sh: 1: mk1ir: not found
0  10033182  10033182
[program started on Sat May 13 16:05:46 2017]
[command line arguments]
stcWeights      false
LR              0.015625
modelsFolder    ./Models/
batchSize       100
optimization    adam
preProcDir      /home/azadeh/Downloads/BinaryNet/PreProcData/MNIST
network         ./Models/BinaryNet_MNIST_Model
stcNeurons      true
constBatchSize  false
LRDecay         0
whiten          false
augment         false
load
nGPU            1
dp_prepro       false
format          rgb
save            /home/azadeh/Downloads/BinaryNet/Results/SatMay1316:05:392017
dataset         MNIST
normalization   simple
devid           1
visualize       1
type            cuda
threads         8
SBN             true
momentum        0
weightDecay     0
runningVal      true
epoch           -1
[----------------------]
==> Network
nn.Sequential {
  [input -> (1) -> (2) -> (3) -> (4) -> (5) -> (6) -> (7) -> (8) -> (9) -> (10) -> (11) -> (12) -> (13) -> (14) -> (15) -> output]
  (1): nn.View(-1, 784)
  (2): BinaryLinear(784 -> 2048)
  (3): BatchNormalizationShiftPow2
  (4): nn.HardTanh
  (5): BinarizedNeurons
  (6): BinaryLinear(2048 -> 2048)
  (7): BatchNormalizationShiftPow2
  (8): nn.HardTanh
  (9): BinarizedNeurons
  (10): BinaryLinear(2048 -> 2048)
  (11): BatchNormalizationShiftPow2
  (12): nn.HardTanh
  (13): BinarizedNeurons
  (14): BinaryLinear(2048 -> 10)
  (15): nn.BatchNormalization (2D) (10)
}
==>10033182 Parameters
==> Loss SqrtHingeEmbeddingCriterion
==> Starting Training
Epoch 1
THCudaCheck FAIL file=/tmp/luarocks_cutorch-scm-1-798/cutorch/lib/THC/generic/THCStorage.cu line=66 error=2 : out of memory
/home/azadeh/torch/install/bin/luajit: cuda runtime error (2) : out of memory at /tmp/luarocks_cutorch-scm-1-798/cutorch/lib/THC/generic/THCStorage.cu:66
stack traceback:
  [C]: at 0x7f2c9df84f90
  [C]: in function '__index'
  ./adaMax_binary_clip_shift.lua:71: in function 'adaMax_binary_clip_shift'
  Main_BinaryNet_MNIST.lua:233: in function 'Train'
  Main_BinaryNet_MNIST.lua:286: in main chunk
  [C]: in function 'dofile'
  ...adeh/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
  [C]: at 0x00405d50
```
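For reference, the GeForce 425M is a mobile part that typically ships with only about 1 GB of memory, so there is very little headroom for a 100-sample batch plus the Adam optimizer state. A minimal sketch, assuming only that the cutorch package is installed (which the trace above confirms), to print how much GPU memory is actually free before training; the file name is arbitrary:

```lua
-- check_gpu_mem.lua: print free vs. total memory on the current CUDA device.
-- Run with:  th check_gpu_mem.lua
require 'cutorch'

local devID = cutorch.getDevice()
local freeMem, totalMem = cutorch.getMemoryUsage(devID)
print(string.format('GPU %d: %.0f MB free of %.0f MB total',
                    devID, freeMem / (1024 * 1024), totalMem / (1024 * 1024)))
```

If the reported free memory is well below the total, closing other programs that use the GPU (the desktop itself also takes a share on a laptop) can recover some space.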
@Azadefamili switching to a smaller batch size may help.
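A sketch of what that could look like, reusing the command from the log above. The flag names are inferred from the printed option dump (batchSize, type) rather than checked against the script, and whether float is an accepted -type value for a CPU run is an assumption:

```sh
# Try a smaller batch first (flag name inferred from the printed option "batchSize"):
th Main_BinaryNet_MNIST.lua -network BinaryNet_MNIST_Model -batchSize 25

# If the GPU still runs out of memory, a CPU fallback might be possible
# (assumes the script accepts -type float; it will be far slower than CUDA):
th Main_BinaryNet_MNIST.lua -network BinaryNet_MNIST_Model -type float
```

Halving or quartering the batch shrinks the per-batch activations roughly proportionally, which is usually enough to get past an out-of-memory error in the training step.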