2016_person_re-ID

undefined function or variable 'dagnn.Square'

sde123 opened this issue 6 years ago • 6 comments

@layumi Hello, I am running demo_heatmap.m but I got an error:

undefined function or variable 'dagnn.Square'

I have installed matconvnet_beta23 with MATLAB R2014a. Could you please tell me what is wrong? Thank you.

sde123 avatar Sep 21 '17 14:09 sde123

Hi @sde123, I added some layers to MatConvNet and I also included these layers in this repo. In fact, you do not need to install the original MatConvNet; I have included all necessary files in this repo. You can just download and run it. More information can be found in the README.

layumi avatar Sep 22 '17 00:09 layumi
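
For context, custom element-wise layers in MatConvNet's DagNN wrapper are usually only a few lines. A minimal sketch of what a `dagnn.Square` layer could look like (the version bundled in this repo may differ):

```matlab
classdef Square < dagnn.ElementWise
  % Square: element-wise squaring layer, y = x.^2
  methods
    function outputs = forward(obj, inputs, params)
      outputs{1} = inputs{1}.^2 ;
    end

    function [derInputs, derParams] = backward(obj, inputs, params, derOutputs)
      % d/dx (x^2) = 2x, chained with the incoming derivative
      derInputs{1} = 2 * inputs{1} .* derOutputs{1} ;
      derParams = {} ;
    end
  end
end
```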

@layumi Hello, thank you. But when I run gpu_compile.m I got an error:

/home/dai/code/person_reidentification/5/Untitled Folder/2016_person_re-ID-master/matlab/src/bits/impl/bilinearsampler_gpu.cu(247): warning: variable "backward" was declared but never referenced
          detected during instantiation of "vl::ErrorCode vl::impl::bilinearsampler<vl::VLDT_GPU, type>::forward(vl::Context &, type *, const type *, const type *, size_t, size_t, size_t, size_t, size_t, size_t, size_t) [with type=float]" 
(364): here

/home/dai/code/person_reidentification/5/Untitled Folder/2016_person_re-ID-master/matlab/src/bits/impl/bilinearsampler_gpu.cu(247): warning: variable "backward" was declared but never referenced

Could you please tell me what is wrong? I am on Ubuntu 14.04 with MATLAB R2014a.

sde123 avatar Sep 22 '17 12:09 sde123

I haven't met such an error. Would you like to provide the whole log?

layumi avatar Sep 22 '17 12:09 layumi
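
Note that the nvcc messages quoted above are warnings rather than errors, so the actual failure is probably further down in the log. For reference, MatConvNet's GPU build is normally driven by `vl_compilenn`, roughly like this (a sketch; gpu_compile.m in this repo may use different paths and options):

```matlab
% Typical MatConvNet GPU compile call; 'cudaRoot' is illustrative and
% should point at the local CUDA installation.
vl_compilenn('enableGpu', true, ...
             'cudaRoot', '/usr/local/cuda', ...
             'cudaMethod', 'nvcc', ...
             'verbose', 1) ;
```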

@layumi Thank you. When I run train_id_net_res_2stream.m, because I only have one GPU, I added opts.gpus = 1 to cnn_train_dag.m, but I got this error:

train: epoch 01:   1/127:Error using  + 
Out of memory on device. To view more detail about available memory on the GPU, use 'gpuDevice()'. If
the problem persists, reset the GPU by calling 'gpuDevice(1)'.

Error in dagnn.Sum/forward (line 15)
        outputs{1} = outputs{1} + inputs{k} ;

Error in dagnn.Layer/forwardAdvanced (line 85)
      outputs = obj.forward(inputs, {net.params(par).value}) ;

Error in dagnn.DagNN/eval (line 91)
  obj.layers(l).block.forwardAdvanced(obj.layers(l)) ;

Error in cnn_train_dag>processEpoch (line 223)
      net.eval(inputs, params.derOutputs, 'holdOn', s < params.numSubBatches) ;

Error in cnn_train_dag (line 91)
    [net, state] = processEpoch(net, state, params, 'train',opts) ;

Error in train_id_net_res_2stream (line 34)
[net,info] = cnn_train_dag(net, imdb, @getBatch,opts) ;

Could you please tell me how to solve it? Thank you.

sde123 avatar Sep 23 '17 01:09 sde123

Your GPU is out of memory; you can try reducing the batch size.

dinggd avatar Sep 23 '17 02:09 dinggd
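
For example, the batch size is usually part of the options passed to cnn_train_dag (a sketch; the exact option names and defaults in train_id_net_res_2stream.m may differ):

```matlab
% Training options for cnn_train_dag; halve batchSize until the
% forward/backward pass fits in GPU memory.
opts.batchSize = 8 ;   % e.g. reduced from a larger default
opts.gpus = 1 ;        % train on the first (only) GPU
[net, info] = cnn_train_dag(net, imdb, @getBatch, opts) ;
```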

Thank you @gddingcs. net.conserveMemory = true; also helps (I have turned it on in the code). So @sde123, you can try a smaller batch size first.

layumi avatar Sep 23 '17 02:09 layumi
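
For reference, conserveMemory is a standard DagNN property that lowers peak GPU memory during net.eval:

```matlab
% Drop intermediate variables as soon as they are no longer needed;
% they can no longer be inspected after eval, but memory use falls.
net.conserveMemory = true ;
```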