marvin521
I am curious about the number of training samples. The authors say in the paper: "We perturb the training set by randomly rotating and translating the target face in 2D..."
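For reference, a minimal sketch of that kind of 2D perturbation using the Torch `image` package. The rotation and translation ranges below are placeholders, not the authors' actual settings:

```lua
require 'torch'
require 'image'

-- Hypothetical augmentation: randomly rotate and translate a face crop in 2D.
-- The ranges are assumptions for illustration, not the paper's values.
local function perturbFace(face)
  local theta = math.rad(torch.uniform(-10, 10))  -- random rotation, degrees -> radians
  local dx = torch.random(-5, 5)                  -- random horizontal shift in pixels
  local dy = torch.random(-5, 5)                  -- random vertical shift in pixels
  local out = image.rotate(face, theta, 'bilinear')
  out = image.translate(out, dx, dy)
  return out
end
```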
It has taken several hours and one epoch of training has still not finished.
As shown in the figure, the generated foreground is usually large. How can I adjust the size of the generated foreground?
I ran into a problem when I call SpatialPyramidPooling: `/usr/local/share/lua/5.1/inn/SpatialPyramidPooling.lua:12: attempt to call field 'Contiguous' (a nil value)`. Part of my code is: `model:add(nn.SpatialConvolutionMM(nstates[1], nstates[2], filtsize, filtsize)) model:add(nn.ReLU()) model:add(inn.SpatialPyramidPooling({{6,6},{4,4},{2,2}}))` How can...
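For what it's worth, this error usually means the installed `nn` package predates `nn.Contiguous`, which `inn.SpatialPyramidPooling` uses internally; updating `nn` and `inn` (e.g. via luarocks) is the usual fix. A minimal sketch to check the layer in isolation after updating, with arbitrary sizes:

```lua
require 'nn'
local inn = require 'inn'

-- Tiny model just to verify SpatialPyramidPooling runs; channel counts and
-- input size here are arbitrary, not the issue reporter's actual values.
local model = nn.Sequential()
model:add(nn.SpatialConvolutionMM(3, 16, 5, 5))
model:add(nn.ReLU())
model:add(inn.SpatialPyramidPooling({{6,6},{4,4},{2,2}}))

local out = model:forward(torch.rand(3, 64, 64))
print(out:size())  -- expect 16 * (36 + 16 + 4) = 896 pooled features
```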
Could you add it?