DeepLearnToolbox
In nntrain.m, the batch size (variable batchsize) is required to satisfy mod(m, batchsize) == 0
Hi, I am confused by some part of nntrain.m
In nntrain.m: numbatches = m / batchsize; assert(rem(numbatches, 1) == 0, 'numbatches must be a integer');
The batch size (variable batchsize) is supposed to satisfy mod(m, batchsize) == 0. This does not seem necessary. Why not set numbatches = ceil(m / batchsize)?
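(For context, the assert exists because of the batching loop further down in nntrain.m, which slices fixed-size index ranges out of the shuffled index vector kk. The snippet below is an illustrative paraphrase of that loop, not a verbatim quote, so details may differ from the shipped file.)

```matlab
% Paraphrase of the fixed-size batching in nntrain.m (illustrative only):
kk = randperm(m);                 % shuffle the m sample indices
for l = 1 : numbatches
    % If mod(m, batchsize) ~= 0 and numbatches were ceil(m / batchsize),
    % l * batchsize would exceed m on the last iteration and indexing kk
    % would throw an out-of-bounds error.
    batch_x = train_x(kk((l - 1) * batchsize + 1 : l * batchsize), :);
end
```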
You can probably do that if you want, but then you have to make sure the last batch is truncated so you don't run out of data and cause an error by exceeding an array index.
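A minimal sketch of that suggestion, assuming the variable names used in nntrain.m (m, batchsize, train_x, kk); the ceil and the min(...) clamp are the only changes relative to the existing loop, and this is not the shipped implementation:

```matlab
% Sketch only: allow m that is not a multiple of batchsize by clamping the
% end index, so the final batch is simply smaller instead of out of bounds.
numbatches = ceil(m / batchsize);
kk = randperm(m);                          % shuffle sample indices once per epoch
for l = 1 : numbatches
    first = (l - 1) * batchsize + 1;       % start index of this batch
    last  = min(l * batchsize, m);         % clamp: the last batch may be shorter
    batch_x = train_x(kk(first : last), :);
    % ... forward/backward pass and weight update as in nntrain.m ...
end
```

Note that any code inside the loop which assumes exactly batchsize rows per batch (for example, normalizing by batchsize) would also need to use size(batch_x, 1) instead.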