libdnn
It seems like the label is being fed into the network while training
I was wondering why you added 1 to the AffineTransform layer geometry in https://github.com/botonchou/libdnn/blob/master/src/nnet.cpp#L233
After some nosing around I found you feed the label into the network as one of its inputs! (I printed the data at https://github.com/botonchou/libdnn/blob/master/src/nn-train.cpp#L164). I don't think this is desired behaviour. Can you please explain this?
In `rand(out + 1, in + 1)`, the first `+1` is for the biases in this layer, and the second `+1` is reserved for the `+1` in the next layer ( `[x, 1]` ).
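A minimal sketch of that augmentation trick (this is illustrative Python, not libdnn's actual code, and the matrix values are made up): the input is extended with a trailing 1, the last column of the weight matrix holds the biases, and one extra row is fixed to `[0, ..., 0, 1]` so the output vector also ends in a 1 for the next layer.

```python
# Hypothetical sketch: why an affine layer can store an
# (out + 1) x (in + 1) matrix when inputs are augmented as [x, 1].

def affine_augmented(W, x_aug):
    """Multiply an (out+1) x (in+1) matrix by an augmented input [x..., 1].

    The last column of W holds the biases (they multiply the trailing 1),
    and the last row is fixed to [0, ..., 0, 1] so the output also ends
    in 1, ready for the next layer's bias column.
    """
    return [sum(w * v for w, v in zip(row, x_aug)) for row in W]

# 2 inputs -> 3 outputs, so W is (3 + 1) x (2 + 1)
W = [
    [1.0, 2.0, 0.5],   # output 0: two weights, then bias 0.5
    [0.0, 1.0, -1.0],  # output 1: two weights, then bias -1.0
    [3.0, 0.0, 2.0],   # output 2: two weights, then bias 2.0
    [0.0, 0.0, 1.0],   # reserved row: keeps the trailing 1 intact
]

x_aug = [2.0, 3.0, 1.0]  # input [2, 3] augmented with 1
y_aug = affine_augmented(W, x_aug)
print(y_aug)  # [8.5, 2.0, 8.0, 1.0] -- last entry stays 1
```

This also explains why the printed data always ends in a row of 1s: it is the augmentation constant, not the label.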
No, label won't be fed into the network. I'm pretty sure about that. Maybe you can tell me the command line arguments so that I can reproduce your situation.
I printed the data being fed into the network, it included my label. It is relatively easy to see when testing the XOR example I showed in issue #18.
I just printed it. It didn't include your label. The last row is always 1.
About the problem you encountered, I think it's because I count the number of `\n` characters to see how many lines are in the file. Since the data you provided (`xor.train.dat` and `xor.test.dat`) have no trailing `\n` at line 4, it might cause undefined behavior.
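The pitfall can be shown with a short sketch (illustrative Python, not libdnn's parser): counting `\n` characters undercounts by one when the file lacks a trailing newline, so a parser that sizes its buffers from that count would drop the last sample.

```python
# Hypothetical sketch: line counting via '\n' misses a final line
# that has no trailing newline.

def count_lines_by_newline(text):
    """Count lines the way a naive '\n'-counting parser would."""
    return text.count("\n")

with_trailing = "0 0 0\n0 1 1\n1 0 1\n1 1 0\n"
without_trailing = "0 0 0\n0 1 1\n1 0 1\n1 1 0"  # no '\n' after line 4

print(count_lines_by_newline(with_trailing))     # 4
print(count_lines_by_newline(without_trailing))  # 3 -- last line missed
```

A more robust count is `len(text.splitlines())`, which treats a final unterminated line as a line.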