Emmanuel Benazera
Caffe documentation and examples are a mess, but the comment that explains the required inputs to the RNN and LSTM recurrent layers is here: https://github.com/BVLC/caffe/pull/2033#issue-59849829 As expected this requires modifying...
`\delta` should be put into the `Datum` before storage as LMDB, in `CaffeInputConn.h`, much like the other decompositions. I've put it all on paper, it should not be long to...
hi @kyrs ! yes, `to_datum` converts one hot word or char vector sequences to Caffe Datum. You can write two lmdb files, they will be synced if you write the...
> hi @beniz, if you look into the files for generating `\delta` https://github.com/BVLC/caffe/pull/2033/files#diff-3a0266c4b6244affd2fd7505a2452f5fR193 you can easily see that all the padded words have a value of 0. But for our use...
You can change the format, but you could also use the characters instead of words to play with the LSTM.
I'm not sure why you are calling `to_datum` before filling up the `Datum`. Actually I believe the code should be executed within `to_datum`, though I may have missed something.
Yes you can change `to_datum`, otherwise you may get weird results by letting the datums be filled up before your code runs. Slicing is not difficult: just append the `deltas`...
You can slice in any dimension you want, even multiple times.
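For reference, slicing in Caffe is done with a `Slice` layer in the net prototxt; a sketch, assuming the deltas were appended along axis 1 after 100 word features (the blob names and sizes here are made up for illustration):

```
layer {
  name: "slice_data"
  type: "Slice"
  bottom: "data"
  top: "words"
  top: "deltas"
  slice_param {
    axis: 1         # dimension to slice along
    slice_point: 100 # first 100 channels -> words, the rest -> deltas
  }
}
```

Adding more `top` blobs and `slice_point` values splits the same blob more than once, and changing `axis` slices along a different dimension.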
The current padding for characters does preserve order. The one for words does not, since it is a bag-of-words model. But you could build one that has ordered...
hi @kyrs, best is to PR once you know that it works :) Have you tried training on an example? The IMDb dataset would be a good one to...