Hi @cwgreene, I appreciate the PR, but I just recently went through a scarring experience of merging someone else's PR that broke arxiv-sanity and required me to revert breaking...
I'm hesitant to change the previous default functionality. If you made this into an independent script, maybe `pdf_to_text_tet.py` or something, then I'd be happy to merge it as an...
Have you tried using the command `th` rather than `luajit`? This error almost certainly means that you somehow haven't followed the instructions for installing Torch in full: http://torch.ch/docs/getting-started.html#_
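For example (assuming this is char-rnn's `train.lua`; the point is just to launch through the `th` wrapper, which sets up Torch's package paths, instead of bare `luajit`):

```
# instead of: luajit train.lua ...
th train.lua -data_dir data/tinyshakespeare
```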
I assume this code is backwards-compatible with previous datasets?
This wouldn't be too difficult since most of the code doesn't know anything about characters, only about indexes. You'd have to modify the loader class to create word dictionaries instead,...
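A minimal sketch of the vocabulary part of that change (a hypothetical helper, not the actual loader code; assumes naive whitespace tokenization):

```lua
-- build a word-level vocabulary instead of a character-level one
local function build_word_vocab(text)
  local vocab = {}  -- word -> index
  local size = 0
  for word in text:gmatch("%S+") do  -- naive whitespace tokenization
    if vocab[word] == nil then
      size = size + 1
      vocab[word] = size
    end
  end
  return vocab, size
end
```

The rest of the code can then keep working with indices exactly as it does now.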
This makes sense; it might be necessary to add an Embedding layer just before the RNN to embed the words into a smaller-dimensional space. Since the LSTM operates linearly...
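In Torch this could be a `nn.LookupTable` in front of the LSTM; a rough sketch with made-up sizes (not the repo's actual code):

```lua
require 'nn'

local vocab_size, embed_dim = 10000, 128  -- hypothetical sizes
local embed = nn.LookupTable(vocab_size, embed_dim)

-- maps word indices to dense vectors the LSTM can consume:
local input = torch.LongTensor{5, 42, 7}  -- three word indices
local vectors = embed:forward(input)      -- 3 x 128 tensor
```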
Thanks! Curious - have you tested if this works better?
Can I ask what the motivation is for removing biases from that linear layer? (I haven't read the BN-LSTM papers yet.) Is this just to avoid redundancy? Also, is it...
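My guess at the redundancy, if that's the motivation: BatchNormalization subtracts the batch mean, so a constant bias added by the preceding Linear layer is cancelled out and BN's learnable shift (beta) plays the same role. An illustrative sketch (made-up sizes, not the PR's code):

```lua
require 'nn'

local block = nn.Sequential()
block:add(nn.Linear(128, 256, false))  -- bias=false: a bias would be cancelled by BN anyway
block:add(nn.BatchNormalization(256))  -- learnable shift (beta) takes over the bias's role
```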
Not right now I don't think, but good idea! What kind of use case would you have in mind for it?
It seems your dataset is very small. Can you try a smaller batch size? E.g. batch_size of 10 or maybe 20, and maybe also seq_length of 50?
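Something like this, assuming char-rnn's `train.lua` flags (`data/yourdata` is a placeholder for your data directory):

```
th train.lua -data_dir data/yourdata -batch_size 10 -seq_length 50
```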