Raphael Tang
I believe so.
A URA task would be to upload the pretrained weights and integrate the pipeline into the codebase.
Thanks for the PR. Have you run any benchmarks?
Okay, it's probably a good idea to note in some readme that the correct implementation of `char_cnn` produces an F1 of 0 on Reuters, so people don't waste their time...
I'll take a look at this issue soon. IIRC the `print_stats` function loads a lot of the files if `compute_length` is True.
Do you mean this file? https://github.com/castorini/howl/blob/master/howl/model/inference.py
Seems like I spoke too soon. Results fluctuate from the high 85s to the 87s.
The old implementation was about 0.5 points off for Pearson's r on the test set -- now it's off by closer to 2. The biggest changes from the old impl to now...
#99 #101 Maybe we can have all the shared modules in `/common/` and the model-specific stuff in the current directories. For example, the user runs something like `python -m conv_rnn`...
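To make this concrete, here's a rough sketch of the layout I'm picturing (file names are just placeholders):

```
common/          # shared modules: dataset loaders, trainers, evaluation, etc.
    __init__.py
    dataset.py
    trainer.py
conv_rnn/        # model-specific code stays in its current directory
    __init__.py
    __main__.py  # entry point, so `python -m conv_rnn` works
    model.py
<other models>/
    ...
```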
Hi, thanks for your interest. I've confirmed this issue. My guess is that the amount of padding depends on the batch size due to varying sentence lengths, and the resulting...
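To illustrate what I mean by batch-size-dependent padding, here's a toy example (not the project's code, just the general behavior of `pad_sequence`):

```python
# Padded length depends on the longest sentence that lands in a batch,
# so changing the batch size changes how much padding each sentence gets.
import torch
from torch.nn.utils.rnn import pad_sequence

sentences = [torch.ones(n) for n in (3, 5, 12, 4)]  # toy "sentences" of varying length

# Batch size 4: every sentence is padded to length 12.
print(pad_sequence(sentences, batch_first=True).shape)      # torch.Size([4, 12])

# Batch size 2: the first batch is only padded to length 5.
print(pad_sequence(sentences[:2], batch_first=True).shape)  # torch.Size([2, 5])
```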