QANet
TODOs
This is an umbrella issue where we can collectively tackle some problems and improve general open-source reading comprehension quality.
Goal
The network is already there. We just need to add more features on top of the current model.
- [x] Implement full features stated in the original paper
- [ ] Achieve the EM/F1 performance stated in the original paper in the single-model setting
Model
- [x] Increase the hidden units to 128. #15 reported a performance increase when the hidden units were increased from 96 to 128
- [ ] Increase the number of heads to 8
- [ ] Add dropouts in better locations to maximize regularization
- [ ] Train "unknown" word embedding
Data
- [ ] Implement paraphrasing by back-translation to increase the data size
Contributions to any of these items are welcome. Please comment on this issue and let us know if you want to work on any of these problems.
As of f0c79cc93dc1dfdad2bc8abb712a53d078814a56, I have moved the dropouts from "before" layer norm to "after" layer norm. It doesn't make sense to drop the input channels to layer norm: since layer norm normalizes across the channel dimension, dropping its inputs causes a distribution mismatch between training time and inference time. We shall see how this improves the model.
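For illustration, here is a minimal sketch of the intended ordering in a TF 1.x-style encoder sublayer. The function and its hyperparameters are hypothetical, not the repo's actual code; only the layer norm → dropout → op → residual ordering is the point.

```python
import tensorflow as tf

def encoder_sublayer(x, keep_prob, scope="sublayer"):
    """Sketch of one QANet-style encoder sublayer (hypothetical, not repo code).

    Old ordering (roughly): dropout -> layer_norm -> op -> residual.
    New ordering:           layer_norm -> dropout -> op -> residual,
    so layer norm always sees the full, undropped channel distribution
    at both training and inference time.
    """
    with tf.variable_scope(scope):
        residual = x
        y = tf.contrib.layers.layer_norm(x)   # normalize over channels first
        y = tf.nn.dropout(y, keep_prob)        # drop *after* normalization
        # Stand-in for the real sublayer op (depthwise conv / self-attention / FFN).
        y = tf.layers.conv1d(y, filters=x.get_shape().as_list()[-1],
                             kernel_size=7, padding="same")
        return residual + y
```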
To overcome your GPU memory constraints, what about just decreasing batch size?
On a 1080 Ti (11 GB), I'm able to run 128 hidden units, 8 attention heads, glove_dim 300, and char_dim 300 with a batch size of 12. At a batch size of 16 or above, CUDA runs out of memory. Accuracy seems comparable so far.
You have a valid point, and I would like to know how your experiment goes. I would also suggest trying group norm instead of layer norm, since the group norm paper reports better performance at small batch sizes.
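If anyone wants to try it, here is a rough sketch of what a drop-in group norm for our [batch, length, channels] tensors could look like. How to adapt group norm to sequences is an assumption on my part: this version normalizes each group of channels per position (keeping layer norm's per-position behaviour rather than also reducing over the length dimension, the way the group norm paper reduces over spatial dimensions for images).

```python
import tensorflow as tf

def group_norm_1d(x, groups=8, eps=1e-5, scope="group_norm"):
    """Minimal group norm over a [batch, length, channels] tensor (sketch).

    With groups=1 this reduces to per-position layer norm over channels.
    """
    with tf.variable_scope(scope):
        channels = x.get_shape().as_list()[-1]
        assert channels % groups == 0, "channels must be divisible by groups"
        dyn_shape = tf.shape(x)
        # Split channels into groups:
        # [batch, length, channels] -> [batch, length, groups, channels // groups]
        y = tf.reshape(x, [dyn_shape[0], dyn_shape[1], groups, channels // groups])
        # Normalize within each group, independently at every position.
        mean, var = tf.nn.moments(y, axes=[3], keep_dims=True)
        y = (y - mean) * tf.rsqrt(var + eps)
        y = tf.reshape(y, dyn_shape)
        # Learnable per-channel scale and shift, as in the group norm paper.
        gamma = tf.get_variable("gamma", [channels], initializer=tf.ones_initializer())
        beta = tf.get_variable("beta", [channels], initializer=tf.zeros_initializer())
        return y * gamma + beta
```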
Good suggestion, Min. Since the paper compares against batch norm, have you found that layer norm generally outperforms batch norm lately? One could also try batch norm for comparison. Interestingly, the 'break-even' point between batch norm and group norm is around batch size 12 under that paper's conditions. Layer norm is supposedly more robust to small mini-batches than batch norm.
Also, the configuration from the above comment runs fine on a 1070 GPU.
Do you have a sense of whether model parallelization across multiple GPUs is worth it for this type of model?
Hi @mikalyoung , I haven't tried parallelisation across multiple GPUs, so I don't know what the best way to go about it is. I hear that data parallelism is easier to get working than model parallelisation. From #15 it seems that a bigger hidden size and more attention heads improve the performance, so I would try fitting the bigger model across multiple GPUs with smaller per-GPU batches.
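For what it's worth, the usual way to do data parallelism in a TF 1.x graph is a multi-tower setup: split the batch, build the model once per GPU with shared variables, and average the gradients. A rough sketch, with a hypothetical `model_fn` standing in for whatever builds the QANet loss (none of these names are the repo's actual API):

```python
import tensorflow as tf

def data_parallel_train_op(model_fn, features, labels, optimizer, num_gpus=2):
    """Multi-tower data parallelism sketch (hypothetical names, TF 1.x style).

    `model_fn(features, labels)` is assumed to build the QANet graph and
    return a scalar loss; it is not the repo's actual function.
    Each GPU gets an equal slice of the batch and variables are shared.
    """
    feature_shards = tf.split(features, num_gpus, axis=0)
    label_shards = tf.split(labels, num_gpus, axis=0)
    tower_grads = []
    for i in range(num_gpus):
        with tf.device("/gpu:%d" % i), \
             tf.variable_scope(tf.get_variable_scope(), reuse=(i > 0)):
            loss = model_fn(feature_shards[i], label_shards[i])
            tower_grads.append(optimizer.compute_gradients(loss))
    # Average gradients variable-by-variable across towers.
    # Note: this assumes dense gradients; embedding lookups yield
    # IndexedSlices and would need extra handling.
    averaged = []
    for grads_and_vars in zip(*tower_grads):
        grads = [g for g, _ in grads_and_vars if g is not None]
        if not grads:
            continue
        var = grads_and_vars[0][1]
        averaged.append((tf.reduce_mean(tf.stack(grads), axis=0), var))
    return optimizer.apply_gradients(averaged)
```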
Right now what is the status reproducing the paper's result?