QANet-pytorch
why the dim of character vec set to 64?
Hey, hengruo! I've got a question to consult you about.

In config.py I found:

char_dim = 64 # Embedding dimension for char

But in the QANet paper, the authors state: "Each character is represented as a trainable vector of dimension p2 = 200, meaning each word can be viewed as the concatenation of the embedding vectors for each of its characters." So why is char_dim set to 64?

Also, when I run this repo, both the F1 score and EM are very low (F1 is only close to 10).

What's more, for

d_model = 96 # Dimension of connectors of each layer

shouldn't this be 128?

Do these settings affect the performance of the model? Thanks a lot, I'd appreciate it if you have time.
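For reference, a minimal sketch of what config.py would look like if the two values were changed to match the paper. The names char_dim and d_model come from this repo's config.py; the values 200 and 128 are the ones the QANet paper reports (p2 = 200 for the character embedding, d = 128 for the model dimension). Whether this alone fixes the low F1/EM is an open question.

```python
# Hypothetical config.py overrides matching the QANet paper's reported
# hyperparameters (not the repo's defaults of 64 and 96).
char_dim = 200  # paper: each character is a trainable vector of dimension p2 = 200
d_model = 128   # paper: hidden size d = 128 for the model's layers
```

Note that raising d_model from 96 to 128 also increases memory use, which may be why the repo ships smaller defaults.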
Hello, jewelChen! I ran into the same problem as you. May I ask whether you have solved it by now?