Language-Modeling-GatedCNN
Is the dimension of the convolution output the same as the embedding size?
I noticed that

`h, res_input = embed, embed`

and

`fanin_depth = h.get_shape()[-1]`

Is the dimension of the convolution output the same as the embedding size? Why?
Since there is no pooling, the height and width of the output layer remain the same. The depth is also kept constant for each layer, but could be made variable layer-wise. Contributions welcome!
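To illustrate the point about constant depth, here is a minimal numpy sketch (not the repo's code) of a single gated conv layer in the style of Dauphin et al., where the number of filters is deliberately set equal to the embedding size `m`. With that choice the output depth matches the input depth, so layers can be stacked and residual connections added without any reshaping. The names `seq_len`, `m`, and `k` are illustrative assumptions, not identifiers from the repo.

```python
import numpy as np

def gated_conv_layer(x, W, V):
    """x: (seq_len, m) input; W, V: (k, m, m) filters for the linear
    and gate paths. Returns (seq_len, m) via 'same' zero padding,
    computing a * sigmoid(b) elementwise (the gated linear unit)."""
    k, m, _ = W.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    seq_len = x.shape[0]
    a = np.zeros((seq_len, m))
    b = np.zeros((seq_len, m))
    for t in range(seq_len):
        window = xp[t:t + k]                  # (k, m) slice of the input
        a[t] = np.einsum('km,kmo->o', window, W)
        b[t] = np.einsum('km,kmo->o', window, V)
    return a * (1.0 / (1.0 + np.exp(-b)))     # gate the linear path

rng = np.random.default_rng(0)
seq_len, m, k = 10, 5, 3                      # toy sizes, chosen arbitrarily
embed = rng.standard_normal((seq_len, m))
W = rng.standard_normal((k, m, m)) * 0.1
V = rng.standard_normal((k, m, m)) * 0.1
out = gated_conv_layer(embed, W, V)
print(out.shape)  # (10, 5): output depth equals the embedding size m
```

Because `out` has the same shape as `embed`, `res_input + out` is well-defined, which is presumably why the implementation keeps the conv output depth tied to the embedding size.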
But based on Figure 1 in the paper, the conv output size (3) differs from the embedding size (5). Is "n" the number of conv filters and "m" the embedding size? Or is m == n?
That's an interesting observation. However, the paper states that X (the input to any hidden layer h) has dimension N×m, and this input can be either the word embeddings or the output of previous layers. So I think clarification from the authors would be needed on this point.