caml-mimic
padding, softmax, embeddings
Hi,
I have two questions regarding the CAML implementation:
- All the texts in a batch are padded, but the input to the softmax function is not masked. Hence, this implementation also assigns positive attention to the padding tokens, right? Am I missing something here?
- The embedding vector for the padding token does not seem to be fixed to the zero vector. Or if it is, where is that constraint implemented? (I guess it wouldn't make a difference if the first point were handled differently, i.e. if the attention weights for padding tokens were fixed to 0. See the toy example below.)
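A toy illustration of both points (not code from this repo; the shapes, vocabulary size, and pad index 0 are only assumptions for the example):

```python
import torch
import torch.nn.functional as F

# Unmasked softmax over a padded score vector: 3 real tokens + 2 pads.
scores = torch.tensor([2.0, 1.0, 0.5, 0.3, 0.3])
alpha = F.softmax(scores, dim=0)
print(alpha)  # the last two (padding) positions still receive positive attention

# nn.Embedding only pins the pad vector to zero if padding_idx is passed.
emb = torch.nn.Embedding(num_embeddings=100, embedding_dim=4, padding_idx=0)
print(emb.weight[0])  # this row is initialized to zeros and its gradient is zeroed, so it stays zero
```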
Many thanks!
This should be fixed by the above PR, although in my experience it doesn't really change the results.
No, the PR doesn't fix everything. In my experience, fixing the embedding of the padding tokens does not change much, but masking the softmax input does.
I see what you mean. I'll look into it.
I have the same question about taking the softmax to compute attention weights. I rewrote my code to explicitly truncate each sample in the batch (quite inefficient). Some preliminary results show about a 3-4% drop for the simple case of the base CNN with the 50 most common labels. Would anyone be able to chime in on this issue? Thanks.
This line still does not apply any masking when computing the attention weights: https://github.com/jamesmullenbach/caml-mimic/blob/master/learn/models.py#L184
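For what it's worth, here is a minimal sketch of what masking the softmax input could look like, assuming attention logits of shape (batch, num_labels, seq_len) and a boolean padding mask derived from the input ids; the function name and shapes are illustrative, not taken from the repo:

```python
import torch
import torch.nn.functional as F

def masked_softmax_attention(scores, pad_mask):
    # scores:   (batch, num_labels, seq_len) attention logits
    # pad_mask: (batch, seq_len), True where the token is padding
    scores = scores.masked_fill(pad_mask.unsqueeze(1), float('-inf'))
    return F.softmax(scores, dim=2)

# Example: batch of 2, 3 labels, seq_len 4; the last two tokens of the
# second document are padding.
scores = torch.randn(2, 3, 4)
pad_mask = torch.tensor([[False, False, False, False],
                         [False, False, True,  True]])
alpha = masked_softmax_attention(scores, pad_mask)
print(alpha[1].sum(dim=1))  # each row still sums to 1
print(alpha[1][:, 2:])      # padding positions get exactly 0 attention
```

Setting the padding positions to -inf before the softmax keeps the operation fully batched, so there should be no need to truncate each sample individually.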