practical_seq2seq
about padding
Hi, thanks for your project. In your code I haven't seen anything that handles padding in the data. Is that intentional? In many other projects I've seen, a function maps the pad index to a zero embedding and discards the loss contributed by padded positions.
@chenwangliangguo Ah, I missed that. We need to adjust loss_weights to set zero weights at the zero-padded positions. I'll work on it ASAP. The current code uses uniform weights of 1 everywhere:
loss_weights = [ tf.ones_like(label, dtype=tf.float32) for label in self.labels ]
self.loss = tf.nn.seq2seq.sequence_loss(self.decode_outputs, self.labels, loss_weights, yvocab_size)
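The fix described above (zero weights at padded positions, so padding contributes nothing to the loss) can be sketched framework-independently in NumPy. This is an illustrative sketch, not code from the repo: the PAD id of 0 and the time-major list-of-arrays layout are assumptions matching the old tf.nn.seq2seq API, and masked_loss_weights / masked_mean_loss are hypothetical helper names.

```python
import numpy as np

PAD = 0  # assumed padding token id

def masked_loss_weights(labels):
    """Weight 1.0 for real tokens, 0.0 at PAD positions.

    `labels` is a time-major list of per-step label arrays,
    mirroring the list-of-tensors layout of the old seq2seq API.
    """
    return [(np.asarray(step) != PAD).astype(np.float32) for step in labels]

def masked_mean_loss(per_step_losses, weights):
    """Average per-step losses over non-pad positions only."""
    w = np.stack(weights)
    l = np.stack(per_step_losses)
    # Divide by the number of real tokens, not by the padded length.
    return float((l * w).sum() / np.maximum(w.sum(), 1.0))

# Batch of 2 sequences, max length 2; second sequence is padded at step 2.
labels = [np.array([3, 5]), np.array([1, PAD])]
weights = masked_loss_weights(labels)        # [[1, 1], [1, 0]]
losses = [np.array([2.0, 2.0]), np.array([2.0, 10.0])]
loss = masked_mean_loss(losses, weights)     # (2 + 2 + 2) / 3 = 2.0
```

Without the mask, the large loss at the padded position (10.0) would be averaged in; with it, only the three real tokens count.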
loss_weights = [ tf.ones_like(label, dtype=tf.float32) for label in self.labels ]
self.loss = tf.nn.seq2seq.sequence_loss(self.decode_outputs, self.labels, loss_weights, yvocab_size)
What does this code mean? And what is yvocab_size used for?