
about padding

Open wangliangguo opened this issue 8 years ago • 2 comments

Hi, thanks for your project. In your code I haven't seen any handling of padding in the data. Is that intentional? In many other implementations I've seen, a function maps the pad index to a zero embedding and discards the loss contribution from padded positions.

wangliangguo avatar Jan 13 '17 01:01 wangliangguo

@chenwangliangguo Ah, I missed that. We need to modify loss_weights to set zero weights at the zero-padded positions. I'll work on it ASAP. The current code uses uniform weights:

```python
loss_weights = [tf.ones_like(label, dtype=tf.float32) for label in self.labels]
self.loss = tf.nn.seq2seq.sequence_loss(self.decode_outputs, self.labels, loss_weights, yvocab_size)
```
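For anyone wondering what the fix would look like: below is a minimal NumPy sketch (not the project's code; `PAD`, `masked_loss_weights`, and `masked_mean_loss` are hypothetical names, and PAD index 0 is an assumption) showing how zero weights at padded positions keep padding out of the averaged loss:

```python
import numpy as np

PAD = 0  # assumed index of the padding token

def masked_loss_weights(labels):
    """Per-position weights: 1.0 for real tokens, 0.0 where the label is PAD."""
    return (np.asarray(labels) != PAD).astype(np.float32)

def masked_mean_loss(per_token_loss, labels):
    """Average the per-token loss over non-padding positions only."""
    w = masked_loss_weights(labels)
    return float((per_token_loss * w).sum() / w.sum())

# Two sequences of length 5, padded with zeros.
labels = np.array([[4, 7, 2, 0, 0],
                   [5, 3, 0, 0, 0]])
# Made-up per-token losses; the large 9.0 values sit on padded positions.
per_token_loss = np.array([[1.0, 1.0, 1.0, 9.0, 9.0],
                           [2.0, 2.0, 9.0, 9.0, 9.0]])

print(masked_mean_loss(per_token_loss, labels))  # → 1.4 (the 9.0s are ignored)
```

With uniform weights (all ones, as in the snippet above), the 9.0 losses on the padded positions would be averaged in; the mask restricts the average to the five real tokens.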

suriyadeepan avatar Jan 20 '17 01:01 suriyadeepan

```python
loss_weights = [tf.ones_like(label, dtype=tf.float32) for label in self.labels]
self.loss = tf.nn.seq2seq.sequence_loss(self.decode_outputs, self.labels, loss_weights, yvocab_size)
```

What does this mean? And what is `yvocab_size` used for?

XianhuiLin avatar Jan 24 '17 15:01 XianhuiLin