
Consistency with the original paper?

Open amirj opened this issue 8 years ago • 9 comments

Could you reproduce the result of language modeling in the original paper?

amirj avatar Jan 30 '17 16:01 amirj

or translation experiments?

amirj avatar Feb 01 '17 17:02 amirj

No, I could not. Not sure why. Check out this alternate implementation https://github.com/buriburisuri/ByteNet , though from the results I don't think it is able to reproduce the paper results either.

paarthneekhara avatar Feb 02 '17 18:02 paarthneekhara

Thanks for the great work! I compared the current implementation with the paper and noticed a few discrepancies:

  • Section 3.3 only applies to the translation model, but the implemented prediction model has an embedding of size n x 2d (w_source_embedding). The paper does not state the correct embedding size, but intuitively I would set it to n x c with c being the number of characters.

  • The authors write: "Each layer is wrapped in a residual block that contains additional convolutional layers with filters of size 1 × 1 (He et al., 2016)." decode_layer() does not seem to initialise any filters of size 1 x 1.

  • The authors use Multiplicative Units instead of ReLUs for the prediction model (section 3.6). See section 4.1 of this paper for an explanation of Multiplicative Units (a sketch follows this list).

  • Layer Normalisation should be used before the activation function (section 3.6).
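
A minimal sketch of the Multiplicative Unit referenced above, assuming TensorFlow 1.x; the function multiplicative_unit, its scope names, and the use of tf.layers.conv1d are illustrative rather than part of this repository:

```python
import tensorflow as tf

def multiplicative_unit(h, filters, filter_width, dilation, name):
    # MU(h) = g1 * tanh(g2 * h + g3 * u), where the gates g1..g3 and the
    # update u each come from their own dilated convolution over h.
    # Assumes h already has `filters` channels so the element-wise
    # products are well defined.
    with tf.variable_scope(name):
        def conv(x, scope):
            # 'same' padding for brevity; a causal decoder would left-pad only.
            return tf.layers.conv1d(x, filters, filter_width,
                                    dilation_rate=dilation,
                                    padding='same', name=scope)
        g1 = tf.sigmoid(conv(h, 'g1'))
        g2 = tf.sigmoid(conv(h, 'g2'))
        g3 = tf.sigmoid(conv(h, 'g3'))
        u = tf.tanh(conv(h, 'u'))
        return g1 * tf.tanh(g2 * h + g3 * u)
```

In the prediction model this would take the place of the ReLU blocks inside the residual layers.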

tindzk avatar Apr 02 '17 10:04 tindzk

Can you quantify what sections of the code you feel you are confident align with the paper itself and which parts you are uncertain about?

speakerjohnash avatar Apr 08 '17 21:04 speakerjohnash

@tindzk

  1. Yes, the author does not mention the embedding size for the prediction model. You can try setting it to c, but intuitively I don't think that will significantly improve performance.

  2. The decode layer does in fact wrap the dilated convolution in 1x1 convolutions (see conv1 and conv2); the default filter width in ops.conv1d is 1.

     ```python
     def decode_layer(self, input_, dilation, layer_no):
         options = self.options

         # 1x1 convolution (ops.conv1d defaults to filter width 1)
         # projecting down to the residual channels.
         relu1 = tf.nn.relu(input_, name='dec_relu1_layer{}'.format(layer_no))
         conv1 = ops.conv1d(relu1, options['residual_channels'],
             name='dec_conv1d_1_layer{}'.format(layer_no))

         # Causal dilated convolution in the middle of the residual block.
         relu2 = tf.nn.relu(conv1, name='enc_relu2_layer{}'.format(layer_no))
         dilated_conv = ops.dilated_conv1d(relu2, options['residual_channels'],
             dilation, options['decoder_filter_width'],
             causal=True,
             name="dec_dilated_conv_laye{}".format(layer_no))

         # 1x1 convolution projecting back up to 2 * residual_channels.
         relu3 = tf.nn.relu(dilated_conv, name='dec_relu3_layer{}'.format(layer_no))
         conv2 = ops.conv1d(relu3, 2 * options['residual_channels'],
             name='dec_conv1d_2_layer{}'.format(layer_no))

         # Residual connection around the whole block.
         return input_ + conv2
     ```
    
  3. Yes, the multiplicative units have not been implemented for the prediction model. You can try doing that to check if that improves the performance.

  4. Layer normalisation appears to be a revision in a later version of the paper; the earlier version used sub-batch normalisation, see https://arxiv.org/pdf/1610.10099v1.pdf. I'll implement this soon (a sketch follows below).
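
A minimal sketch of layer normalisation applied before the activation, assuming TensorFlow 1.x's tf.contrib.layers.layer_norm; the helper name ln_relu is just for illustration:

```python
import tensorflow as tf

def ln_relu(x, name):
    # Normalise over the feature dimension first, then apply the
    # non-linearity ("layer norm before activation", section 3.6).
    normed = tf.contrib.layers.layer_norm(x, scope=name + '_ln')
    return tf.nn.relu(normed, name=name + '_relu')

# e.g. inside decode_layer, each bare tf.nn.relu(...) call could become
# ln_relu(..., 'dec_layer{}'.format(layer_no)) instead.
```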

paarthneekhara avatar Apr 09 '17 15:04 paarthneekhara

@msevrens I think everything except layer normalisation is in line with the paper. I cannot guarantee a bug-free implementation since the results do not match the paper's. I'll be working on this in the coming week.

paarthneekhara avatar Apr 09 '17 16:04 paarthneekhara

@paarthneekhara

If you replace the output embedding targets with word embeddings, your code actually translates semantically very well but syntactically poorly. It also overestimates common tokens in translation.

I think you can see this pattern when the output targets are character embeddings too: uncommon collections of characters are predicted easily even when the sequences are long and complex, but random injections of common tokens are interspersed between the correctly learned sequences.

I don't know how that information can help others in recreating the paper but there it is.

speakerjohnash avatar Apr 10 '17 17:04 speakerjohnash

@paarthneekhara

Why is the target embedding going into the decoder? How does that work at evaluation time with no target to feed?
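
For context, character-level autoregressive decoders usually feed the ground-truth target only during training (teacher forcing); at evaluation time the decoder consumes the characters it has already generated. A hypothetical greedy loop along those lines, where next_char_logits is an assumed callable and not an API of this repository:

```python
import numpy as np

def greedy_decode(next_char_logits, source_ids, max_len, eos_id):
    # next_char_logits(source_ids, prefix) is assumed to return logits over
    # the character vocabulary for the next position. At evaluation time the
    # "target" fed to the decoder is just the prefix generated so far.
    prefix = []
    for _ in range(max_len):
        logits = next_char_logits(source_ids, prefix)
        next_id = int(np.argmax(logits))
        if next_id == eos_id:
            break
        prefix.append(next_id)
    return prefix
```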

speakerjohnash avatar Apr 17 '17 13:04 speakerjohnash

Just an update in case you are still following this. I rewrote almost the complete model. I noticed there was a bug in the encoder for the translation model earlier (in the dilated conv1d implementation). I have updated the ops and training is in progress. It looks good, but it will take time to train since I am not using a powerful GPU.

Also, the model is now neater and faster. It accepts variable-length sentences, as opposed to the previous implementation in which the sentence length had to be specified when initializing the model.
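
For reference, a sketch of the kind of causal dilated conv1d being discussed, assuming TensorFlow 1.x; this is not necessarily the fix made here, just an illustration of an op that supports variable-length [batch, time, channels] inputs:

```python
import tensorflow as tf

def causal_dilated_conv1d(x, filters, filter_width, dilation, name):
    # Left-padding by (filter_width - 1) * dilation keeps the output the same
    # length as the input and ensures position t never sees positions > t.
    pad = (filter_width - 1) * dilation
    x_padded = tf.pad(x, [[0, 0], [pad, 0], [0, 0]])
    return tf.layers.conv1d(x_padded, filters, filter_width,
                            dilation_rate=dilation, padding='valid',
                            name=name)

# Variable-length input: leave the time dimension unspecified, e.g.
# inputs = tf.placeholder(tf.float32, [None, None, channels])
```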

paarthneekhara avatar Aug 23 '17 14:08 paarthneekhara