
SRU module doesn't appear to use residual/skip connections

Open NickShahML opened this issue 6 years ago • 5 comments

@taolei87 thanks for the repo again. Really good code.

One thing I noticed while analyzing the code is that the SRU class doesn't seem to have skip connections. Shouldn't it be:

        prevx = input
        lstc = []
        for i, rnn in enumerate(self.rnn_lst):
            h, c = rnn(prevx, c0[i])
            prevx = prevx + h  # you currently have prevx = h
            lstc.append(c)

In this way the connections are residual, which is useful when stacking multiple layers.

NickShahML avatar Sep 13 '17 12:09 NickShahML

hi @NickShahML

We use highway connections (Eq. 7) instead of identity (residual) connections; this is implemented in the CUDA code.
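Roughly, the two alternatives look like this per layer (a minimal sketch with illustrative tensors x for the layer input, g_c for the transformed state g(c_t), and r for the gate; this is not the actual CUDA kernel):

def highway_output(x, g_c, r):
    # Highway-style skip (roughly Eq. 7): the gate r interpolates between
    # the transformed state and the raw layer input.
    return r * g_c + (1.0 - r) * x

def identity_output(x, g_c):
    # Identity/residual skip: simply add the input back; no extra gate parameters.
    return g_c + x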

Comparing highway with identity (or a version without any skip connections) is a TODO.

I would love to hear feedback from you as well :). thanks!

taolei87 avatar Sep 13 '17 15:09 taolei87

similar question #9

taolei87 avatar Sep 13 '17 15:09 taolei87

Gotcha @taolei87. I'll need to modify the CUDA code to do the residual addition as I suggested above. Right now I don't have the time, but I can't imagine it being too difficult. In my experience, residual connections always perform better than highway connections for RNNs, and they're much cheaper since they add no extra gate parameters.

NickShahML avatar Sep 13 '17 15:09 NickShahML

@NickShahML I tried residual connections briefly on the ICML language modeling task. The training loss decreased much more slowly compared to using highway, so I stopped, given time & resource constraints.

Of course, I might not have done this very carefully or thoroughly. Would love to hear your feedback. Thanks!

taolei87 avatar Sep 13 '17 15:09 taolei87

@taolei87 Thanks for the update. It's unfortunate that you're getting this result. I looked through your commits and couldn't find where you implemented this change. Do you mind pushing the code so that I can check your implementation?

Basically, each layer's input should be added element-wise to its output before being passed on to the next layer.
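For example, a per-layer wrapper along these lines (names here are hypothetical; the projection is only needed if a layer changes the hidden size, since an element-wise add requires matching shapes):

import torch.nn as nn

class ResidualWrapper(nn.Module):
    # Hypothetical helper: wraps one recurrent layer and adds its input to its output.
    def __init__(self, rnn, in_size, out_size):
        super().__init__()
        self.rnn = rnn
        # Element-wise addition needs matching widths, so project if they differ.
        self.proj = nn.Identity() if in_size == out_size else nn.Linear(in_size, out_size)

    def forward(self, x, c0=None):
        h, c = self.rnn(x, c0)
        return self.proj(x) + h, c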

Another avenue that I think could be extremely powerful is to apply self-attention at each layer. It would be best to use multiplicative (dot-product) attention with 8 heads, as they do in this paper:

https://arxiv.org/abs/1706.03762

The idea is this:

output = SRUCell_Zero(input)
output += self_attention(output) / tf.sqrt(num_neurons)  # 8 heads concatenated, then added element-wise to the output
output += SRUCell_One(output)
# Repeat the attention and cell blocks for as many layers as you want.

The idea here is that we can attend to multiple parts of the input in parallel, which is computationally very fast. One thing we would need to decide is whether to mask future inputs from being attended to. If you're doing a language modeling task, for example, the network can simply memorize the future inputs through this attention mechanism. However, if you're doing a classification task, then masking isn't needed at all since the full sequence is already available.
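A rough sketch of the masking part, assuming hypothetical projection weights wq, wk, wv of shape (hidden, hidden) and an input of shape (seq_len, batch, hidden); this is just the attention sub-block, not tied to the SRU code:

import math
import torch
import torch.nn.functional as F

def causal_multihead_self_attention(x, wq, wk, wv, num_heads=8):
    # x: (seq_len, batch, hidden). Output projection, dropout, etc. are omitted.
    seq_len, batch, hidden = x.shape
    d_head = hidden // num_heads

    def split_heads(t):
        # (seq, batch, hidden) -> (batch * heads, seq, d_head)
        return t.view(seq_len, batch * num_heads, d_head).transpose(0, 1)

    q, k, v = split_heads(x @ wq), split_heads(x @ wk), split_heads(x @ wv)

    # Scaled dot-product scores; mask the upper triangle so position t cannot
    # attend to positions > t (needed for language modeling).
    scores = q @ k.transpose(1, 2) / math.sqrt(d_head)
    future = torch.triu(torch.ones(seq_len, seq_len, device=x.device), diagonal=1).bool()
    scores = scores.masked_fill(future, float('-inf'))

    attended = F.softmax(scores, dim=-1) @ v  # (batch * heads, seq, d_head)
    attended = attended.transpose(0, 1).contiguous().view(seq_len, batch, hidden)
    return x + attended  # residual add, as in the sketch above

For a classification task, the masking line can simply be dropped, as noted above.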

NickShahML avatar Sep 15 '17 12:09 NickShahML