Does the Reformer have more parameters than the baseline?
Regarding Reformer: paper | code
From paper:
… show that it performs the same as the normal Transformer when using the same number of parameters; we achieve this by having both x1 and x2 have size d_model.
I see how the parameters of the Attention and MLP layers do not increase. But what about (1) the embedding layer and (2) the final projection layer?
Question 0. Why do the parameters of the initial embedding layer not increase if we double d_model?
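For concreteness, here is the back-of-the-envelope count behind my question. This is not trax code, and the vocab_size / d_model values are made up; it just shows that if the reversible stream really needed an input of width 2 * d_model, both the embedding table and the final vocabulary projection would double in size.

```python
# Illustrative parameter counts only; the numbers below are assumptions,
# not values from the paper or the trax implementation.
vocab_size = 32_000
d_model = 512

# Embedding table: one d_model-sized vector per token id.
embed_params = vocab_size * d_model
# If the input to the reversible stack had to be 2 * d_model wide
# (one half for x1, one half for x2), the table would double:
embed_params_doubled = vocab_size * (2 * d_model)

# Final projection back to the vocabulary (ignoring weight tying):
proj_params = d_model * vocab_size
proj_params_doubled = (2 * d_model) * vocab_size

print(f"embedding:  {embed_params:,} vs doubled: {embed_params_doubled:,}")
print(f"projection: {proj_params:,} vs doubled: {proj_params_doubled:,}")
```

My guess is that x1 and x2 are simply two copies of the same d_model-wide embedding output, so these counts stay unchanged, but I'd like to confirm that.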