metaformer
MetaFormer has no positional encoding?
I notice that MetaFormer has no positional encoding (PE), either in the attention layers or at the model input. Does this affect performance? Is positional encoding unnecessary here? What would happen if MetaFormer were equipped with a 2D sin-cos or learned PE?
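For concreteness, here is a minimal sketch of the fixed 2D sin-cos PE I have in mind (the `sincos_2d` helper and the 14x14/64-dim shapes are hypothetical illustrations, not from the MetaFormer codebase); it would simply be added to the token sequence right after patch embedding:

```python
import numpy as np

def sincos_1d(embed_dim, pos):
    # 1D sin-cos embedding: pos is (M,) positions, returns (M, embed_dim).
    assert embed_dim % 2 == 0
    omega = 1.0 / 10000 ** (np.arange(embed_dim // 2) / (embed_dim / 2.0))
    out = np.outer(pos, omega)  # (M, embed_dim // 2)
    return np.concatenate([np.sin(out), np.cos(out)], axis=1)

def sincos_2d(embed_dim, grid_size):
    # 2D variant: half the channels encode the row index, half the column.
    assert embed_dim % 4 == 0
    coords = np.arange(grid_size, dtype=np.float32)
    y, x = np.meshgrid(coords, coords, indexing="ij")
    emb_y = sincos_1d(embed_dim // 2, y.reshape(-1))
    emb_x = sincos_1d(embed_dim // 2, x.reshape(-1))
    return np.concatenate([emb_y, emb_x], axis=1)  # (grid_size**2, embed_dim)

# e.g. a 14x14 token grid with 64-dim embeddings (illustrative sizes only)
pe = sincos_2d(64, 14)
print(pe.shape)  # (196, 64); tokens = tokens + pe would inject position info
```

A learned PE would instead be a trainable `(num_tokens, embed_dim)` parameter added the same way.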