average attention mask
Describe the requested improvement
In their paper on L-TAE (https://arxiv.org/pdf/2007.00586.pdf), the authors describe a method called an "attention mask", which aims at locating the discriminant sources of data for each class.
Associated sits API function
Is there a way to implement this approach in sits?
Dear @vparmain,
The main difference between the TAE and LightTAE methods proposed by Garnot et al. is that LightTAE uses the average attention mask to reduce the number of parameters of the model. Both TAE and LightTAE are available in sits through the sits_tae() and sits_lighttae() functions.
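For example, here is a minimal sketch of how either method can be selected for training; the samples object is a placeholder for any labelled sits samples tibble, and all hyper-parameters are left at their defaults:

```r
# Minimal sketch (not from the sits documentation): training the two
# attention-based classifiers mentioned above. 'samples' is a placeholder
# for any labelled sits samples tibble.
library(sits)

# e.g. a set of labelled time series previously obtained with sits_get_data()
samples <- readRDS("my_labelled_samples.rds")  # placeholder input

# Temporal Attention Encoder (TAE)
tae_model <- sits_train(samples, sits_tae())

# Lightweight Temporal Attention Encoder (LightTAE)
lighttae_model <- sits_train(samples, sits_lighttae())
```

Either model can then be passed to sits_classify() in the usual way.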
To implement sits_lighttae(), we used the papers by Vivien Garnot and the reference code at
https://github.com/VSainteuf/lightweight-temporal-attention-pytorch
We also used the code made available by Maja Schneider at
https://github.com/maja601/RC2020-psetae
Please let us know if this is what you requested.