There is no way of passing padding_mode to nn.Conv1d through InceptionTime/InceptionTimePlus
I've been playing around with the InceptionTime and InceptionModulePlus models and gathering the features captured by the CNN layers with respect to different filters. In the meantime, I would also like to try out different padding strategies for the Conv1d modules and compare their effects.
However, there seems to be no way of passing the padding_mode parameter to nn.Conv1d through the constructors of either model. I then noticed that there is a padding parameter, but it accepts neither 'zero' nor 'zeros'. Moreover, simply setting padding to "valid" or to 0 causes a dimension-mismatch error (with seq_len=1000 and ks=40) like:
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 962 but got size 982 for tensor number 1 in the list.
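For reference, here is a minimal reproduction of the mismatch outside of tsai. The kernel sizes 39 and 19 are my assumption of what ks=40 gets reduced to inside the Inception module; the point is just that with padding='valid' each branch shrinks the sequence by a different amount, so the branch outputs can no longer be concatenated:

```python
# Minimal sketch (assumed kernel sizes 39 and 19): with padding='valid',
# parallel branches produce different sequence lengths, so the channel-wise
# concatenation inside the Inception module fails.
import torch
import torch.nn as nn

x = torch.randn(1, 1, 1000)                        # (batch, channels, seq_len=1000)

branch_a = nn.Conv1d(1, 32, kernel_size=39, padding='valid')
branch_b = nn.Conv1d(1, 32, kernel_size=19, padding='valid')

print(branch_a(x).shape)                           # torch.Size([1, 32, 962])
print(branch_b(x).shape)                           # torch.Size([1, 32, 982])

# torch.cat([branch_a(x), branch_b(x)], dim=1)     # -> RuntimeError: sizes must match
```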
Is there a reason why this shouldn't be implemented, or is it an upcoming feature?
Hi @JoshuaGhost, the reason this is not supported by tsai is that the original InceptionTime model (https://github.com/hfawaz/InceptionTime/blob/470ce144c1ba43b421e72e1d216105db272e513f/classifiers/inception.py#L39) doesn't use it. It just uses padding="same".
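That said, if you want to experiment with other padding modes anyway, one rough workaround (an untested sketch; it assumes the built model contains standard nn.Conv1d layers and that PyTorch handles the 'same' padding for them) is to flip padding_mode on the convolutions after the model has been constructed:

```python
# Untested sketch: build an InceptionTimePlus model and switch every nn.Conv1d
# from the default 'zeros' padding to reflection padding. Whether this actually
# changes the behavior depends on how tsai applies its 'same' padding
# internally, so verify the outputs before relying on it.
import torch.nn as nn
from tsai.models.InceptionTimePlus import InceptionTimePlus

model = InceptionTimePlus(c_in=3, c_out=2, seq_len=1000)

for m in model.modules():
    if isinstance(m, nn.Conv1d):
        m.padding_mode = 'reflect'
```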