
Exchange last layer in InceptionTime Model after pretraining task.

Open benehiebl opened this issue on Mar 28, 2024 • 0 comments

After a pretraining task, it is not possible to access and change the last layer of the InceptionTimePlus model via model.head[-1]. This is probably caused by nn.Sequential being applied twice: create_head already returns an nn.Sequential, which the constructor then wraps in another nn.Sequential inside the layers OrderedDict.
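
For illustration, a minimal sketch of the failing access (the constructor arguments are arbitrary placeholders):

    from tsai.models.InceptionTimePlus import InceptionTimePlus

    model = InceptionTimePlus(c_in=3, c_out=2, seq_len=50)
    # Expected: the final layer of the head (the LinBnDrop holding the Linear).
    # Actual: the entire inner nn.Sequential built by create_head, because the
    # head is wrapped in nn.Sequential twice.
    print(type(model.head[-1]))  # <class 'torch.nn.modules.container.Sequential'>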

The model:

  class InceptionTimePlus(nn.Sequential):
      def __init__(self, c_in, c_out, seq_len=None, nf=32, nb_filters=None,
                   flatten=False, concat_pool=False, fc_dropout=0., bn=False, y_range=None, custom_head=None, **kwargs):
          
          if nb_filters is not None: nf = nb_filters
          else: nf = ifnone(nf, nb_filters) # for compatibility
          backbone = InceptionBlockPlus(c_in, nf, **kwargs)
          
          #head
          self.head_nf = nf * 4
          self.c_out = c_out
          self.seq_len = seq_len
          if custom_head is not None: 
              if isinstance(custom_head, nn.Module): head = custom_head
              else: head = custom_head(self.head_nf, c_out, seq_len)
          else: head = self.create_head(self.head_nf, c_out, seq_len, flatten=flatten, concat_pool=concat_pool, 
                                        fc_dropout=fc_dropout, bn=bn, y_range=y_range)
              
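          # note: head is already an nn.Sequential (see create_head below),
          # so the wrapper below nests it one level deeper than expected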
          layers = OrderedDict([('backbone', nn.Sequential(backbone)), ('head', nn.Sequential(head))])
          super().__init__(layers)
          
      def create_head(self, nf, c_out, seq_len, flatten=False, concat_pool=False, fc_dropout=0., bn=False, y_range=None):
          if flatten: 
              nf *= seq_len
              layers = [Flatten()]
          else: 
              if concat_pool: nf *= 2
              layers = [GACP1d(1) if concat_pool else GAP1d(1)]
          layers += [LinBnDrop(nf, c_out, bn=bn, p=fc_dropout)]
          if y_range: layers += [SigmoidRange(*y_range)]
          return nn.Sequential(*layers)
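
Printing such a model makes the nesting visible; with the default head (flatten=False, concat_pool=False, no y_range) the relevant part of the module tree looks roughly like this (abbreviated):

    Sequential(
      (backbone): Sequential((0): InceptionBlockPlus(...))
      (head): Sequential(
        (0): Sequential(          # <- model.head[-1] returns this whole block
          (0): GAP1d(...)
          (1): LinBnDrop(...)     # <- the final Linear lives inside here
        )
      )
    )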

Only if I change the return of the create_head function to "return layers" and, correspondingly, build the layers dict as "layers = OrderedDict([('backbone', nn.Sequential(backbone)), ('head', nn.Sequential(*head))])", does it become possible to load a pretrained InceptionTimePlus model and, e.g., replace the last Linear inside the LinBnDrop layer: self.model.head[-1][1] = torch.nn.Linear(last_layer.in_features, my_out_channels)
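
For reference, a sketch of the two changes as fragments keyed to the excerpt above (not runnable on their own), followed by the layer swap; last_layer and my_out_channels are placeholders, and the [1] index depends on which of bn/fc_dropout are enabled inside LinBnDrop:

    # In create_head: return the plain list of layers instead of wrapping it here
    return layers  # was: return nn.Sequential(*layers)

    # In __init__: unpack the list so the head becomes a single flat nn.Sequential
    layers = OrderedDict([('backbone', nn.Sequential(backbone)),
                          ('head', nn.Sequential(*head))])

    # After loading pretrained weights, the final Linear can then be replaced:
    last_layer = model.head[-1][1]
    model.head[-1][1] = torch.nn.Linear(last_layer.in_features, my_out_channels)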

benehiebl • Mar 28 '24 16:03