
Length of PyTorch input dimensions - returns NoneType

kevin-mahon opened this issue 2 years ago • 4 comments

Hello,

I'm pretty new to GitHub and I only started using hls4ml today, but I hit a problem when converting my model:

    self.module = nn.Sequential(
        nn.Conv2d(in_channels=in_channels, out_channels=32, kernel_size=5, stride=2, padding=0),  # 90x160 --> 43x78
        nn.ReLU(),
        nn.Conv2d(in_channels=32, out_channels=64, kernel_size=3, stride=2, padding=0),   # --> 21x38
        nn.ReLU(),
        nn.Conv2d(in_channels=64, out_channels=96, kernel_size=3, stride=2, padding=0),   # --> 10x18
        nn.ReLU(),
        nn.Conv2d(in_channels=96, out_channels=128, kernel_size=3, stride=2, padding=0),  # --> 4x8
        nn.ReLU(),
        nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=2, padding=0), # --> 1x3
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(768, 128),  # 256 * 1 * 3 = 768
        nn.ReLU(),
        nn.Linear(128, 64),
        nn.ReLU(),
        nn.Linear(64, 16),
        nn.ReLU(),
        nn.Linear(16, 2),
    )

Using:

    hlsmodel = hls4ml.converters.convert_from_pytorch_model(model, (1, 90, 160), hls_config=config)

I was getting a TypeError saying a NoneType object is not iterable. I traced it to this code in hls4ml/converters/utils.py (line 14):

    elif data_format.lower() == 'channels_first':
        if len(input_shape) == 2:  # 1D, (n_filt, n_in)
            return (input_shape[1], input_shape[0])
        elif len(input_shape) == 3:  # 2D, (n_filt, in_height, in_width)
            return (input_shape[1], input_shape[2], input_shape[0])

I just added the lines:

        elif len(input_shape) == 4:  # 2D w/ batch size, (n_batch, n_filt, in_height, in_width)
            return (input_shape[2], input_shape[3], input_shape[1])

And it worked, returning the HLS model:

    Input Shape: [[1, 90, 160]]
    [1, 90, 160]
    Layer name: module.0, layer type: Conv2D, input shape: [[1, 90, 160]]
    Layer name: module.1, layer type: Activation, input shape: [[1, 32, 43, 78]]
    [1, 32, 43, 78]
    Layer name: module.2, layer type: Conv2D, input shape: [[1, 32, 43, 78]]
    Layer name: module.3, layer type: Activation, input shape: [[1, 64, 21, 38]]
    [1, 64, 21, 38]
    Layer name: module.4, layer type: Conv2D, input shape: [[1, 64, 21, 38]]
    Layer name: module.5, layer type: Activation, input shape: [[1, 96, 10, 18]]
    [1, 96, 10, 18]
    Layer name: module.6, layer type: Conv2D, input shape: [[1, 96, 10, 18]]
    Layer name: module.7, layer type: Activation, input shape: [[1, 128, 4, 8]]
    [1, 128, 4, 8]
    Layer name: module.8, layer type: Conv2D, input shape: [[1, 128, 4, 8]]
    Layer name: module.9, layer type: Activation, input shape: [[1, 256, 1, 3]]
    Layer name: module.11, layer type: Dense, input shape: [[1, 256, 1, 3]]
    Layer name: module.12, layer type: Activation, input shape: [[1, 128]]
    Layer name: module.13, layer type: Dense, input shape: [[1, 128]]
    Layer name: module.14, layer type: Activation, input shape: [[1, 64]]
    Layer name: module.15, layer type: Dense, input shape: [[1, 64]]
    Layer name: module.16, layer type: Activation, input shape: [[1, 16]]
    Layer name: module.17, layer type: Dense, input shape: [[1, 16]]
    Creating HLS model
    <hls4ml.model.hls_model.HLSModel object at 0x7f58663b0d30>

The original code didn't take the batch dimension into account: PyTorch inputs are [Batch Size, Channels, Height, Width]. I'm not sure whether this was a bug or an intended behavior that I used incorrectly, so I would appreciate some thoughts on it. Again, I'm pretty new to GitHub and software development.
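
A quick standalone check of the layout PyTorch expects (not part of the fix, just confirming the dimension order):

    import torch

    # PyTorch conv inputs are (batch, channels, height, width)
    x = torch.randn(1, 1, 90, 160)  # batch of one 1-channel 90x160 image
    print(x.shape)  # torch.Size([1, 1, 90, 160])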

kevin-mahon · Apr 14 '22

Hi,

The batch size is often stripped. The issue may be that for PyTorch (as for ONNX) input_shape[0] is generally not None but 1. I think line 4 needs to be modified to take both cases into account.
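
A minimal sketch of what handling both cases might look like, adapting the snippet above (untested):

    elif data_format.lower() == 'channels_first':
        # Drop a leading batch dimension whether it is None (Keras) or 1 (PyTorch/ONNX)
        if len(input_shape) == 4 and input_shape[0] in (None, 1):
            input_shape = input_shape[1:]
        if len(input_shape) == 2:  # 1D, (n_filt, n_in)
            return (input_shape[1], input_shape[0])
        elif len(input_shape) == 3:  # 2D, (n_filt, in_height, in_width)
            return (input_shape[1], input_shape[2], input_shape[0])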

Jovan

jmitrevs · Apr 14 '22

Hi @kevin-mahon, thanks for sharing. I am also facing a PyTorch conversion issue. The error looks like this:

    TypeError: convert_from_pytorch_model() missing 1 required positional argument: 'input_shape'

Would you mind sharing your solution? Thanks

lloo099 · Apr 24 '22

@lloo099 Hello, I passed (channels, height, width) as the input shape in these lines:

    config = hls4ml.utils.config_from_pytorch_model(model, granularity='model')
    hlsmodel = hls4ml.converters.convert_from_pytorch_model(model, (1, 90, 160), hls_config=config)
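
A fuller sketch of that call pattern with a small placeholder model (the layer sizes are illustrative, chosen to match a (1, 90, 160) input, not the exact network from above):

    import torch.nn as nn
    import hls4ml

    # Placeholder model: Conv2d(1, 32, 5, stride=2) maps a 1x90x160 input
    # to 32x43x78, so the Linear layer takes 32 * 43 * 78 inputs.
    model = nn.Sequential(
        nn.Conv2d(in_channels=1, out_channels=32, kernel_size=5, stride=2),
        nn.ReLU(),
        nn.Flatten(),
        nn.Linear(32 * 43 * 78, 2),
    )

    # input_shape is (channels, height, width) -- no batch dimension
    config = hls4ml.utils.config_from_pytorch_model(model, granularity='model')
    hls_model = hls4ml.converters.convert_from_pytorch_model(
        model, (1, 90, 160), hls_config=config
    )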

kevin-mahon · Apr 26 '22

Thanks a lot. But if you try to use these commands:

    config = hls4ml.utils.config_from_pytorch_model('three_layer_model.pt')
    hlsmodel = hls4ml.converters.pytorch_to_hls(config)

it needs the input shape to be specified in the config, but I used the template from here.

If you can run it successfully, could you tell me how? Thanks again

lloo099 · Apr 27 '22
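
For anyone hitting the same question: judging from the error, the low-level pytorch_to_hls entry point reads everything, including the input shape, from the config dict itself. A rough sketch of what such a config might look like (the PytorchModel and InputShape key names are assumptions about this hls4ml version's converter schema, not confirmed in this thread):

    import hls4ml

    # Hypothetical config for the low-level pytorch_to_hls entry point.
    # Key names are assumptions and may differ between hls4ml versions.
    config = {
        'PytorchModel': 'three_layer_model.pt',  # path to the saved model
        'InputShape': [1, 28, 28],               # (channels, height, width)
        'OutputDir': 'my-hls-test',
        'ProjectName': 'myproject',
        'HLSConfig': {'Model': {'Precision': 'ap_fixed<16,6>', 'ReuseFactor': 1}},
    }

    hls_model = hls4ml.converters.pytorch_to_hls(config)

The higher-level convert_from_pytorch_model wrapper shown earlier in the thread takes the model and input shape as arguments instead, which is likely the easier path.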