
infer_type fails with Tuple index out of range error.

Open sheiksadique opened this issue 2 years ago • 9 comments

When trying to generate the NIR graph in paper/02_cnn/mint_sinabs_to_nir.py, the script fails while saving the graph.

The change that needed to be accommodated on the sinabs end was the new input_shape parameter; I added the parameter and set it to None.

sinabs version: latest commit on the dev/nir branch. sinabs-dynapcnn: from pip. (NOTE: these two packages need to be installed fully (`pip install .`, not `pip install -e .`) and do not work with a dev install.)

NIRTorch: latest commit on main branch. nir: latest commit on paper branch.

I also tried running just the infer_types method and ran into the same error message, so I believe this is the root of the problem.

sheiksadique avatar Oct 18 '23 09:10 sheiksadique

I see that the input type for the input node is (1, 2, 32, 32), so the batch dimension is appearing there. I assume it is not supposed to be?

sheiksadique avatar Oct 18 '23 10:10 sheiksadique

Yeah, the batch dimension shouldn't be there

stevenabreu7 avatar Oct 18 '23 10:10 stevenabreu7

Could the batch dimension be related to the batch dimension in the sample data? On line 40 it's torch.rand((1, 2, 34, 34)).

Jegp avatar Oct 18 '23 11:10 Jegp

Torch models still expect input with the batch dimension included; it should be stripped only at the conversion stage to NIR.
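For illustration (a sketch of the intent, not actual sinabs code): the torch module is called with the batched sample, but the type stored on the NIR node would drop the leading batch dimension.

```python
# Hypothetical sketch: the module's forward pass sees the batched sample,
# while the NIR input type recorded for the node is batch-free.
batched_shape = (1, 2, 34, 34)      # sample data shape, batch dim first
nir_input_type = batched_shape[1:]  # (2, 34, 34) is what NIR should store
```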

sheiksadique avatar Oct 18 '23 11:10 sheiksadique

Is that true? This code works well for me:

```python
import torch

# nn.Linear accepts unbatched 1-D input of shape (in_features,)
torch.nn.Linear(1, 2)(torch.zeros(1))
```

Jegp avatar Oct 19 '23 01:10 Jegp

I get the same error when running infer_type() on an NIRGraph generated by snntorch.export_to_nir() for an input of the following size: [1, 150, 2, 32, 32] including the batch size dimension. Is there a current fix for this?

SirineArfa avatar Apr 03 '24 12:04 SirineArfa

> I get the same error when running infer_type() on an NIRGraph generated by snntorch.export_to_nir() for an input of the following size: [1, 150, 2, 32, 32] including the batch size dimension. Is there a current fix for this?

I found a fix for this: adjust (if necessary, and only in case of a mismatch) the dimensions of padding, dilation, and stride in ir._calculate_conv_output() to match the dimensionality of input_shape.
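The kind of dimension matching described could look roughly like this (a hypothetical standalone sketch, not nir's actual _calculate_conv_output implementation): broadcast scalar padding/stride/dilation to the number of spatial dimensions in input_shape before applying the usual conv output formula.

```python
def conv_output_shape(input_shape, kernel_size, stride=1, padding=0, dilation=1):
    """Hypothetical sketch: compute the conv output spatial shape,
    broadcasting any scalar parameter to len(input_shape) so the
    dimensionalities always match."""
    ndim = len(input_shape)

    def expand(p):
        # Scalars become per-dimension tuples, e.g. 1 -> (1, 1) for 2-D input
        return (p,) * ndim if isinstance(p, int) else tuple(p)

    k, s, p, d = map(expand, (kernel_size, stride, padding, dilation))
    return tuple(
        (i + 2 * p_i - d_i * (k_i - 1) - 1) // s_i + 1
        for i, k_i, s_i, p_i, d_i in zip(input_shape, k, s, p, d)
    )
```

With this, a scalar padding of 1 on a (32, 32) input no longer triggers an index error, because it is expanded to (1, 1) first.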

SirineArfa avatar Apr 04 '24 13:04 SirineArfa

To re-open this, do we agree that all NIR graphs should have the batch dimension removed? Or is this something that should go into the newly added metadata field?

We could also add a helper method to add/remove the batch dimension from a NIR graph (it should not be too difficult; essentially apply something like unsqueeze(0) to all nodes in the graph).

Thoughts? @Jegp @SirineArfa @matjobst (and anyone else)

stevenabreu7 avatar Apr 24 '24 07:04 stevenabreu7

Happy to put that in the spec. Ideally, the graphs should be independent of batches, IMO; shouldn't batches be independent of the computation? We're not doing any batch normalization in NIR, for instance.

Jegp avatar Apr 24 '24 15:04 Jegp