infer_type fails with Tuple index out of range error.
When trying to generate the NIR graph in paper/02_cnn/mint_sinabs_to_nir.py, the script fails when saving the graph.
The change that needed to be accommodated on the sinabs end was the addition of the input_shape parameter; I added the parameter and set it to None.
sinabs version: latest commit on the dev/nir branch.
sinabs-dynapcnn: from pip.
(NOTE: These two packages need to be installed fully (pip install ., not pip install -e .) and do not work with a dev install.)
NIRTorch: latest commit on the main branch.
nir: latest commit on the paper branch.
I also tried running just the infer_types method and ran into the same error message, so I believe this is the root of the problem.
I see that the input type for the input node is (1, 2, 32, 32), so the batch dimension is appearing there. I assume it is not supposed to be there?
Yeah, the batch dimension shouldn't be there.
Could the batch dimension be related to the batch dimension in the sample data? On line 40 of the script it's torch.rand((1, 2, 34, 34)).
Torch models still expect input with the batch dimension included; it should only be dropped at the conversion stage to NIR.
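A minimal illustration of that split, using the sample shape quoted above (pure torch, nothing sinabs-specific assumed):

import torch

sample = torch.rand((1, 2, 34, 34))  # (batch, channels, height, width) for the torch forward pass
nir_input_type = tuple(sample.shape[1:])  # (2, 34, 34): the batch-free shape NIR should record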
Is that true? This code works well for me:
import torch
# Runs fine: nn.Linear only requires the last dimension to equal in_features,
# so an unbatched 1-D tensor is accepted
torch.nn.Linear(1, 2)(torch.zeros(1))
I get the same error when running infer_type() on an NIRGraph generated by snntorch.export_to_nir() for an input of the following size: [1, 150, 2, 32, 32] including the batch size dimension. Is there a current fix for this?
I found a workaround for this: adjust (if necessary, and only in case of a mismatch) the dimensions of padding, dilation, and stride in ir._calculate_conv_output() to match the dimensionality of input_shape.
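For illustration, here is a standalone sketch of the kind of adjustment described, written against the standard conv output formula; the function name and signature are hypothetical, not the actual ir._calculate_conv_output():

def conv_output_shape(input_shape, kernel_size, stride, padding, dilation):
    # Apply the conv arithmetic only to the trailing spatial dims, so a
    # leading batch (or time) dim cannot cause a tuple index mismatch
    ndim = len(kernel_size)
    spatial = input_shape[-ndim:]
    return tuple(
        (s + 2 * p - d * (k - 1) - 1) // st + 1
        for s, k, st, p, d in zip(spatial, kernel_size, stride, padding, dilation)
    )

# e.g. a 3x3 conv with stride 1, padding 1, dilation 1 over the (1, 2, 32, 32) input:
conv_output_shape((1, 2, 32, 32), (3, 3), (1, 1), (1, 1), (1, 1))  # -> (32, 32)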
To re-open this, do we agree that all NIR graphs should have the batch dimension removed? Or is this something that should go into the newly added metadata field?
We could also add a helper method to add/remove the batch dimension from a NIR graph (should not be too difficult: simply apply something like unsqueeze(0) to the types of all nodes in the graph; see the rough sketch below).
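A rough sketch of such a helper, assuming nodes expose input_type/output_type dicts of shape arrays as in the nir Python package (the attribute names here are assumptions, not a confirmed API):

import numpy as np

def strip_batch_dim(graph):
    # Hypothetical helper: drop the leading (batch) entry from every recorded
    # input/output type; the reverse (adding one back) would prepend a 1 via
    # np.concatenate(([1], shape))
    for node in graph.nodes.values():
        for type_dict in (node.input_type, node.output_type):
            for key, shape in (type_dict or {}).items():
                shape = np.asarray(shape)
                if shape.size > 1:
                    type_dict[key] = shape[1:]
    return graph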
Thoughts? @Jegp @SirineArfa @matjobst (and anyone else)
Happy to put that in the spec. Ideally, the graphs should be independent of batches, IMO. Shouldn't batches be independent of the computation? We're not doing any batch normalization in NIR, for instance.