
'int' object has no attribute 'item' on conv shapes calculation

Open chanokin opened this issue 1 year ago • 4 comments

This line assumes that the result of the operation/indexing is going to be a tensor/array:

https://github.com/neuromorphs/NIR/blob/df2f1fc863557af1c318dfe20f9c3f2b6e5e54db/nir/ir/utils.py#L65

However, I'm not sure how, but I end up with lists here, and their elements don't have the item method. You could check whether the shape object is an ndarray/tensor, or whether it has the item attribute, before calling it; otherwise just grab the element directly?
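Something along these lines (a rough sketch with my own helper name, not the actual library code) is what I mean:

import numpy as np

def _maybe_item(value):
    # Hypothetical helper: only call .item() when the value actually
    # provides it (np.ndarray / torch.Tensor scalars); plain Python ints
    # coming out of a list or a torch.Size are returned unchanged.
    if hasattr(value, "item"):
        return value.item()
    return value

assert _maybe_item(np.array(3)) == 3
assert _maybe_item(3) == 3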

chanokin avatar Apr 22 '24 14:04 chanokin

Hi there, thanks a lot for catching this! Could you add a simple test input that reproduces the failure? And is the shape variable a list or a single int? I don't immediately see why it would be a list, since we make sure it is a np.array here:

https://github.com/neuromorphs/NIR/blob/df2f1fc863557af1c318dfe20f9c3f2b6e5e54db/nir/ir/utils.py#L100

stevenabreu7 avatar Apr 24 '24 07:04 stevenabreu7

I just looked again and the input_shape is neither an int nor a list nor a tuple; it's a lovely torch.Size. ~~I would have thought that type would get bundled with the Sequence type in line 91?~~

I think this comes from https://github.com/neuromorphs/NIR/blob/df2f1fc863557af1c318dfe20f9c3f2b6e5e54db/nir/ir/utils.py#L53

when padding == "same", because there the indexed value is never converted to np.array([input_shape[i]]).
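For illustration (my own snippet, not code from the repo), indexing a torch.Size hands back a plain Python int, which is what later breaks on .item():

import numpy as np
import torch

input_shape = torch.Size((3, 3))
dim = input_shape[0]                 # plain int 3, has no .item()
dim = np.array([input_shape[0]])     # array([3]), .item() works downstream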

chanokin avatar Apr 24 '24 14:04 chanokin

I made a PR to fix this, see #101, could you please confirm that it works for you now? Then I will merge it. Thanks again for your help!

stevenabreu7 avatar Apr 28 '24 08:04 stevenabreu7

Hi, sadly, it's still no good.

Here's a minimal test:

from nir.ir.graph import NIRGraph
from nir.ir.conv import Conv2d
from nir.ir.neuron import LI

import torch


def test_nir_conv2d_same():
    # Create a NIR Network
    conv_weights = torch.tensor([[[1.0, 1.0, 1.0], [1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]])
    li_tau = torch.tensor([0.9, 0.8])
    li_r = torch.tensor([1.0, 1.0])
    li_v_leak = torch.tensor([0.0, 0.0])

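    # constructing this network still fails with: 'int' object has no attribute 'item'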
    nir_network = NIRGraph.from_list(
        Conv2d(
            input_shape=torch.Size((3, 3)),
            weight=conv_weights,
            stride=1,
            padding="same",
            dilation=1,
            groups=1,
            bias=torch.tensor([0.0] * 9),
        ),
        LI(li_tau, li_r, li_v_leak),
    )

I think that's because _index_tuple still returns a pure int in https://github.com/neuromorphs/NIR/blob/df2f1fc863557af1c318dfe20f9c3f2b6e5e54db/nir/ir/utils.py#L97-L98

I don't know what the design philosophy is here, but I assume you could return np.array([tuple[index]])?
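Roughly this (variable names are mine; just a sketch of the suggestion, not the real function):

import numpy as np

def _index_tuple(shape, index):
    # Always hand back a one-element NumPy array so callers can safely
    # call .item() on the result.
    return np.array([shape[index]])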

chanokin avatar May 01 '24 10:05 chanokin

Thank you so much for the insight @chanokin. You're indeed right that we should be returning a NumPy number. And there was an additional bug in that it assumed NumPy types and not PyTorch tensors or the like. I expanded the function and added some tests to make sure it works.
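Roughly, the idea is that indexing now behaves the same for plain tuples, NumPy arrays, and PyTorch tensors, e.g. (simplified sketch, not the exact merged code):

import numpy as np
import torch

def _index_tuple(value, index):
    # Simplified sketch: whatever container we were handed (tuple,
    # torch.Size, np.ndarray, torch.Tensor), return a one-element
    # NumPy array so .item() is always available.
    if isinstance(value, (np.ndarray, torch.Tensor)):
        return np.array([int(value[index])])
    return np.array([value[index]])

assert _index_tuple(torch.Size((3, 3)), 0).item() == 3
assert _index_tuple(torch.tensor([3, 3]), 1).item() == 3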

Please don't hesitate to reopen if anything is broken. And ping us if you'd like a release. Otherwise, I'll wait a few weeks until the next exciting new features drop ;-)

Jegp avatar Jun 01 '24 23:06 Jegp