
SLAYER CuBa Conv block with spikes

Open · alexggener opened this issue 2 years ago · 1 comment

Hi.

I'm trying to replicate the SpikeMS SNN (https://github.com/prgumd/SpikeMS), originally developed with SLAYERpytorch, using lava-dl. The SpikeMS implementation contains 6 convolutional layers that process spikes in the tensor format [n_channels, height, width, num_time_bins], as in the NMNIST dataset lava-dl provides.
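To make the layout concrete, here is a minimal sketch of an NMNIST-style event tensor (numpy is used only to illustrate shapes; the exact sizes here are illustrative, based on NMNIST's 34x34 resolution and 2 polarity channels):

```python
import numpy as np

# Illustrative NMNIST-style event tensor in the layout
# [n_channels, height, width, num_time_bins]
n_channels, height, width, num_time_bins = 2, 34, 34, 300
event_tensor = np.zeros((n_channels, height, width, num_time_bins))

print(event_tensor.shape)  # (2, 34, 34, 300)
```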

My current SNN implementation is the following:

self.blocks = torch.nn.ModuleList([
    slayer.block.cuba.Conv(neuron_params_drop_conv1,  2, 16, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
    slayer.block.cuba.Conv(neuron_params_drop_conv1, 16, 32, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
    slayer.block.cuba.Conv(neuron_params_drop_conv1, 32, 64, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
    slayer.block.cuba.Conv(neuron_params_drop_conv1, 64, 32, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
    slayer.block.cuba.Conv(neuron_params_drop_conv1, 32, 16, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
    slayer.block.cuba.Conv(neuron_params_drop_conv1, 16,  2, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
])

My input spike tensor is input_spikes = Tensor[batch, n_channels*height*width, num_time_bins]. Of course this does not work, since the dimensions of input_spikes and the first layer do not match, so I tried to add a dummy Dense layer as follows:

self.blocks = torch.nn.ModuleList([
    # Input dummy layer
    slayer.block.cuba.Dense(neuron_input_params_drop, 2*144*256, 16*2*3*3),  # channels=2, height=144, width=256
    # Autoencoder layers
    slayer.block.cuba.Conv(neuron_params_drop_conv1,  2, 16, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
    slayer.block.cuba.Conv(neuron_params_drop_conv2, 16, 32, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
    slayer.block.cuba.Conv(neuron_params_drop_conv3, 32, 64, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
    slayer.block.cuba.Conv(neuron_params_drop_conv4, 64, 32, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
    slayer.block.cuba.Conv(neuron_params_drop_conv5, 32, 16, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
    slayer.block.cuba.Conv(neuron_params_drop_conv6, 16,  2, 3, 2, 0, dilation=1, groups=1, weight_scale=1),
])

However, the dimensions do not fit between the Dense layer and the first Conv layer.
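To make the mismatch concrete, here is a shape-only sketch (numpy, shapes only). It assumes, as I understand it, that lava-dl Dense blocks produce a rank-3 output [batch, out_features, time] while Conv blocks expect a rank-5 input [batch, channels, height, width, time]:

```python
import numpy as np

# Assumed Dense block output layout: [batch, out_features, time]
B, T = 1, 10
dense_out = np.zeros((B, 16 * 2 * 3 * 3, T))  # (1, 288, 10), rank 3

# Assumed Conv block input layout: [batch, channels, height, width, time]
conv_input_shape = (B, 2, 144, 256, T)        # rank 5

print(dense_out.ndim, len(conv_input_shape))  # 3 5 -- the ranks don't match
```

So a plain Dense layer can't act as an adapter here: even if the feature count lined up, the tensor ranks would not.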

My question is the following: how does slayer manage spike tensors with a time dimension in Conv blocks, and what is the proper way to prepare input spike tensors for Conv blocks with slayer?
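What I would expect (but am not sure about) is that a plain reshape back to a 5-D tensor is enough. This sketch uses numpy for the shapes only and assumes the input was flattened in channel, height, width order, with my dimensions (channels=2, height=144, width=256):

```python
import numpy as np

# Flattened layout described above: [batch, n_channels*height*width, num_time_bins]
B, C, H, W, T = 1, 2, 144, 256, 10
flat_spikes = np.zeros((B, C * H * W, T))

# Hypothetical reshape to the 5-D layout [batch, channels, height, width, time]
# that I assume the Conv blocks expect.
conv_spikes = flat_spikes.reshape(B, C, H, W, T)

print(conv_spikes.shape)  # (1, 2, 144, 256, 10)
```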

Proposal: add a section to the documentation explaining how tensors are managed by each block and what the proper input and output shapes are for each.

Thanks a million in advance,

Alex.

Objective of issue: Improve documentation about the use of CuBa blocks

Lava DL version:

  • [ ] 0.3.0 (feature release)
  • [ ] 0.2.1 (bug fixes)
  • [x] 0.2.0 (current version)
  • [ ] 0.1.2

Lava version:

  • [ ] 0.4.0 (feature release)
  • [ ] 0.3.1 (bug fixes)
  • [x] 0.3.0 (current version)
  • [ ] 0.2.0
  • [ ] 0.1.2

I'm submitting a ...

  • [ ] bug report
  • [x] feature request
  • [x] documentation request

alexggener · May 17, 2022

Refer here: https://github.com/lava-nc/lava-dl/discussions/60#discussioncomment-2769162

bamsumit · May 17, 2022