Relation between this repository and the original C++ implementation
Does this PyTorch implementation rely on the original C++ version, or is it an independent re-implementation in PyTorch?
Hi @zhangsen-hit, this implementation uses some of the accelerated components from the original C++ implementation, and it is more feature-rich. You might want to look at Lava-DL SLAYER for an even more feature-rich version.
Thank you very much! I have noticed that Lava-DL SLAYER provides more neuron models and other useful features, which is highly beneficial.
Here I have another question. In both this repository and Lava-DL SLAYER, the input and output tensors are in [NCHWT] or [NCT] format, where 'T' is the time dimension and is placed last. So during forward propagation, is the computation performed layer by layer rather than time step by time step? Can we obtain the results for individual time steps before the entire forward pass finishes?
@zhangsen-hit slayerPyTorch computes the output of each layer for all the time steps at once. Lava-dl SLAYER allows both options: by default it calculates the output for all the time steps at once, but all neuron models have a persistent_mode flag. When you set it, you can execute the entire network one time step at a time if needed.
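A minimal sketch of the default all-time-steps-at-once mode, assuming the CUBA Dense block API shown in the lava-dl tutorials (the layer sizes and neuron parameters here are illustrative, not prescriptive):

```python
import torch
import lava.lib.dl.slayer as slayer

# Illustrative CUBA LIF neuron parameters.
neuron_params = {
    'threshold': 1.25,
    'current_decay': 0.25,
    'voltage_decay': 0.03,
}

# Dense block: 200 input neurons -> 256 output neurons.
fc = slayer.block.cuba.Dense(neuron_params, 200, 256)

# Input in [N, C, T] format: batch of 8, 200 channels, 100 time steps.
x = (torch.rand(8, 200, 100) > 0.9).float()

# A single call computes this layer's output for all 100 time steps at once;
# a deeper network would then hand the full [N, C, T] tensor to the next layer.
y = fc(x)
print(y.shape)  # torch.Size([8, 256, 100])
```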
I could not find the term persistent_mode in the source code, but I did find persistent_state. Are you referring to persistent_state?
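For reference, this is the kind of step-by-step loop I have in mind: a sketch assuming persistent_state=True can be passed through neuron_params and that it carries the current/voltage state across forward calls (both are my reading of the source, not confirmed behavior):

```python
import torch
import lava.lib.dl.slayer as slayer

neuron_params = {
    'threshold': 1.25,
    'current_decay': 0.25,
    'voltage_decay': 0.03,
    'persistent_state': True,  # assumed to be forwarded to the neuron constructor
}
fc = slayer.block.cuba.Dense(neuron_params, 200, 256)

x = (torch.rand(8, 200, 100) > 0.9).float()  # [N, C, T]

outputs = []
for t in range(x.shape[-1]):
    # Feed one time-step slice [N, C, 1]; with persistent state, the membrane
    # dynamics should carry over between iterations.
    y_t = fc(x[..., t:t + 1])
    outputs.append(y_t)          # intermediate results available per step

y = torch.cat(outputs, dim=-1)   # [N, 256, 100]; should match the one-shot forward
```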
Furthermore, upon reviewing the lava-dl source code, I did not find the SRM neuron described in the original SLAYER paper and implemented in slayerPytorch. Could you please clarify why this neuron type was dropped? Or was the name 'SRM' replaced with another term?