Yanqi Chen(陈彦骐)
Add models with spike-triggered adaptation.

| Origin Model | Adaptive Model |
| ------------ | -------------- |
| LIF          | Adaptive LIF   |
| QIF          | Izhikevich     |
| EIF          | ...
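For reference, a minimal discrete-time sketch of what spike-triggered adaptation means for the Adaptive LIF row above. All names and constants here are illustrative, not SpikingJelly's API:

```python
def adaptive_lif_step(v, w, x, tau=2.0, tau_w=10.0, b=0.2,
                      v_threshold=1.0, v_reset=0.0):
    """One step of an adaptive LIF neuron (illustrative sketch).

    v: membrane potential, w: adaptation current, x: input current.
    The adaptation current w opposes the input and is incremented by b
    whenever the neuron fires -- this is the spike-triggered adaptation.
    """
    v = v + (x - w - (v - v_reset)) / tau  # leaky integration
    w = w - w / tau_w                      # adaptation decays between spikes
    spike = v >= v_threshold
    if spike:
        v = v_reset
        w += b                             # spike-triggered increment
    return v, w, spike

# Drive with a constant input: firing slows down as w accumulates.
v = w = 0.0
for t in range(10):
    v, w, spike = adaptive_lif_step(v, w, x=2.0)
```

Under constant input the adaptation variable grows after each spike, so the firing rate decreases over time, which is the qualitative behavior that separates these models from their non-adaptive origins.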
The Python-version neurons in this framework implement the neuron state update with `nn.Module` rather than with `torch.autograd.Function` as in the LISNN code. Do these two approaches differ much in performance? It seems not, as far as I can tell. I remember that early SpikingFlow versions wrote the forward and backward passes separately in a similar way. @fangwei123456
Thanks for your advice! We realize that unsupervised learning rules such as STDP and other Hebbian-based rules are a hot area. They are on our roadmap, though it will not...
Duplicate of #161
The surrogate gradient only handles the non-differentiable gradient of the spike w.r.t. the membrane potential. It can be coupled with different learning (credit-assignment) rules such as STBP or SLAYER.
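As an illustration of that separation (a sketch, not SpikingJelly's actual implementation, and the class name is hypothetical), a sigmoid surrogate can be attached to the Heaviside spike function with a custom `torch.autograd.Function`; any credit-assignment rule then just consumes the resulting gradients:

```python
import torch

class SurrogateHeaviside(torch.autograd.Function):
    """Heaviside spike in the forward pass, sigmoid surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v, alpha=4.0):
        # v is the membrane potential minus the threshold
        ctx.save_for_backward(v)
        ctx.alpha = alpha
        return (v >= 0).to(v)

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Replace the undefined d(spike)/dv with the derivative of sigmoid(alpha * v)
        s = torch.sigmoid(ctx.alpha * v)
        return grad_output * ctx.alpha * s * (1 - s), None

v = torch.tensor([-0.5, 0.0, 0.3], requires_grad=True)
spike = SurrogateHeaviside.apply(v)
spike.sum().backward()  # v.grad now holds the surrogate gradients
```

The forward pass stays a hard threshold, so the spikes are still binary; only the backward pass is smoothed.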
Hi, this could be a simple example:

```python
import torch
from spikingjelly.clock_driven import encoding

encoder = encoding.WeightedPhaseEncoder(6)
x = torch.rand(2, 2)
# Eg: tensor([[0.4883, 0.7481],
#             [0.1386, 0.9809]])
# 0.4883 (10) =...
```
> Hello! Excuse me, can you run it successfully? Error: 'WeightedPhaseEncoder' object has no attribute 'phase'. Can you see what the error is?

I fixed it a moment earlier in...
> ```
> class NeuNorm(nn.Module):
>     def __init__(self, in_channels, height, width, k=0.9):
>         super().__init__()
>         self.x = 0
>         self.k0 = k
>         self.k1 = (1 - self.k0) / in_channels**2
> ...
> ```
To ensure backward compatibility, the default value of `v_rest` can be set to the same value as `v_reset`.
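Concretely, that default could look like this in a constructor (an illustrative sketch, not the actual SpikingJelly signature):

```python
class LIFNode:
    """Illustrative constructor only, showing the v_rest default."""

    def __init__(self, v_threshold=1.0, v_reset=0.0, v_rest=None):
        self.v_threshold = v_threshold
        self.v_reset = v_reset
        # When v_rest is not given, fall back to v_reset so that existing
        # code which never sets v_rest behaves exactly as before.
        self.v_rest = v_reset if v_rest is None else v_rest
```

Users who never pass `v_rest` get the old dynamics unchanged, while new code can set a resting potential that differs from the reset potential.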
Hi, each trial in this work resulted in a sparse model. Do you mean sharing a pretrained dense model or sparse model?