Spike-Element-Wise-ResNet
Can you provide the training hyperparameters for the DVS-CIFAR10 dataset?
You can get them at https://github.com/fangwei123456/Spike-Element-Wise-ResNet/blob/main/origin_logs/cifar10dvs/SEWResNet_ADD_T_16_T_train_None_SGD_lr_0.01_CosALR_64_amp/args.txt
This issue is also helpful: https://github.com/fangwei123456/Spike-Element-Wise-ResNet/issues/1
thank you
This is loris: https://github.com/neuromorphic-paris/loris
If you want to use the old version of SpikingJelly (SJ), you need to install it. The new version does not need it.
I recommend using the new version of SJ to avoid the cext neuron problem (refer to https://github.com/fangwei123456/spikingjelly/issues/46). In the new version, we use CuPy to implement the CUDA neuron, which avoids the cext neuron's compilation errors that caused trouble for users.
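For context, the computation such a CUDA/CuPy neuron kernel accelerates is the per-time-step LIF update. Below is a minimal pure-Python sketch of hard-reset LIF dynamics; the parameter values (tau, threshold, input current) are illustrative only, not the ones used in SEW-ResNet:

```python
def lif_step(v, x, tau=2.0, v_th=1.0, v_reset=0.0):
    """One LIF time step: charge, fire, hard reset."""
    # charge: membrane potential integrates the input with a leak toward v_reset
    v = v + (x - (v - v_reset)) / tau
    spike = 1.0 if v >= v_th else 0.0
    if spike:
        v = v_reset  # hard reset after firing
    return spike, v

# drive one neuron with a constant input current over 8 time steps
v = 0.0
spikes = []
for t in range(8):
    s, v = lif_step(v, 1.5)
    spikes.append(s)
```

With these toy values the neuron charges for one step and fires on the next, so the spike train alternates 0, 1, 0, 1, ... The PLIF neuron used in the paper additionally makes tau a learnable parameter.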
Thank you very much for your patient answer. Just now I had a problem with loris, and after I solved it I deleted the question (I felt a little stupid). I tried to install the latest version of SJ via "pip install spikingjelly" or "git clone&& cd&& python setup.py install", but I could not get past the loris problem that way. I worked around it by switching loris versions, and it compiled. I will continue trying to run SEW on my computer.
Try to run pip install spikingjelly -U
Thank you for your kind help. Something may be wrong: the best score I got is 72.5 (the score is 74.4 in your work). I used the latest code and SJ. Since cext is not supported, as you mentioned here (https://github.com/fangwei123456/Spike-Element-Wise-ResNet/issues/1#issuecomment-1041061312), I replaced cext.neuron.MultiStepParametricLIFNode with clock_driven.neuron.MultiStepLIFNode, as you did in https://github.com/fangwei123456/Spike-Element-Wise-ResNet/files/8076489/dvsgesture.zip. My parameters are the same as yours. I copied the latest epoch's message:
Namespace(T=16, T_max=64, T_train=None, amp=True, b=16, cnf='ADD', data_dir='/home/shb/datasets/CIFA10DVS', device='cuda:0', dts_cache='./dts_cache', epochs=64, gamma=0.1, j=4, lr=0.01, lr_scheduler='CosALR', model='SEWResNet', momentum=0.9, opt='SGD', out_dir='./logs', resume=None, step_size=32)
./logs/SEWResNet_ADD_T_16_T_train_None_SGD_lr_0.01_CosALR_64_amp
epoch=63, train_loss=0.0028808565140831088, train_acc=0.9997775800711743, test_loss=1.3776003908813, test_acc=0.719, max_test_acc=0.725, total_time=148.61892414093018, escape_time=2022-03-12 00:25:44
I just re-trained this network with the current version of SJ and got 73.8 acc@1. I still use the PLIF neuron. I think an accuracy of around 74 is acceptable.
python train.py -T 16 -data_dir /datasets/CIFAR10DVS/ -amp -lr 0.01 -cnf ADD -model SEWResNet -j 8
...
epoch=63, train_loss=0.0034258745053909003, train_acc=0.999443950177936, test_loss=1.4653774447441101, test_acc=0.722, max_test_acc=0.738, total_time=200.12506437301636, escape_time=2022-03-12 16:55:03
Here are logs and codes: cifar10dvs.zip
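As an aside, the Namespace(...) line these training logs print can be parsed back into a dict, which makes it easy to diff the hyperparameters of two runs. A small stdlib-only sketch; the log string below is a shortened, hypothetical copy of the one above:

```python
import ast
import re

log = ("Namespace(T=16, amp=True, b=16, cnf='ADD', lr=0.01, "
       "model='SEWResNet', resume=None)")

# pull the argument list out of the Namespace(...) wrapper, then
# split on commas that start a new `key=` pair and evaluate each
# value as a Python literal
body = re.match(r"Namespace\((.*)\)", log).group(1)
args = {}
for pair in re.split(r",\s*(?=\w+=)", body):
    key, value = pair.split("=", 1)
    args[key] = ast.literal_eval(value)
```

This assumes every value in the log line is a Python literal (ints, floats, strings, booleans, None), which holds for argparse defaults like the ones shown in this thread.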
I noticed that the original version of this repo has some stochastic behaviors, which may cause trouble when reproducing identical results. Refer to the bug found on 2021-12-10 in bugs.md. These stochastic behaviors are not even controlled by random seeds.
In the current version of SJ, you can try different random seeds and may get higher accuracy.
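To illustrate what seeding does and does not control: a fixed seed makes a given RNG stream reproducible, but it cannot remove nondeterminism that comes from outside that stream (e.g., non-deterministic CUDA kernels), which is the kind of stochastic behavior mentioned above. A stdlib-only sketch:

```python
import random

def run_trial(seed):
    """Draw a few pseudo-random numbers from a freshly seeded RNG."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(3)]

# the same seed reproduces exactly the same stream of draws ...
assert run_trial(42) == run_trial(42)
# ... while a different seed gives different draws
assert run_trial(42) != run_trial(43)
```

In a PyTorch training script you would additionally call torch.manual_seed(...) and set torch.backends.cudnn.deterministic = True, but even then some GPU operations remain non-deterministic, so small run-to-run accuracy differences like 72.5 vs. 73.8 are expected.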