hybrid-snn-conversion
SNN Retraining Issue
Hello.
I trained an ANN model on CIFAR10 using ann.py.
After that, I ran snn.py to train the SNN with STDB.
Converting the ANN to an SNN works fine. However, the accuracy keeps decreasing as training goes on. The results are the same no matter how small I set the learning rate, and even training the SNN with linear activation gives the same result.
How can I solve this problem? I would appreciate your help, since the SNN training is not making any progress.
Thank you.
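For context, a minimal sketch of what the conversion step amounts to, assuming a PyTorch SNN whose layer names match the ANN checkpoint (hypothetical helper, not the repo's exact snn.py logic): the ANN's convolutional and linear weights are copied into an SNN of the same topology, and only the neuron dynamics differ afterwards.

import torch

def load_ann_weights(snn_model, ann_checkpoint_path):
    # Copy the trained ANN weights into the SNN; the spiking behaviour
    # (thresholds, leak) is what changes at run time, not the weights.
    state = torch.load(ann_checkpoint_path, map_location='cpu')
    if isinstance(state, dict) and 'state_dict' in state:
        state = state['state_dict']            # some checkpoints wrap the weights
    missing, unexpected = snn_model.load_state_dict(state, strict=False)
    print('missing keys:', missing)            # sanity-check the name matching
    print('unexpected keys:', unexpected)
    return snn_model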
Can you provide some more details, like the architecture, number of timesteps, dataset, CNN accuracy, converted SNN accuracy, optimizer, and other hyperparameters?
-
ANN training
Script for training the ANN:
python ann.py --architecture VGG16 --learning_rate 1e-2 --epochs 100 --lr_interval '0.60 0.80 0.90' --lr_reduce 10 --dataset CIFAR10 --batch_size 64 --optimizer SGD --dropout 0.3
Accuracy of ANN: 91% (test accuracy)
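In case the schedule flags are unclear: a hedged sketch of how an lr_interval/lr_reduce pair like the one above is typically applied (with epochs=100 the learning rate is divided by 10 at epochs 60, 80 and 90; the exact code in ann.py may differ).

def adjust_learning_rate(optimizer, epoch, epochs=100,
                         lr_interval=(0.60, 0.80, 0.90), lr_reduce=10):
    # Divide the learning rate by lr_reduce whenever the current epoch hits
    # one of the fractional milestones (60, 80, 90 for epochs=100).
    milestones = [int(f * epochs) for f in lr_interval]
    if epoch in milestones:
        for group in optimizer.param_groups:
            group['lr'] /= lr_reduce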
-
SNN training (case 1 - large learning rate)
Script for training the SNN:
python snn.py --architecture VGG16 --learning_rate 1e-6 --epochs 100 --lr_interval '0.60 0.80 0.90' --lr_reduce 10 --dataset CIFAR10 --batch_size 50 --optimizer SGD --timesteps 100 --leak 1.0 --scaling_factor 0.7 --dropout 0.3 --kernel_size 3 --devices 0 --pretrained_ann './trained_models/ann/ann_vgg16_cifar10.pth' --log --activation STDB
Accuracy of Converted SNN: 89% (test accuracy)
Accuracy of Converted SNN after training: 10% (test accuracy)
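For readers following along, a minimal sketch of the neuron update the --timesteps and --leak flags control, assuming standard leaky integrate-and-fire dynamics (leak=1.0, as used here, reduces it to a pure integrate-and-fire neuron; the per-layer threshold is typically derived during conversion and scaled by scaling_factor). Not the repo's exact code.

import torch

def lif_step(mem, input_current, threshold, leak=1.0):
    # One timestep of LIF dynamics, repeated `timesteps` times per input.
    mem = leak * mem + input_current       # leak=1.0 -> pure integrate-and-fire
    spike = (mem >= threshold).float()     # fire where the membrane crosses threshold
    mem = mem - spike * threshold          # soft reset: subtract the threshold
    return spike, mem

mem = torch.zeros(4)
spike, mem = lif_step(mem, torch.full((4,), 0.5), threshold=1.0)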
-
SNN training (case 2 - small learning rate)
Script for training the SNN:
python snn.py --architecture VGG16 --learning_rate 1e-12 --epochs 100 --lr_interval '0.60 0.80 0.90' --lr_reduce 10 --dataset CIFAR10 --batch_size 50 --optimizer SGD --timesteps 100 --leak 1.0 --scaling_factor 0.7 --dropout 0.3 --kernel_size 3 --devices 0 --pretrained_ann './trained_models/ann/ann_vgg16_cifar10.pth' --log --activation STDB
Accuracy of Converted SNN: 89% (test accuracy)
Accuracy of Converted SNN after training: 70% (test accuracy)
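Whatever the root cause, a collapse like this is easier to debug if the converted model is evaluated before the first update and only better checkpoints overwrite it. A generic sketch, where model, train_one_epoch, evaluate and test_loader are hypothetical placeholders:

import torch

def finetune_keeping_best(model, train_one_epoch, evaluate, test_loader,
                          num_epochs=100, path='snn_best.pth'):
    # Evaluate the freshly converted SNN first so the 89% model is never lost,
    # then overwrite the checkpoint only when test accuracy improves.
    best_acc = evaluate(model, test_loader)
    torch.save(model.state_dict(), path)
    for epoch in range(num_epochs):
        train_one_epoch(model, epoch)
        acc = evaluate(model, test_loader)
        if acc > best_acc:
            best_acc = acc
            torch.save(model.state_dict(), path)
    return best_acc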
-
Accuracy does not improve even when the leak value is changed under the same conditions as the two SNN cases above, or when the activation is changed to linear.
You can try changing the activation to 'Linear' and the optimizer to 'Adam' for SNN training. Keep the learning rate at 1e-4.
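For context, the activation choice here refers to the surrogate gradient used for the non-differentiable spike in the backward pass. A minimal sketch of a linear (triangle-shaped) surrogate in PyTorch; the repo's 'Linear' activation may be implemented differently.

import torch

class LinearSpike(torch.autograd.Function):
    # Forward is the hard threshold; backward substitutes a derivative that
    # falls off linearly with distance from the threshold.
    @staticmethod
    def forward(ctx, mem, threshold=1.0):
        ctx.save_for_backward(mem)
        ctx.threshold = threshold
        return (mem >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        mem, = ctx.saved_tensors
        surrogate = torch.clamp(1.0 - torch.abs(mem - ctx.threshold), min=0.0)
        return grad_output * surrogate, None   # no gradient for the threshold

spikes = LinearSpike.apply(torch.randn(8, requires_grad=True), 1.0)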
I tried it!
The results are as follows.
-
ANN training
Script for training the ANN:
python ann.py --architecture VGG16 --learning_rate 1e-2 --epochs 100 --lr_interval '0.60 0.80 0.90' --lr_reduce 10 --dataset CIFAR10 --batch_size 64 --optimizer SGD --dropout 0.3
Accuracy of ANN: 91% (test accuracy)
-
SNN training
Script for training the SNN:
python snn.py --architecture VGG16 --learning_rate 1e-4 --epochs 100 --lr_interval '0.60 0.80 0.90' --lr_reduce 10 --dataset CIFAR10 --batch_size 50 --optimizer Adam --timesteps 100 --leak 1.0 --scaling_factor 0.7 --dropout 0.3 --kernel_size 3 --devices 0 --pretrained_ann './trained_models/ann/ann_vgg16_cifar10.pth' --log --activation Linear
Accuracy of Converted SNN after training (after 6 epochs): 13.7% (train accuracy)
Still, when training the SNN, its accuracy stays low. Is there any difference between the code you have and the code uploaded to GitHub? Any help in this situation would be greatly appreciated.
I am very curious why the accuracy of running snn.py alone is even better than running ann.py first and then running snn.py.
Is the problem specific to VGG16, or does it also occur with the other VGG architectures? Could someone maybe post the script for training a smaller network (e.g. VGG11 or even VGG5), first training an ANN and then converting it to an SNN, or directly training an SNN?
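Until someone posts a verified script, here is an untested guess, assuming --architecture also accepts VGG5 and that the checkpoint filename follows the same pattern as the VGG16 runs above (both are assumptions, not confirmed against the repo):
python ann.py --architecture VGG5 --learning_rate 1e-2 --epochs 100 --lr_interval '0.60 0.80 0.90' --lr_reduce 10 --dataset CIFAR10 --batch_size 64 --optimizer SGD --dropout 0.3
python snn.py --architecture VGG5 --learning_rate 1e-4 --epochs 100 --lr_interval '0.60 0.80 0.90' --lr_reduce 10 --dataset CIFAR10 --batch_size 50 --optimizer Adam --timesteps 100 --leak 1.0 --scaling_factor 0.7 --dropout 0.3 --kernel_size 3 --devices 0 --pretrained_ann './trained_models/ann/ann_vgg5_cifar10.pth' --log --activation Linear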