
Training Code

Open JominWink opened this issue 2 years ago • 9 comments

Hello, when I run the program, an error occurred: "AttributeError: Can't pickle local object 'SubPolicy.__init__.<locals>.<lambda>'". I don't know if you can help me solve it? Thank you very much!

JominWink avatar Mar 27 '22 05:03 JominWink

Sounds like an error from autoaugment.

A quick solution would be to avoid using autoaugment in your CIFAR data loader; you can set it to False. But it may not exactly reproduce the results from the README.

Or, if you want to keep autoaugment, can you paste your full log here? I am not able to see which line causes the error. Thanks.
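For reference, this kind of error typically comes from a lambda defined inside a class method: multiprocessing DataLoader workers must pickle the dataset, and local lambdas cannot be pickled. A minimal sketch of a picklable alternative using `functools.partial` with a module-level helper (the `SubPolicy` and `_scale` names here are a hypothetical illustration, not the repo's actual code):

```python
import pickle
from functools import partial

def _scale(magnitude, img):
    # Module-level function: picklable by reference, unlike a local lambda.
    return img * magnitude

class SubPolicy:
    def __init__(self, magnitude):
        # A lambda defined here is a local object and cannot be pickled:
        #   self.op = lambda img: img * magnitude
        #   -> AttributeError: Can't pickle local object ...
        # Binding the argument with functools.partial avoids the problem.
        self.op = partial(_scale, magnitude)

policy = SubPolicy(0.5)
data = pickle.dumps(policy)      # succeeds with the partial
restored = pickle.loads(data)
print(restored.op(2.0))          # -> 1.0
```

Setting `num_workers=0` in the DataLoader also sidesteps the error, at the cost of single-process data loading.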

yhhhli avatar Mar 27 '22 14:03 yhhhli

Hello, why is the accuracy of the SNN converted with BN alone only about 46% on average?

JominWink avatar Mar 29 '22 14:03 JominWink

Do you mean ANN trained with BN has low conversion accuracy?

We tried to analyze the difference between ANN w/ BN and ANN w/o BN, but we could not find any explicit difference: their activation distributions look similar. We can only say that the ANN w/ BN suffers more activation mismatch during conversion, which is why our calibration brings a larger improvement.
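For context, converting an ANN trained with BN usually starts by folding the BN statistics into the preceding layer's weights, so the converted SNN only sees a plain weight/bias pair. A minimal sketch of this standard folding step (a generic illustration with a hypothetical `fold_bn` helper, not the repository's actual code):

```python
import numpy as np

def fold_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold BatchNorm parameters into a preceding linear/conv layer.

    BN computes y = gamma * (w@x + b - mean) / sqrt(var + eps) + beta,
    which equals (scale * w) @ x + (scale * (b - mean) + beta)
    with scale = gamma / sqrt(var + eps).
    """
    scale = gamma / np.sqrt(var + eps)
    w_folded = w * scale[:, None]          # rescale each output channel
    b_folded = (b - mean) * scale + beta
    return w_folded, b_folded

# Check equivalence on random data.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 8)); b = rng.standard_normal(4)
gamma = rng.random(4) + 0.5; beta = rng.standard_normal(4)
mean = rng.standard_normal(4); var = rng.random(4) + 0.1
x = rng.standard_normal(8)

y_bn = gamma * (w @ x + b - mean) / np.sqrt(var + 1e-5) + beta
wf, bf = fold_bn(w, b, gamma, beta, mean, var)
print(np.allclose(wf @ x + bf, y_bn))  # -> True
```

The folding itself is exact; any accuracy gap appears only after the folded activations are discretized into spikes.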

yhhhli avatar Mar 29 '22 15:03 yhhhli

Yes, the effect of the Light and Advanced calibration is really obvious, and with UseBN the accuracy of Light and Advanced calibration is similar to the results in the paper. However, the ANN conversion accuracy can reach 86.111% without UseBN or any calibration, so the reason for the low conversion accuracy with UseBN is not clear.

JominWink avatar Mar 29 '22 15:03 JominWink

> The ANN conversion accuracy can reach 86.111% without useBN and any calibration, so the reason for the low conversion accuracy with UseBN is not clear.


Yes, the reason is unclear. We tried, but we could not figure it out. We just noticed that, before our paper, no one used an ANN w/ BN to do the conversion; we use our calibration to solve this problem, but the cause is indeed unclear. Sorry about this.

The underlying reason could be a potential research topic.

yhhhli avatar Mar 29 '22 15:03 yhhhli

Another comment: the ANN w/ BN has lower conversion accuracy at early time steps, but at higher time steps it performs better than the ANN w/o BN. So I'm sure that studying ANN-SNN conversion w/ BN could be promising.

yhhhli avatar Mar 29 '22 15:03 yhhhli

Ok, the problem I first raised was that the lambda could not be serialized, but when I ran the code the other day it seemed to resolve itself, so I can continue to discuss with you now. As for the influence of ANN-to-SNN encoding on accuracy, what do you think about the encoding scheme? It is also not very clear to me how the constant encoding used in this paper turns inputs into pulses.

JominWink avatar Mar 29 '22 15:03 JominWink

Ok, so regarding longer time steps, I'm going to try increasing the time step to see what happens.

JominWink avatar Mar 29 '22 15:03 JominWink

Maybe you can try Poisson encoding with calibration, I think it can also improve plain Poisson encoding.

Personally speaking, I am not very fond of Poisson encoding. First, it is very inefficient on real hardware because you have to generate many random numbers; second, people tend to convert it to ternary pulses (+1, -1, 0), which drops performance too much.
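For illustration, plain Poisson (Bernoulli) rate coding draws an independent spike at each time step with probability equal to the normalized input intensity, which is exactly why it needs a fresh random number per neuron per step. A minimal NumPy sketch, assuming inputs normalized to [0, 1] (`poisson_encode` is a hypothetical helper, not part of this repo):

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_encode(x, T):
    """Rate-code intensities in [0, 1] as T binary spike frames.

    Each time step fires independently with probability x, so the
    empirical firing rate over T steps approximates the input value.
    Output shape: [T, *x.shape].
    """
    return (rng.random((T, *x.shape)) < x).astype(np.float32)

x = rng.random((3, 8, 8))            # a fake 3-channel 8x8 image
spikes = poisson_encode(x, T=500)
rate = spikes.mean(axis=0)           # empirical rate approximates x
print(spikes.shape, float(np.abs(rate - x).mean()))
```

Note the `rng.random((T, *x.shape))` call: one random draw per neuron per time step, which is the hardware-efficiency cost mentioned above. Constant (direct) encoding instead feeds the analog intensity as an identical input current at every step, needing no randomness.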

yhhhli avatar Apr 03 '22 00:04 yhhhli