keras-spiking
CNN example
Hi community, excellent work. I am curious if this activation layer can be applied in a CNN network. Is there any example similar to this https://www.nengo.ai/keras-spiking/examples/spiking-fashion-mnist.html but using CNNs?
The input to the network would be of (T,28,28,1) dimensionality if we follow the previous example. How would the network have to be modified?
Best regards
Yes, the spiking activation layers can be used with any other layer types. In the example you linked you could replace the Dense layers with Conv2D layers (and remove the Flatten/Reshape layers, as they are no longer needed). Or you could just take any Keras CNN example and use SpikingActivation layers instead of the standard activations.
Okay, thanks for the quick answer. My question is the following: I have implemented a CNN with Conv2D layers wrapped in TimeDistributed, each followed by a pooling layer, also wrapped in TimeDistributed. As you can see, my image is 64 x 50 and it is repeated 10 times along the time axis, as explained in the MNIST example. How do I do the last pooling before the classification layer? I have solved this with a Flatten layer, but I don't think that is the right thing to do. Any suggestions?
Find the architecture below:
import tensorflow as tf
import keras_spiking
from tensorflow.keras.layers import (
    Conv2D, Dense, Flatten, MaxPool2D, TimeDistributed,
)

model = tf.keras.Sequential(
    [
        TimeDistributed(Conv2D(32, 3, padding='same'),
                        input_shape=(10, 64, 50, 1)),
        keras_spiking.SpikingActivation("relu", spiking_aware_training=False),
        TimeDistributed(MaxPool2D(pool_size=2)),
        TimeDistributed(Conv2D(64, 3, padding='same')),
        keras_spiking.SpikingActivation("relu", spiking_aware_training=False),
        TimeDistributed(MaxPool2D(pool_size=2)),
        TimeDistributed(Conv2D(128, 3, padding='same')),
        keras_spiking.SpikingActivation("relu", spiking_aware_training=False),
        TimeDistributed(MaxPool2D(pool_size=2)),
        TimeDistributed(Conv2D(256, 3, padding='same')),
        keras_spiking.SpikingActivation("relu", spiking_aware_training=False),
        TimeDistributed(MaxPool2D(pool_size=2)),
        Flatten(),  # flattens time and space together
        Dense(10),
    ]
)
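For what it's worth, the plain Flatten at the end behaves differently from the TimeDistributed layers above it: it collapses the time axis together with the spatial axes, while TimeDistributed(Flatten()) would flatten each timestep separately and keep a (batch, timesteps, features) output. A minimal numpy sketch of the difference (the array here just stands in for the pooled feature maps; the sizes are illustrative):

```python
import numpy as np

# stand-in for pooled feature maps: (batch, timesteps, height, width, channels)
x = np.zeros((8, 10, 4, 3, 256))

# plain Flatten collapses everything after the batch axis,
# so the time dimension disappears
flat = x.reshape(x.shape[0], -1)                 # (8, 30720)

# TimeDistributed(Flatten()) flattens each timestep separately,
# keeping the time axis intact
td_flat = x.reshape(x.shape[0], x.shape[1], -1)  # (8, 10, 3072)
```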
Hi, I managed to train the CNN I explained in the previous comment. However, when launching check_output I had to change this line:
if has_global_average_pooling:
# check test accuracy using average output over all timesteps
predictions = np.argmax(output.mean(axis=1), axis=-1)
else:
# check test accuracy using output from only the last timestep
# predictions = np.argmax(output[:, -1], axis=-1) # ORIGINAL
predictions = np.argmax(output, axis=-1) # MY LINE
output had a shape of (N_SAMPLES, 10), so I had to make this modification to get the label information. I do not understand why I had to make this change. Is the shape of output correct?
Does the modification make sense? Thanks for the support
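One plausible reading (my assumption, not something confirmed in the thread): the plain Flatten collapsed the time axis before the final Dense layer, so the model output has no timestep dimension left, and output[:, -1] would select the logit of the last class rather than the last timestep. A numpy sketch of the two cases, with illustrative sizes:

```python
import numpy as np

# with TimeDistributed(Flatten()) the output keeps the time axis,
# and output[:, -1] correctly selects the last timestep
with_time = np.zeros((4, 10, 10))            # (N_SAMPLES, timesteps, classes)
preds = np.argmax(with_time[:, -1], axis=-1)  # one label per sample

# with a plain Flatten the time axis is gone, so the model returns
# (N_SAMPLES, classes) and argmax must be taken over the whole output
no_time = np.zeros((4, 10))                   # (N_SAMPLES, classes)
preds2 = np.argmax(no_time, axis=-1)          # one label per sample
```

So the modified line matches the (N_SAMPLES, 10) output, but it silently drops the per-timestep information that the original check relied on.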