nengo-dl

Deep learning integration for Nengo

Results: 32 nengo-dl issues

Motivation and context: There is a (tentatively) accepted pull request for adding [grouped convolutions to nengo](https://github.com/nengo/nengo/tree/convolution-groups-ci) already. Before adding grouped convolutions to nengo-loihi it is necessary to add grouped convolutions...

If you convert a Keras model where there is a connection from a neurons layer (e.g., Dense, Conv2D) to a layer that cannot be natively converted to a Nengo...
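
A minimal sketch of the kind of model being described, assuming a `Lambda` layer as the non-convertible destination (the actual layers in the report are not shown here); with `allow_fallback=True` the converter wraps such a layer in a TensorNode:

```python
import tensorflow as tf
import nengo_dl

inp = tf.keras.Input(shape=(4,))
# a "neurons" layer (Dense with a ReLU activation)...
x = tf.keras.layers.Dense(8, activation="relu")(inp)
# ...connected to a layer with no native Nengo equivalent; the converter
# falls back to a TensorNode for this layer when allow_fallback=True
x = tf.keras.layers.Lambda(lambda t: tf.sin(t))(x)
model = tf.keras.Model(inputs=inp, outputs=x)

converter = nengo_dl.Converter(model, allow_fallback=True)
```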

In [Coming from Nengo to NengoDL](https://github.com/nengo/nengo-dl/blob/511aa4e2052632f16523779604bf484a1e4ab4a8/docs/examples/from-nengo.ipynb) there is a small mistake: the text says that `RMSprop` will be used, but the code uses `Adam`:
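
For reference, either optimizer can be passed to `sim.compile`; a minimal sketch using `RMSprop` (the network below is a stand-in, not the notebook's model):

```python
import nengo
import nengo_dl
import tensorflow as tf

with nengo.Network() as net:
    inp = nengo.Node([0.0])
    ens = nengo.Ensemble(50, 1, neuron_type=nengo.RectifiedLinear())
    nengo.Connection(inp, ens)
    out = nengo.Probe(ens)

with nengo_dl.Simulator(net, minibatch_size=16) as sim:
    # the notebook text mentions RMSprop while the code cell uses Adam;
    # either optimizer works here
    sim.compile(
        optimizer=tf.keras.optimizers.RMSprop(0.01),
        loss={out: tf.losses.mse},
    )
```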

Trying to use the NengoDL converter with the built-in EfficientNet TensorFlow networks will cause a failure with this error: ``` :\xchoo\git\nengo-dl\nengo_dl\converter.py:324: UserWarning: Layer type Rescaling does not have a registered...
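
A hedged sketch of the setup being reported, assuming `EfficientNetB0` from `tf.keras.applications` (the exact network used in the report is not shown); EfficientNet models begin with a `Rescaling` preprocessing layer, which is the layer type the quoted warning refers to:

```python
import tensorflow as tf
import nengo_dl

# EfficientNet models include a Rescaling preprocessing layer; as reported,
# that layer type has no registered converter
model = tf.keras.applications.EfficientNetB0(weights=None)

# converting the model triggers the "Layer type Rescaling does not have a
# registered converter" warning quoted above
converter = nengo_dl.Converter(model)
```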

Dependencies: - `tensorflow==2.3.1` - `nengo_dl==3.4.0` This is a stripped-down version of https://www.nengo.ai/nengo-dl/examples/keras-to-snn.html with a `do_bug` flag added. This code evaluates the test accuracy using non-spiking activations with two different values...

NengoDL expects a Node with no input connection to receive an input during training/inference even if the node is generating an internal constant. There is a workaround using a lambda...
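
A minimal sketch of the pattern described, with illustrative names; the commented-out line shows the lambda workaround mentioned in the issue:

```python
import numpy as np
import nengo
import nengo_dl

with nengo.Network() as net:
    # a Node with a constant array output and no input connection; per the
    # report, NengoDL expects data to be provided for it during fit/predict
    const = nengo.Node(output=np.ones(2))
    # workaround from the issue: wrap the constant in a lambda so the node
    # is treated as a function of time rather than as a model input
    # const = nengo.Node(output=lambda t: np.ones(2))
    p = nengo.Probe(const)

with nengo_dl.Simulator(net) as sim:
    sim.predict(n_steps=5)  # per the report, data for `const` is expected here
```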

Right now it's possible to use `tf.keras.callbacks.ModelCheckpoint` to save (Keras) model weights during training under NengoDL, and then use `sim.keras_model.load_weights(...)` afterwards. However, it might be nice to have our own...
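
A sketch of the existing workaround described above, assuming a small stand-in network and a hypothetical `weights.hdf5` path:

```python
import numpy as np
import nengo
import nengo_dl
import tensorflow as tf

with nengo.Network() as net:
    inp = nengo.Node([0.0])
    ens = nengo.Ensemble(10, 1, neuron_type=nengo.RectifiedLinear())
    nengo.Connection(inp, ens)
    out = nengo.Probe(ens)

with nengo_dl.Simulator(net, minibatch_size=4) as sim:
    sim.compile(optimizer=tf.optimizers.SGD(), loss={out: tf.losses.mse})
    sim.fit(
        {inp: np.zeros((8, 1, 1))},
        {out: np.zeros((8, 1, 1))},
        epochs=2,
        # standard Keras callback, saving the underlying keras_model's weights
        callbacks=[
            tf.keras.callbacks.ModelCheckpoint("weights.hdf5", save_weights_only=True)
        ],
    )
    # restore the checkpointed weights onto the wrapped Keras model afterwards
    sim.keras_model.load_weights("weights.hdf5")
```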

Currently there isn't an easily accessible or documented way to train with spikes on the forward pass. It seems like it can be done by patching in your own builder...

According to the documentation: https://github.com/nengo/nengo-dl/blob/e9b359a4778aa7812488d9a40a8118c0b92b18ee/nengo_dl/converter.py#L69-L79 > Nonlinear neuron types (e.g. LIF) will be skewed by this linear scaling on the input/output. But this does not need to be the case....
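
For context, a hedged sketch of the parameter under discussion, assuming a toy Dense model and a swap to `nengo.LIF()` (both illustrative, not from the report):

```python
import nengo
import nengo_dl
import tensorflow as tf

inp = tf.keras.Input(shape=(4,))
dense = tf.keras.layers.Dense(8, activation=tf.nn.relu)(inp)
model = tf.keras.Model(inputs=inp, outputs=dense)

converter = nengo_dl.Converter(
    model,
    # scales the input to the neurons up and the outgoing weights down by the
    # same factor; exact for ReLU, but it skews the response of nonlinear
    # types like LIF, which is what the quoted documentation describes
    scale_firing_rates=100,
    swap_activations={tf.nn.relu: nengo.LIF()},
)
```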

Related to #206. Currently the nengo_dl converter's `scale_firing_rates` will gladly change the effective response that you get from any nonlinearity except for the linear/ReLU activations. There is this warning: https://github.com/nengo/nengo-dl/blob/e9b359a4778aa7812488d9a40a8118c0b92b18ee/nengo_dl/converter.py#L568-L573...