
Feature request: Different activation functions for different hidden layers in tfp.bijectors.AutoregressiveNetwork

Open anirban-mukherjee opened this issue 4 years ago • 1 comments

I would like to propose the following enhancement. In tfp.bijectors.AutoregressiveNetwork (https://www.tensorflow.org/probability/api_docs/python/tfp/bijectors/AutoregressiveNetwork) there does not seem to be a way to specify different activations for different hidden layers. Specifically, hidden_units allows for the specification of a network with multiple hidden layers; e.g., [10, 10] specifies two hidden layers with 10 units each. The activation argument, however, does not accept a list of activations. Instead, it takes a single activation function, which is applied to all hidden layers. It would be useful to be able to specify a different activation for each hidden layer.
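To illustrate the requested behavior, here is a minimal NumPy sketch (deliberately not TFP code, and without the autoregressive masking that AutoregressiveNetwork applies) of a feedforward builder whose hypothetical activations argument is a list paired element-wise with hidden_units:

```python
import numpy as np

def make_net(n_inputs, hidden_units, activations, seed=0):
    """Hypothetical sketch: one activation per hidden layer.

    activations must have the same length as hidden_units; the
    output layer is left linear, as in AutoregressiveNetwork.
    """
    assert len(activations) == len(hidden_units)
    rng = np.random.default_rng(seed)
    sizes = [n_inputs] + list(hidden_units) + [n_inputs]
    weights = [rng.normal(scale=0.1, size=(a, b))
               for a, b in zip(sizes[:-1], sizes[1:])]

    def forward(x):
        h = x
        # Apply each hidden layer with its own activation function.
        for w, act in zip(weights[:-1], activations):
            h = act(h @ w)
        return h @ weights[-1]  # linear output layer

    return forward

relu = lambda z: np.maximum(z, 0.0)

# Two hidden layers of 10 units, with relu then tanh.
net = make_net(3, hidden_units=[10, 10], activations=[relu, np.tanh])
out = net(np.ones(3))
print(out.shape)  # (3,)
```

The make_net name and the list-valued activations parameter are assumptions for illustration only; the point is simply that zipping a list of activations against hidden_units is a natural extension of the existing single-activation API.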

anirban-mukherjee avatar Feb 23 '22 05:02 anirban-mukherjee

Along the same lines, would it be feasible to allow specifying other per-layer options (e.g., different initialisers, dropout) for the feedforward networks that describe the conditional probabilities?

anirban-mukherjee avatar Mar 02 '22 03:03 anirban-mukherjee