Feature request: Different activation functions for different hidden layers in tfp.bijectors.AutoregressiveNetwork
I would like to propose the following enhancement. In tfp.bijectors.AutoregressiveNetwork (https://www.tensorflow.org/probability/api_docs/python/tfp/bijectors/AutoregressiveNetwork) there does not appear to be a way to specify different activations for different hidden layers. The hidden_units argument allows specifying a network with multiple hidden layers; e.g., [10, 10] specifies two hidden layers with 10 units each. The activation argument, however, does not accept a list of activations: it takes a single activation function, which is applied to all hidden layers. It would be useful to be able to specify a different activation for each hidden layer.
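As a sketch of how the proposed semantics could look (the helper below is hypothetical, not part of the current TFP API): the activation argument could accept either a single activation, preserving today's behaviour, or a list with one entry per hidden layer.

```python
def resolve_activations(activation, num_hidden_layers):
    """Hypothetical helper: broadcast one activation or validate a per-layer list."""
    if isinstance(activation, (list, tuple)):
        if len(activation) != num_hidden_layers:
            raise ValueError(
                "Expected one activation per hidden layer "
                f"({num_hidden_layers}), got {len(activation)}.")
        return list(activation)
    # Current behaviour: a single activation applied to every hidden layer.
    return [activation] * num_hidden_layers

# With hidden_units=[10, 10], one could then pass activation=['relu', 'tanh'].
print(resolve_activations('relu', 2))            # ['relu', 'relu']
print(resolve_activations(['relu', 'tanh'], 2))  # ['relu', 'tanh']
```

Since a single activation broadcasts to all layers, this change would be backwards compatible with existing calls.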
Along the same lines, would it be feasible to allow other layer features to be specified per hidden layer (e.g. different initialisers, dropout) for the feedforward networks that describe the conditional probabilities?
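Extending the same idea, these per-layer options could be passed as a list of keyword-argument dicts, one per hidden layer. The interface below is only a hypothetical sketch (the argument name and dict keys are assumptions, not existing TFP parameters):

```python
def resolve_layer_kwargs(layer_kwargs, num_hidden_layers):
    """Hypothetical helper: broadcast shared kwargs or accept a per-layer list.

    Each dict could carry options for the corresponding hidden layer,
    e.g. a kernel_initializer name or a dropout rate.
    """
    if layer_kwargs is None:
        return [{} for _ in range(num_hidden_layers)]
    if isinstance(layer_kwargs, dict):
        # One shared dict: copy it for every hidden layer.
        return [dict(layer_kwargs) for _ in range(num_hidden_layers)]
    if len(layer_kwargs) != num_hidden_layers:
        raise ValueError(
            "Expected one kwargs dict per hidden layer "
            f"({num_hidden_layers}), got {len(layer_kwargs)}.")
    return [dict(kw) for kw in layer_kwargs]

# E.g., with hidden_units=[10, 10]:
per_layer = resolve_layer_kwargs(
    [{'kernel_initializer': 'he_normal', 'dropout': 0.1},
     {'kernel_initializer': 'glorot_uniform', 'dropout': 0.0}], 2)
print(per_layer[0]['dropout'])  # 0.1
```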