Christopher Suter
There's some historical context [here](https://github.com/tensorflow/tensorflow/issues/4638), and (substantially) more [here](https://github.com/tensorflow/tensorflow/issues/206). In particular, @shoyer points to [a NEP](http://www.numpy.org/neps/nep-0021-advanced-indexing.html) discussing a proposal to work around shortcomings of numpy advanced indexing. I know nothing...
can you use `log_normal.copy()`?
actually, this isn't a deep copy. it will reuse the parameters passed to the `log_normal` constructor. maybe it will suffice for your use case, though.
The underlying machinery wants `components_distribution` to be manifestly "factorized" (or at least "factorizable"). `MVNDiag` actually *should* qualify, but isn't implemented as such. A somewhat hacky workaround would be to replace...
Can you try saving with the argument `save_format='h5'`? This works for me. You will also need to change your `Dense` layer size, I think. Use `params_size` to automatically get the...
Most TFP distributions will happily compute and return the "natural" formula for (log) prob outside of the support. This is an intentional, if somewhat surprising, choice. There are at least...
One more note: if you enable the `validate_args` flag, which is false by default for the same reasons given in (1) above, then (IIRC) most if not all distributions will...
please feel free to raise problem/application-specific questions here or on the mailing list ([email protected]). people are generally eager to jump in with ideas/solutions and we all love thinking about this...
Can you do
```python
params, losses = tfp.math.minimize_stateless(
    lambda x: loss_fun(x, a),
    init_params(),
    num_steps=1000,
    optimizer=optimizer)
```
What issue are you running into implementing `sample`? Can you maybe post some code that isn't working but you think should?