Emile van Krieken
There should be a tutorial on how to walk the stochastic computation graph, given a node. This would allow people to more easily create complex local baselines and critics.
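Such a tutorial could build on a simple depth-first walk over the parent links of a node. The sketch below is illustrative only: the `Node` class and its `parents`/`stochastic` attributes are hypothetical stand-ins, not Storchastic's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    # Hypothetical stand-in for a tensor in a stochastic computation graph;
    # not Storchastic's actual API.
    name: str
    stochastic: bool = False
    parents: list = field(default_factory=list)

def walk(node, visited=None):
    """Depth-first walk over all ancestors of `node`, yielding each once."""
    if visited is None:
        visited = set()
    for parent in node.parents:
        if id(parent) not in visited:
            visited.add(id(parent))
            yield parent
            yield from walk(parent, visited)

# Example graph: cost <- decoder <- z (stochastic) <- encoder <- x
x = Node("x")
enc = Node("encoder", parents=[x])
z = Node("z", stochastic=True, parents=[enc])
cost = Node("cost", parents=[Node("decoder", parents=[z])])

# A local baseline or critic only needs the stochastic ancestors of a cost node:
stochastic_ancestors = [n.name for n in walk(cost) if n.stochastic]
print(stochastic_ancestors)  # -> ['z']
```

With a walk like this exposed and documented, users could collect exactly the stochastic nodes upstream of a cost and attach a baseline or critic to each.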
There should be a good explanation/tutorial on credit assignment, showing what changes when losses are not split into separate cost nodes, which prevents storchastic from properly assigning credit.
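To illustrate what such a tutorial should cover: with a score-function estimator, a stochastic node's learning signal should only include the costs downstream of it. The toy sketch below (plain Python, no Storchastic; the `downstream` map is an assumed toy graph) shows how merging two independent costs into one scalar gives both nodes the full cost as a signal, while registering them separately assigns each node only its own cost:

```python
# Two stochastic nodes a and b; cost_a depends only on a, cost_b only on b.
# A score-function estimator weights grad log-prob of node n by the sum of
# costs downstream of n.
costs = {"cost_a": 2.0, "cost_b": 5.0}
downstream = {"a": ["cost_a"], "b": ["cost_b"]}  # assumed toy dependency graph

def credit(node, split):
    """Cost signal a node receives, with or without splitting the losses."""
    if split:
        # Separate cost nodes: only the costs that depend on this node.
        return sum(costs[c] for c in downstream[node])
    # One merged loss: every node is weighted by the full cost.
    return sum(costs.values())

print(credit("a", split=True), credit("a", split=False))  # -> 2.0 7.0
```

Registering the losses as separate costs (e.g. via separate `storch.add_cost` calls) is what lets the library drop `cost_b` from node `a`'s signal, which lowers the variance of the estimator; summing them into one loss first throws that structure away.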
Not sure what this technique is called; see this paper: https://arxiv.org/abs/1806.02867
See https://arxiv.org/abs/1807.11143 and http://proceedings.mlr.press/v97/yin19c/yin19c.pdf
After training for a while, some methods (at least ScoreFunction with no baseline, and LAX) on the normal VAE crash with:
```
/opt/conda/conda-bld/pytorch_1579040055865/work/aten/src/THCUNN/BCECriterion.cu:57: void bce_updateOutput_no_reduce_functor::operator()(const Dtype *, const Dtype...
```
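This CUDA assert in `BCECriterion` typically fires when the predicted probabilities drift outside [0, 1], e.g. from numerical error after a sigmoid or from NaNs in the decoder. A hedged workaround, assuming the crash is caused by out-of-range inputs rather than a Storchastic bug, is to clamp the probabilities just inside the valid range before computing binary cross-entropy. A minimal plain-Python sketch of the idea (`EPS` is an assumed tolerance):

```python
import math

EPS = 1e-7  # assumed tolerance; must be larger than the rounding error

def safe_bce(p, target, eps=EPS):
    """Binary cross-entropy on a single probability, with the input clamped
    to [eps, 1 - eps] so log() never sees 0 or a value above 1."""
    p = min(max(p, eps), 1.0 - eps)  # clamp out-of-range values
    return -(target * math.log(p) + (1.0 - target) * math.log(1.0 - p))

# A probability that overshot 1.0 by a rounding error no longer blows up:
print(safe_bce(1.0000001, 1.0))
```

In the actual VAE the same effect can be had with `probs.clamp(eps, 1 - eps)` before calling `F.binary_cross_entropy`; whether that is the right fix here, or the out-of-range values point at a deeper estimator bug, would need investigation.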
For advanced users, there should be a storchastic version that does not monkey patch. As a lot of code currently relies on the monkey-patched code, this could require a...
The generic backward call that collects all losses and simply calls backward on them is incompatible with the implementation of REBAR: we require computing the gradients with respect to the...
For some reason, `Tensor.backward` is not monkey patched: passing a `storch.Tensor` as input to `Tensor.backward` crashes.
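A tutorial-level sketch of what patching `backward` could look like, with plain-Python stand-ins (`PlainTensor`, `Wrapped`, and the unwrapping logic are all hypothetical, not Storchastic's actual implementation): wrap the original method so that wrapped tensors are unwrapped before the call is forwarded.

```python
class PlainTensor:
    # Hypothetical stand-in for torch.Tensor.
    def backward(self, gradient=None):
        return "called backward with " + type(gradient).__name__

class Wrapped:
    # Hypothetical stand-in for storch.Tensor: wraps a PlainTensor.
    def __init__(self, tensor):
        self._tensor = tensor

# Monkey patch: keep a reference to the original, then replace it with a
# version that unwraps Wrapped arguments before forwarding the call.
_original_backward = PlainTensor.backward

def _patched_backward(self, gradient=None):
    if isinstance(gradient, Wrapped):
        gradient = gradient._tensor  # unwrap, as the other patched ops do
    return _original_backward(self, gradient)

PlainTensor.backward = _patched_backward

t = PlainTensor()
print(t.backward(Wrapped(PlainTensor())))  # -> called backward with PlainTensor
```

The same pattern presumably already exists for the other patched `Tensor` methods, so the fix may just be adding `backward` to the list of patched names.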
TorchScript and Storchastic are fundamentally incompatible. Problems:
- [ ] Monkey patching built-in methods makes TorchScript unable to recognize these methods. Example to reproduce: first import storch, so that monkey...
Write an `__iter__` for storch tensors, possibly by iterating over all event dims and properly wrapping everything. Is this allowed? It exposes the length of the torch tensor, but that's...
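A minimal sketch of the idea, iterating over the first event dimension and re-wrapping each slice. `EventTensor` and its nested-list `data` are illustrative stand-ins, not Storchastic's actual representation:

```python
class EventTensor:
    # Illustrative stand-in for a storch.Tensor: `data` is a nested list
    # whose axes are the event dims (plate dims omitted for simplicity).
    def __init__(self, data, n_event_dims):
        self.data = data
        self.n_event_dims = n_event_dims

    def __len__(self):
        # Exposes the length of the first event dimension.
        return len(self.data)

    def __iter__(self):
        # Iterate over the first event dim, re-wrapping each slice so the
        # result is still a wrapped tensor with one event dim fewer.
        for slice_ in self.data:
            yield EventTensor(slice_, self.n_event_dims - 1)

t = EventTensor([[1, 2], [3, 4], [5, 6]], n_event_dims=2)
print([row.data for row in t])  # -> [[1, 2], [3, 4], [5, 6]]
```

In the real library each yielded slice would have to keep its plate dims intact and only index into event dims, which is where the "is this allowed?" question about exposing the underlying tensor's length comes in.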