Michael Deistler
Need to double-check: some worrying results suggest there might be a bug.
Copying very large datasets is undesirable. @tbmiller-astro suggested that some of these copies could be removed. Candidates where we create local copies of the dataset:...
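One way to avoid such copies is to hold a view into the existing storage instead of allocating a new buffer. A minimal stand-alone sketch of the idea, using Python's built-in `memoryview` as a stand-in for tensor storage (in the code base the analogous distinction is between torch views and copies):

```python
# A large dataset held once in memory.
data = bytearray(range(100))

# A copy allocates a second buffer and does not track later updates.
subset_copy = bytes(data[10:20])

# A view shares the underlying buffer: no copy is made.
subset_view = memoryview(data)[10:20]

# Mutating the original is reflected in the view but not in the copy.
data[10] = 255
assert subset_view[0] == 255
assert subset_copy[0] == 10
```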
- Should our tests be bimodal?
- Slow tests for all pyro samplers (HMC requires gradients...)
- Should we start thinking about a `v1.0.0` release?
- Progress bar for rejection...
This should happen only after we have moved to pyro's flows.
1) We should provide a tutorial on how to use embedding nets.
2) We should have pre-configured RNNs and CNNs.
See also #162.
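As a starting point, a pre-configured CNN embedding could look like the sketch below. The class name and architecture are purely illustrative (not existing sbi API): it maps raw 1D observations, e.g. time series of length 100, to a low-dimensional summary vector that a density estimator can consume.

```python
import torch
from torch import nn


class CNNEmbedding(nn.Module):
    """Hypothetical pre-configured CNN embedding net (illustration only)."""

    def __init__(self, input_length=100, embedding_dim=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(4 * (input_length // 2), embedding_dim),
        )

    def forward(self, x):
        # x: (batch, input_length) -> add a channel dim for Conv1d.
        return self.net(x.unsqueeze(1))


x = torch.randn(3, 100)
summary = CNNEmbedding()(x)
assert summary.shape == (3, 8)
```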
Currently, we are relying on `nflows` for our flows. I think that pyro's flows look pretty great, see e.g. [here](http://docs.pyro.ai/en/latest/distributions.html#spline) for NSF. They very nicely separate the transformer from the...
We might want to use `nsf` as the default density estimator. It almost always performs better, at the cost of being more computationally expensive.
Currently, our simple interface allows only very few kwargs. We could add two arguments, `init_kwargs` and `call_kwargs`, both dictionaries. Not sure if we want this...
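A minimal sketch of how the two dictionaries could be forwarded (the `DensityEstimator` class and the wrapper are hypothetical; only the names `init_kwargs`/`call_kwargs` come from the proposal above):

```python
class DensityEstimator:
    """Hypothetical density estimator with constructor and call options."""

    def __init__(self, hidden_features=50, num_transforms=5):
        self.hidden_features = hidden_features
        self.num_transforms = num_transforms

    def train(self, learning_rate=5e-4, max_epochs=100):
        return {"lr": learning_rate, "epochs": max_epochs}


def build_and_train(init_kwargs=None, call_kwargs=None):
    """Forward init_kwargs to __init__ and call_kwargs to train()."""
    estimator = DensityEstimator(**(init_kwargs or {}))
    result = estimator.train(**(call_kwargs or {}))
    return estimator, result


estimator, result = build_and_train(
    init_kwargs={"hidden_features": 100},
    call_kwargs={"learning_rate": 1e-3},
)
assert estimator.hidden_features == 100
assert result["lr"] == 1e-3
```

Unspecified keys fall back to the defaults, so the simple interface stays simple for users who pass nothing.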
SNPE-B is currently not implemented. When implementing it, take care with the following:
- we evaluate the importance weights always on the last posterior, no matter what round the samples...
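For reference, the SNPE-B importance weight is the prior density over the proposal density, w(theta) = p(theta) / p~(theta). If I read the caveat above correctly, the point is that the proposal in the denominator matters: the same theta gets different weights depending on which proposal is used. A toy sketch with hypothetical Gaussian densities standing in for prior and proposal:

```python
import math


def gaussian_pdf(x, mean=0.0, std=1.0):
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))


def importance_weight(theta, proposal_mean, proposal_std):
    """SNPE-B weight: prior density over proposal density, w = p(theta) / p~(theta)."""
    prior = gaussian_pdf(theta, 0.0, 2.0)  # hypothetical broad prior
    proposal = gaussian_pdf(theta, proposal_mean, proposal_std)
    return prior / proposal


# The same theta weighted against two different proposals (e.g. the
# posteriors from two different rounds) gives two different weights.
w_round1 = importance_weight(0.5, proposal_mean=0.0, proposal_std=1.0)
w_round2 = importance_weight(0.5, proposal_mean=0.5, proposal_std=0.5)
assert w_round1 != w_round2
```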
OS: Ubuntu 20.04 LTS
Python: 3.8

### Setup

I installed within a conda environment with:

```
sudo apt-get install libxml2-dev libxslt-dev
pip3 install svgutils --user
```

### Problem

```python
#!/usr/bin/env...
```