Yes, that's a very good idea! I have code for SBC by Talts et al. This would require a few hyperparameter decisions, which we should discuss. But the coding should be...
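For context, a minimal sketch of the SBC rank loop from Talts et al. (the run counts below are placeholders for exactly the hyperparameters we would need to decide on, and the `posterior.sample(..., x=...)` signature is an assumption about the inference object at hand):

```python
import torch

def sbc_ranks(prior, simulator, posterior,
              num_sbc_runs=200, num_posterior_samples=100):
    """SBC (Talts et al.): if the posterior is well calibrated,
    the returned ranks are uniformly distributed."""
    ranks = []
    for _ in range(num_sbc_runs):
        theta_o = prior.sample((1,))          # ground-truth parameter
        x_o = simulator(theta_o)              # corresponding observation
        samples = posterior.sample((num_posterior_samples,), x=x_o)
        # Per-dimension rank of theta_o among the posterior samples.
        ranks.append((samples < theta_o).sum(dim=0))
    return torch.stack(ranks)                 # (num_sbc_runs, parameter_dim)
```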
I also just noticed that our logging is very rudimentary: e.g., we only log the number of epochs trained and the single best validation performance. It would be much better...
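For instance, something along these lines (a hypothetical sketch of fuller logging, not our current code), keeping the full per-epoch curves instead of a single scalar:

```python
import random

# Hypothetical sketch: record full loss curves, not just the best value.
summary = {"train_loss": [], "validation_loss": [],
           "best_validation_loss": float("inf")}

for epoch in range(10):
    train_loss, val_loss = random.random(), random.random()  # stand-ins for real losses
    summary["train_loss"].append(train_loss)
    summary["validation_loss"].append(val_loss)
    summary["best_validation_loss"] = min(summary["best_validation_loss"], val_loss)
```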
Thanks for reporting this! I had a look at your example. It seems the `logits` of the MoG proposal, https://github.com/mackelab/sbi/blob/bb6150e54a0ba2e7c15432d52f53c130ced2a63c/sbi/inference/snpe/snpe_c.py#L555-L620, sometimes take `inf` values for some parameters. We will have...
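In the meantime, a quick way to confirm you are hitting the same failure is to check the tensor for non-finite entries right where it is computed (a diagnostic sketch, not library code):

```python
import torch

def assert_finite(logits: torch.Tensor) -> None:
    # Fail fast if any mixture logits are inf/nan.
    if not torch.isfinite(logits).all():
        bad = (~torch.isfinite(logits)).nonzero()
        raise ValueError(f"non-finite logits at indices {bad.tolist()[:10]}")
```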
Thanks for the additional info that it is not restricted to MDNs, but happens with NSF as well. We will have a look before the next release (which we are...
I ran a couple of experiments using a linear Gaussian simulator with increasing dimensions and increasing prior widths. In this setting, the problem occurred only for `mdn` during...
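Roughly the setup, for reproducibility (a sketch: the dimension and prior width were swept over increasing values, and the exact training settings may have differed):

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

dim, prior_width = 10, 5.0  # both increased across experiments

prior = BoxUniform(-prior_width * torch.ones(dim), prior_width * torch.ones(dim))

def simulator(theta):
    # Linear Gaussian: identity mapping plus unit Gaussian noise.
    return theta + torch.randn_like(theta)

theta = prior.sample((2000,))
x = simulator(theta)

inference = SNPE(prior=prior, density_estimator="mdn")
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)
```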
I see, interesting. Before, you were using `17.x`?
Hi, you can pass the priors in a list to `process_prior([prior1, prior2, ...])` (import with `from sbi.utils import process_prior`) and you will get back a `MultipleIndependent` prior.
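A minimal usage sketch (note that in recent versions `process_prior` returns a tuple; here I assume the combined prior is its first element):

```python
import torch
from torch.distributions import Normal, Uniform
from sbi.utils import process_prior

prior1 = Uniform(torch.zeros(1), torch.ones(1))  # first parameter
prior2 = Normal(torch.zeros(1), torch.ones(1))   # second parameter

# The list of 1D priors is combined into one joint MultipleIndependent prior.
prior, num_parameters, prior_returns_numpy = process_prior([prior1, prior2])

samples = prior.sample((10,))  # shape (10, 2): one column per sub-prior
```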
Hi @runburg, and thanks for reporting this issue. I assume you are using multi-round SNPE. When sampling from the posterior, we check whether the samples are within the prior...
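Schematically, the support check does something like this (a sketch assuming a torch.distributions-style prior with a `.support.check(...)` constraint):

```python
import torch

def reject_outside_support(prior, samples: torch.Tensor) -> torch.Tensor:
    # Keep only posterior samples that lie inside the prior support.
    within = prior.support.check(samples)
    if within.ndim > 1:            # per-dimension checks -> one bool per sample
        within = within.all(dim=-1)
    return samples[within]
```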
Good point! One reason to have separate methods was that the losses differ; e.g., `theta` and `x` are swapped for NLE vs. NPE. But if we make `self._loss(...)` sensitive...
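Schematically, the swap amounts to this (assuming an nflows-style `log_prob(inputs, context=...)` estimator; the function names are placeholders):

```python
def npe_loss(net, theta, x):
    # NPE: the network models p(theta | x).
    return -net.log_prob(theta, context=x).mean()

def nle_loss(net, theta, x):
    # NLE: same estimator class, inputs and context swapped: p(x | theta).
    return -net.log_prob(x, context=theta).mean()
```

A single `self._loss(...)` could then perform this swap internally depending on the method.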
fixed in #979