
Missing documentation for results dictionary

Open jacopok opened this issue 1 year ago • comments

  • UltraNest version: 3.6.4
  • Python version: 3.11.0
  • Operating System: Ubuntu 22.04

Description

I cannot seem to find any spot in the documentation where the meaning of all the various entries in the result dictionary is explained (e.g. the dictionary obtained by result = sampler.run() in the Basic usage tutorial). In the API documentation, all it says is "Yields: results (dict)".
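For context, a minimal sketch of how that dictionary is obtained, following the Basic usage tutorial (the toy data, model and parameter names here are made up purely for illustration):

```python
import numpy as np
import ultranest

param_names = ["mu", "sigma"]                    # hypothetical parameter names
data = np.random.normal(1.0, 0.5, size=100)      # toy data for illustration

def prior_transform(cube):
    # map the unit hypercube to the physical parameter space
    params = cube.copy()
    params[0] = cube[0] * 10 - 5                 # mu: uniform in [-5, 5]
    params[1] = cube[1] * 2 + 0.01               # sigma: uniform in [0.01, 2.01]
    return params

def log_likelihood(params):
    mu, sigma = params
    return -0.5 * np.sum(((data - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2))

sampler = ultranest.ReactiveNestedSampler(param_names, log_likelihood, prior_transform)
result = sampler.run()
print(sorted(result.keys()))                     # the entries this issue asks to document
```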

Proposal

I'd be glad to contribute a PR to add this reference! Here is what I think I got from the names and some guessing (a short access sketch follows the list):

  • niter (int): number of sampler iterations (not likelihood evaluations!)
  • logz (float64): natural logarithm of the evidence $Z = \int p(d|\theta) p(\theta) \text{d}\theta$
  • logzerr (float64): $1\sigma$ error on $\log Z$ (can be safely assumed to be Gaussian)
  • logz_bs (float64): ?
  • logz_single (float64): ?
  • logzerr_tail (float64): ?
  • logzerr_bs (float64): ?
  • ess (float64): ?
  • H (float64): information gained, i.e. relative entropy of the posterior with respect to the prior, $H = \int \log\left(\frac{p(\theta | d)}{p(\theta)}\right) p(\theta | d) \, \text{d}\theta$ (?)
  • Herr (float64): (Gaussian) error on $H$ (?)
  • posterior (dict): summary information on the posterior marginals - a dictionary of lists, each with one entry per fit parameter, denoted $\theta_i$ in the following:
    • mean (list): expectation value of $\theta_i$
    • stdev (list): standard deviation of $\theta_i$
    • median (list): median of $\theta_i$
    • errlo (list): one-sigma lower quantile of the marginal for $\theta_i$, i.e. $15.8655$% quantile
    • errup (list): one-sigma upper quantile of the marginal for $\theta_i$, i.e. $84.1345$% quantile
    • information_gain_bits (list): information gain from the marginal prior on $\theta_i$ to the posterior
  • weighted_samples (dict): weighted samples from the posterior, as computed during sampling, sorted by their log-likelihood value
    • upoints (ndarray): sample locations in the unit cube $[0, 1]^{d}$, where $d$ is the number of parameters - the shape is n_iter by $d$
    • points (ndarray): sample locations in the physical, user-provided space (same shape as upoints)
    • weights (ndarray): sample weights - shape n_iter, they add to 1
    • logw (ndarray): ?
    • bootstrapped_weights (ndarray): ?
    • logl (ndarray): log-likelihood values at the sample points (?)
  • samples (ndarray): re-weighted posterior samples: distributed according to $p(\theta | d)$ - these points are not sorted, and can be assumed to have been randomly shuffled (?)
  • maximum_likelihood (dict): summary information on the maximum-likelihood point $\theta_{ML}$ found during the posterior exploration
    • logl (float64): value of the log-likelihood at that point, $\log p(d | \theta_{ML})$
    • point (list): coordinates of $\theta_{ML}$ in the physical space
    • point_untransformed (list): coordinates of $\theta_{ML}$ in the unit cube
  • ncall (int): total number of likelihood evaluations (accepted and not)
  • paramnames (list): input parameter names
  • logzerr_single (float64): (?)
  • insertion_order_MWW_test (dict) (?)
    • independent_iterations (float)
    • converged (bool)
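To make the proposal concrete, here is a short access sketch, continuing from the snippet above (it only uses keys from the list, some of which are still guesses):

```python
# continuing from the run() sketch above (numpy imported there as np)
logz, logzerr = result["logz"], result["logzerr"]
print(f"ln Z = {logz:.2f} +/- {logzerr:.2f}")

# per-parameter posterior summaries; the lists are aligned with paramnames
for i, name in enumerate(result["paramnames"]):
    mean = result["posterior"]["mean"][i]
    stdev = result["posterior"]["stdev"][i]
    print(f"{name}: {mean:.3f} +/- {stdev:.3f}")

# equally weighted posterior samples, shape (n_samples, n_params)
samples = result["samples"]

# weighted samples as produced during the run; the weights add to 1
points = result["weighted_samples"]["points"]
weights = result["weighted_samples"]["weights"]
assert np.isclose(weights.sum(), 1.0)
```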

jacopok avatar Feb 10 '24 16:02 jacopok

Yes please. Some of it is documented at https://johannesbuchner.github.io/UltraNest/performance.html#output-files but it would be best to have it in the API docs.

JohannesBuchner avatar Feb 10 '24 20:02 JohannesBuchner

Hi @jacopok, would you be able to help out with a pull request? You'd need to add information here: https://github.com/JohannesBuchner/UltraNest/blob/master/ultranest/integrator.py#L2433
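For reference, a hedged sketch of what the added documentation could look like (a hypothetical numpydoc-style fragment; the key descriptions are taken from the list above, and the exact placement and format should follow the existing docstrings in ultranest/integrator.py):

```python
# hypothetical docstring fragment; real signature and placement should match
# the existing run()/run_iter() docstrings in ultranest/integrator.py
"""
Returns
-------
results : dict
    Estimates of the integral and posterior, including:

    * ``logz`` (float): natural logarithm of the evidence Z
    * ``logzerr`` (float): 1-sigma uncertainty on ``logz``
    * ``niter`` (int): number of sampler iterations
    * ``ncall`` (int): total number of likelihood evaluations
    * ``paramnames`` (list): input parameter names
    * ``posterior`` (dict): per-parameter marginal summaries
      (``mean``, ``stdev``, ``median``, ``errlo``, ``errup``,
      ``information_gain_bits``)
    * ``samples`` (ndarray): equally weighted posterior samples
    * ``weighted_samples`` (dict): weighted samples from the run
    * ``maximum_likelihood`` (dict): best-fit point and its log-likelihood
"""
```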

JohannesBuchner avatar May 21 '24 15:05 JohannesBuchner

Hi, sorry for disappearing! I meant to do this quickly, but then other things took priority. I've made a draft PR now; it's not complete yet, since I had some trouble building the docs.

jacopok avatar May 23 '24 12:05 jacopok

Solved by #138

jacopok avatar May 29 '24 19:05 jacopok