
JOSS review: Documentation

grburgess opened this issue on Mar 14 '21

In the documentation, several important features are listed but not explained. It would be helpful if these were explained quantitatively, with examples or comparisons:

when systematically analysing a large data-set

How does this differ from what is available in XSPEC/Sherpa?

when comparing multiple models

This is explained well for those already familiar with Bayes-factor model comparison, but a link to that part of the docs would be useful to make the point.
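For readers new to Bayes-factor comparison, here is a minimal sketch of what such a comparison might look like with BXA's PyXspec interface. The `BXASolver`, prior-helper and `run` names follow BXA's documentation as I understand it; the `logz`/`logzerr` result keys and the specific model and parameter choices are illustrative assumptions, not a definitive recipe:

```python
# Sketch: compare two XSPEC models via their Bayesian evidence (log Z).
# Assumes a spectrum is already loaded via PyXspec and that BXA's
# prior helpers and BXASolver behave as in the BXA documentation.
import bxa.xspec as bxa
from xspec import Model

def evidence(model, transformations, prefix):
    solver = bxa.BXASolver(transformations=transformations,
                           outputfiles_basename=prefix)
    results = solver.run(resume=True)   # runs until nested sampling terminates
    return results['logz'], results['logzerr']

# Model 1: power law (PhoIndex and norm are the free parameters)
m1 = Model("powerlaw")
t1 = [bxa.create_uniform_prior_for(m1, m1.powerlaw.PhoIndex),
      bxa.create_loguniform_prior_for(m1, m1.powerlaw.norm)]
logz1, err1 = evidence(m1, t1, "powerlaw/")

# Model 2: thermal plasma (kT and norm are the free parameters)
m2 = Model("apec")
t2 = [bxa.create_uniform_prior_for(m2, m2.apec.kT),
      bxa.create_loguniform_prior_for(m2, m2.apec.norm)]
logz2, err2 = evidence(m2, t2, "apec/")

# The (natural-log) Bayes factor is the difference of the log-evidences
print("ln Bayes factor (powerlaw over apec):", logz1 - logz2)
```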

when analysing low counts data-set

This claim is made here and here, beyond the standard use of proper Poisson likelihoods. What specific features make BXA shine in comparison? Can links, as well as comparative examples, be provided?
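As a point of reference, the Poisson likelihood itself comes from XSPEC rather than BXA; a minimal sketch of how it is typically enabled before a BXA run, assuming the standard PyXspec `Fit` interface:

```python
# Sketch: the Poisson likelihood for low-count spectra is an XSPEC feature
# (C-stat); BXA inherits it and adds the posterior exploration on top.
from xspec import Fit

Fit.statMethod = "cstat"   # switch XSPEC to the Poisson likelihood
# ... then define the model, priors and BXASolver as usual
```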

when you don’t want to babysit your fits

What is meant by this statement? As stated above, can examples be linked or shown that validate this? I assume it is because there is a stopping criterion in nested sampling, but is this meant to tell the user that they do not need to assess the correctness of their posterior?
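To make the "babysitting" question concrete, a hedged sketch (reusing a `transformations` prior list like the one in the sketch further up; the `samples` key of the returned results is an assumption about the results dictionary):

```python
# Sketch: a single call runs until the nested sampler's own termination
# criterion is met -- no chain initialisation, burn-in choice or
# convergence monitoring by the user.
import bxa.xspec as bxa

solver = bxa.BXASolver(transformations=transformations,
                       outputfiles_basename="fit/")
results = solver.run(resume=True)

# The sampler's stopping is automatic; judging whether the model itself is
# adequate (e.g. residual or QQ checks on the posterior) remains a separate,
# post-run step for the analyst.
posterior_samples = results['samples']
```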

when you don’t want to test MCMC chains for their convergence

I suppose this could be answered together with the previous question.

In general, it could be important to fully separate what is a feature or strength of BXA itself versus what is inherited from the packages it relies on / complements.

grburgess, Mar 14 '21 13:03

linking openjournals/joss-reviews#3045

grburgess, Mar 14 '21 13:03

Thank you for catching this. The paper and doc are now updated to state:

BXA shines especially

- when systematically analysing a large data-set, or
- when comparing multiple models
- when analysing low counts data-set with realistic models

because its robust and unsupervised fitting algorithm explores even complicated parameter spaces in an automated fashion. Unlike existing approaches, the user does not need to apply problem-specific algorithm parameter tuning, initialise to good starting points and check for convergence, allowing building automated analysis pipelines.

The main point is that automatic analyses can be performed, which is difficult with the available MCMC implementations (because they need to be combined with custom initialisation and termination criteria).
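As an illustration of what that enables in practice, a rough sketch of a hands-off loop over a sample of spectra; the `spectra` file list and the `setup_priors` helper are hypothetical placeholders, while the solver calls follow BXA's documented PyXspec interface:

```python
# Sketch: the same unsupervised call applied to every spectrum in a sample.
# No per-source starting points, tuning or convergence checks are needed,
# which is what makes a hands-off pipeline feasible.
import bxa.xspec as bxa
from xspec import AllData, Fit, Model

Fit.statMethod = "cstat"                     # Poisson likelihood throughout

for specfile in spectra:                     # hypothetical list of PHA files
    AllData.clear()
    AllData(specfile)                        # load this spectrum
    model = Model("wabs*powerlaw")
    transformations = setup_priors(model)    # hypothetical helper returning BXA priors
    solver = bxa.BXASolver(transformations=transformations,
                           outputfiles_basename=specfile + "_bxa/")
    solver.run(resume=True)                  # runs to termination unattended
```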

While a numerical study comparing the performance of MCMC flavours (plus initialisation and convergence checks) against BXA on an example problem is beyond the scope of this paper/documentation, the paper text was updated to go into more detail on the differences from existing MCMC implementations. I focus on highlighting the difference in the nature of the approaches (a complete solution with initialisation and termination, and no algorithm parameters that need to be tuned to the problem). It is certainly possible to do low-count analyses with the existing MCMC capabilities as well, at least for mono-modal, Gaussian-like posteriors.
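To illustrate that difference in nature rather than in benchmark numbers, here is a sketch of the per-problem choices a generic ensemble-MCMC run (emcee, purely as an example) typically asks the user to supply; `log_posterior`, `best_fit`, the walker count and the chain length are illustrative assumptions, not statements about any package's defaults:

```python
# Sketch of typical per-problem MCMC bookkeeping chosen by the user:
import numpy as np
import emcee

ndim, nwalkers = 3, 32                                   # problem-specific choices
p0 = best_fit + 1e-4 * np.random.randn(nwalkers, ndim)   # needs good starting points
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, 10000)                              # chain length chosen by hand
tau = sampler.get_autocorr_time()                        # convergence assessed afterwards

# Nested sampling (as wrapped by BXA) replaces all of the above with one
# call that starts from the prior and stops on its own criterion.
```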

JohannesBuchner, Mar 30 '21 10:03