Documentation bug for TOMS 748
At least, I am pretty sure this is a documentation bug.
I asked this question on StackOverflow:
https://stackoverflow.com/q/78795676/
The TOMS 748 documentation states:
> f(x) need not be uniformly increasing or decreasing on x and may have multiple roots. However, the bounds given must bracket a single root.
The responses -- and a straightforward reading of the TOMS 748 paper -- suggest that this is not correct, and that the bounds merely must bracket at least one root.
This may seem like a trivial difference, but for my application it matters a lot. Thanks.
So the recommended diff is?
> the bounds given must bracket a ~~single~~ root.
That seems ok.
FWIW, we do have a cubic solver.
@NAThompson I would probably go with "...must bracket at least one root".
Thanks for the reference to the cubic solver. I believe the bracketing approaches are more appropriate for my use case, but speed is also a concern, so I will probably experiment with both.
Hmmm, I'm nervous about this. I suspect a more accurate wording would be "must bracket an odd number of roots", because an even number could be zero, which would break the algorithm's invariants. If you have a concrete use case (or can concoct some examples), I'll add it to the tests as well just to be sure ;)
@jzmaddock I am pretty sure "at least one" is correct (and rules out zero).
My point is that any bracketing algorithm cannot tell the difference between 2 roots and zero roots. Consider bisecting a function with two roots, say at x=2 and x=3, with a starting range of [0,10].
Now f(0) and f(10) have the same sign; we then examine f(5) and hey... that has the same sign as well! So which way do we jump next? There is simply no way to tell that the [0,5] interval has those two roots.
@jzmaddock
Your example does not satisfy the preconditions.
It is a precondition that a < b and that a and b bracket the root to be found, so that f(a) * f(b) < 0.
As far as I know, all of these "bracketing" algorithms work similarly, falling back to bisection when things get weird. They all require that f(a) and f(b) have opposite signs and that [a,b] contain at least one root.
TOMS 748 fits this description, anyway.
Note that the number of roots in [a,b] can be even if some have multiplicity. TOMS 748 (and pretty much any "bracketing" algorithm, I think) will still work.
> Your example does not satisfy the preconditions.

Well... that's why I gave it as an example. If we exclude the special cases of roots that touch but do not cross the axis, then requiring `f(a)*f(b) < 0` is the same as saying "an odd number of roots". It is of course a much more accurate precondition.
Whatever, I'll tidy up the language, and add some tests on multiple roots in oscillating functions just to be sure.
> > Your example does not satisfy the preconditions.
>
> Well... that's why I gave it as an example. If we exclude the special cases of roots that touch but do not cross the axis, then requiring `f(a)*f(b) < 0` is the same as saying "an odd number of roots". It is of course a much more accurate precondition.
I am sorry, but I still do not see the problem.
Why would we exclude the case of roots that touch but do not cross the axis? As far as I know, TOMS 748 imposes no such restriction.
And the requirement f(a)*f(b) < 0 is already stated in the documentation here:
https://github.com/boostorg/math/blob/boost-1.85.0/doc/html/math_toolkit/roots_noderiv/TOMS748.html#L116-L119
So I am pretty sure the documentation is fine as it is, except that "a single root" should read "at least one root".
> Whatever, I'll tidy up the language, and add some tests on multiple roots in oscillating functions just to be sure.
Thank you.
The multiplicity case -- e.g. touching but not crossing the axis -- is actually somewhat interesting, since it is the sort of thing the derivative-based solvers sometimes struggle with. This is one reason I am leaning towards the bracketing solvers.
> Why would we exclude the case of roots that touch but do not cross the axis? As far as I know, TOMS 748 imposes no such restriction.
It's not just derivative solvers -- in this case the condition number of root-finding is infinite. In other words, this is a property of the problem, not of the method used to solve it. See Corless, A Graduate Introduction to Numerical Methods from the Viewpoint of Backward Error Analysis, equation (3.20).
As a practical matter, however, you may be right -- when you have infinite ill-conditioning, different methods can do better or worse -- though it may be quite difficult to know whether it's just a particular tested case or a fundamentally better interaction of the method with the problem.