Incorrect optimization for x-sin(x)
Given bounds on x of -1e3 to 1e3, Herbie suggests replacing the expression with x^3/3! - x^5/5! + x^7/7!, even though in double precision this formula is only accurate for abs(x) < 0.07.
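For concreteness, here's a quick sanity check (just illustrative, not part of Herbie's output) comparing the suggested polynomial to a direct evaluation of x - sin(x), which is a fine reference once x is well away from zero:

```python
import math

def taylor3(x):
    # the replacement Herbie suggested: x^3/3! - x^5/5! + x^7/7!
    return x**3/6 - x**5/120 + x**7/5040

for x in (1.0, 10.0, 100.0):
    exact = x - math.sin(x)            # direct form is accurate away from zero
    approx = taylor3(x)
    print(f"x={x:6}  exact={exact:.6e}  taylor={approx:.6e}  "
          f"rel_err={abs(approx - exact) / abs(exact):.1e}")
```

At x = 1 the polynomial already agrees with the true value to only about five significant digits, and by x = 10 it is off by two orders of magnitude.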
Thanks for the bug report. At a high level, here's what I believe happened here.
Between -1e3 and 1e3, there are a lot of floating-point numbers with magnitudes between, say, 1e-300 and 1e-2, and relatively few between 1e-2 and 1e3. So the approximation works on "almost all" sampled values. Herbie believes it is giving up slightly less than 1 bit (by my calculations, about 0.8 bits) by not including a branch for x > 0.07, and so it decides the branch isn't worth it.
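As a back-of-the-envelope sketch (assuming the sampled points are roughly uniform in exponent down to the smallest subnormal, and that a point where the series fails costs on the order of 64 bits):

```python
import math

# Binades (powers of two) covered by positive doubles up to 1e3,
# counting all the way down to the smallest subnormal 2^-1074.
total_binades = math.log2(1e3) + 1074            # ~1084
# Binades where the truncated series is no good: roughly 0.07 .. 1e3.
bad_binades = math.log2(1e3) - math.log2(0.07)   # ~14

fraction_bad = bad_binades / total_binades       # ~1.3% of sampled points
print(fraction_bad * 64)                         # ~0.8 bits of average error
```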
One option for avoiding this is to specify a minimum value, like this:
```
:pre (or (<= 1e-6 (fabs x) 1e3) (= x 0))
```
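In context, the full input would look something like this (a sketch; the :name is just a label):

```
(FPCore (x)
  :name "x - sin(x)"
  :pre (or (<= 1e-6 (fabs x) 1e3) (= x 0))
  (- x (sin x)))
```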
You'd need to do that with FPCore input, not from the GUI. I'll bring this up in the meeting to see if we can make this clearer and mitigate it somehow.
I think the bigger problem is that max error is a lot more meaningful than mean error in many cases.
Right. In other cases, mean error is more meaningful. Unfortunately, there does not seem to be a single "best" cost metric. We're thinking about how to highlight this better. For what it's worth, you can pass --disable reduce:avg-error to ask Herbie to optimize for maximum error instead, but it does not work nearly as well.
I've added this as a benchmark in #521 and we'll track it in the future. Closing the bug.
FYI I had a similar error with: log(sinh(x)/x)
Perhaps a short error analysis when deciding on a Taylor expansion (by checking the size of the highest-order term when substituting the min/max of the range) could fix the problem?
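Something along these lines, say (just a sketch of the idea; the helper and threshold are made up, not Herbie's internals):

```python
def expansion_looks_valid(terms, x_edge, shrink=1e-3):
    """Hypothetical heuristic: evaluate each term of a truncated Taylor
    series at the edge of the input range and require the highest-order
    term to be a small correction to the leading one."""
    vals = [abs(t(x_edge)) for t in terms]
    return vals[-1] <= shrink * vals[0]

# terms of the suggested series for x - sin(x)
terms = [lambda x: x**3/6, lambda x: -x**5/120, lambda x: x**7/5040]

print(expansion_looks_valid(terms, 0.07))   # True  -- fine near zero
print(expansion_looks_valid(terms, 1e3))    # False -- invalid at the range edge
```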
Thanks for your work on Herbie, I love it!