Is there a bug with zeta(n)?

Hello,

**Describe the bug**
When you use zeta(n) inside math.evaluate(...), an error is thrown.
**To Reproduce**

```js
math.config({ number: "BigNumber", precision: 998, relTol: 1e-320, absTol: 1e-323 });
math.evaluate("zeta(3)")
```

```
gamma.js:89 Uncaught Error: Integer BigNumber expected
    at BigNumber (gamma.js:89:13)
    at gamma (typed-function.js:1465:22)
    at BigNumber (factorial.js:44:14)
    at factorial (typed-function.js:1462:22)
    at b (zeta.js:118:18)
    at zeta.js:135:34
    at x (zeta.js:98:14)
    at y (zeta.js:64:12)
    at BigNumber (zeta.js:36:23)
    at zeta (typed-function.js:1465:22)
```
The bug you encountered is with typical JavaScript implementations of Math.log10(): they return a non-integer for Math.log10(1e-320) even though the true value is patently an integer. Adding a Math.round() so that line 41 of zeta.js becomes

```js
return Math.round(Math.abs(Math.log10(config.relTol)))
```

allows the computation to continue.
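For illustration, a quick check in any JS console (the exact trailing digits vary by engine; the point is only that the result is not an exact integer):

```js
// 1e-320 is a subnormal double, so it is not exactly 10^-320,
// and Math.log10 of it lands slightly off the integer -320.
Math.log10(1e-320)                        // something like -319.99999999999994
Math.round(Math.abs(Math.log10(1e-320))) // 320 -- the integer zeta.js needs
```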
And I can verify that the returned value is indeed correct to 320 digits by checking it against the OEIS entry for zeta(3).
@josdejong: On the other hand, the returned value has many more digits, all of which are junk from roughly the 322nd digit on. The result is correct to the specified relTol, but the function is also returning a lot of meaningless information in the 600+ essentially random digits that follow. Should the result be truncated to the target precision before being returned? I'll wait to file the brief, easy PR for the bug nycos62 found until I hear on this, since it would be most convenient to do both at the same time.
*Bonus: there is a mismatch between the maximum precision of 998 and the maximum relTol of 1e-320 / maximum absTol of 1e-323.
Well, relTol and absTol are only used for comparing numbers. If you don't test for ordering or equality -- just do arithmetic -- you can perfectly well compute with the full precision afforded by decimal.js.
I actually forget why or where mathjs limits these further than the underlying decimal.js library does. @josdejong, is that documented anywhere?
Thanks for looking into this, Glen!
> On the other hand, the returned value has many more digits, all of which are junk from roughly the 322nd digit on. The result is correct to the specified relTol, but the function is also returning a lot of meaningless information in the 600+ essentially random digits that follow. Should the result be truncated to the target precision before being returned? I'll wait to file the brief, easy PR for the bug nycos62 found until I hear on this, since it would be most convenient to do both at the same time.
Though it is ugly, I think we should be pragmatic here and accept that there can be garbage digits beyond the requested precision. I think it would be very hard to guarantee that there are never any garbage digits after calculations (for BigNumber, number, Fraction, etc.). And it may slow down computations if we do a cleanup-garbage-digits action after every individual computation. The pragmatic solution to this problem is to use format after all calculations are done.
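For example, a minimal sketch of that workflow (math.format and its precision option are existing mathjs API):

```js
const v = math.evaluate("zeta(3)")           // BigNumber, possibly with junk trailing digits
const s = math.format(v, { precision: 320 }) // round to the digits you actually trust
```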
> I actually forget why or where mathjs limits these further than the underlying decimal.js library does. @josdejong, is that documented anywhere?
Maybe you mean cases like here:
https://github.com/josdejong/mathjs/blob/25da3c9f00eebb0b4a8df64c7e114156dcf82645/src/function/probability/gamma.js#L103
Or here:
https://github.com/josdejong/mathjs/blob/25da3c9f00eebb0b4a8df64c7e114156dcf82645/src/function/arithmetic/nthRoot.js#L131
The reason for doing calculations with higher precision is that when you have, say, two irrational numbers and represent them with limited precision, they carry an error. When applying calculations to these numbers, the error often grows, and you lose precision. Over many iterations the precision can decrease considerably. Is that what you mean?
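Here is a minimal illustration with decimal.js directly (the digit counts are just an example, not what mathjs uses):

```js
import Decimal from 'decimal.js'

// With 10 working digits, the representation error in 1/3 propagates:
Decimal.set({ precision: 10 })
new Decimal(1).div(3).times(3).toString()  // '0.9999999999', not '1'

// With a few guard digits, rounding the final result back to 10 digits hides it:
Decimal.set({ precision: 14 })
new Decimal(1).div(3).times(3).toSignificantDigits(10).toString()  // '1'
```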
No, I mean people write about 998 digits being the max allowed precision in mathjs. The decimal.js library, so far as I can tell, imposes no such limit. Why does mathjs? (I understand why in many cases intermediate calculations have to be done to greater precision.)
> And it may slow down computations if we do a cleanup-garbage-digits action after every individual computation.
The cost of truncating to the known valid digits plus the first questionable digit at the end of computing zeta would be utterly insignificant compared to the cost of computing zeta itself, and it would prevent the function from returning meaningless information alongside correct information. So I am just asking whether we should do this truncation in zeta, not whether we should embark on a sweeping program of eliminating all spurious digits in all return values of all functions. Personally, I would recommend truncating zeta to the digits actually calculated; see the sketch below.
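A sketch of what that truncation could look like (the helper name and the way n is obtained are illustrative, not actual zeta.js code; toSignificantDigits is a real decimal.js method):

```js
// n = the number of digits the series was actually computed to,
// derived from config.relTol; keep those plus one questionable digit.
function truncateZetaResult (result, n) {
  return result.toSignificantDigits(n + 1)
}
```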
Just truncating in zeta is fine with me!
There are other numerical inaccuracies with zeta; see #3551. I think the solution goes through improving the gamma and/or lgamma functions, for which it seems we need the Bernoulli numbers, hence that PR.
Can the 320 limit be related to JavaScript numbers not supporting much smaller values? Starting at 1e-324, the number is parsed as 0. So we can't configure a relTol and absTol smaller than that.
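This is easy to confirm (the smallest positive double is about 5e-324):

```js
Number("1e-323") // 1e-323, a subnormal double
Number("1e-324") // 0 -- below half the smallest subnormal, so it rounds to zero
```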
> Can the 320 limit be related to JavaScript numbers not supporting much smaller values? Starting at 1e-324, the number is parsed as 0. So we can't configure a relTol and absTol smaller than that.
Ah right, we are encoding our tolerances as doubles, and 1e-320 is near the bottom of the double range (the smallest positive double is about 5e-324). And the roughly 1000 limit comes from the fact that π in decimal.js is recorded to a certain number of digits (although the documentation there explains that the sum of the precision of the input and output of trig functions must be nowhere more than 1000, which is a bit different).
I now recall we have discussed elsewhere changing the tolerances to a logarithmic format to avoid the doubles range limit. We should also take better care with precisions -- config.precision should be just a default, and if a decimal.js function like sin cuts off precision, we should return BigNumbers with different precisions. I will see if I can find an issue or discussion for this and file one if not.
👍
Yeah that explanation makes sense.
Maybe we can add an optional notation { relTol: { digits: 320 } } alongside the existing relTol: 1e-320.
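Under that proposal both forms would configure the same tolerance (hypothetical; mathjs currently only accepts the numeric form):

```js
math.config({ relTol: 1e-320 })          // existing form, limited by the double range
math.config({ relTol: { digits: 320 } }) // proposed form: the exponent as a plain integer
```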
Ok, there already was #1385 for the precision limitation, and I just added #3556 for the tolerance issue. I didn't know what to put for the new parameters, so if you want to go with allowing an options object as the value of the existing parameters, go ahead and specify that in #3556. My only comment is that since tolerances other than 1e-NN don't really work with the current code, it might actually be easier/more sensible to transition to new tolerance parameters that are just positive integers (the "NN" part).
Just a remark, but why aren't relTol and absTol computed directly from precision when you specify a precision? For example:

```js
{ relTol: { digits: precision - 4 } }
{ absTol: { digits: precision - 1 } }
```
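As a hypothetical helper along those lines (the { digits: ... } notation and the -4/-1 offsets come from the suggestions above; none of this exists in mathjs today):

```js
// Derive the comparison tolerances from the configured precision
// instead of asking the user to keep them consistent by hand.
function configWithDerivedTolerances (precision) {
  return {
    number: 'BigNumber',
    precision,
    relTol: { digits: precision - 4 },
    absTol: { digits: precision - 1 }
  }
}

math.config(configWithDerivedTolerances(998))
```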
Another example of a precision problem:
```js
math.config({
  number: "BigNumber",
  precision: 998,
  relTol: 1e-320,
  absTol: 1e-323
});
math.tan(math.e)
// Uncaught Error: [DecimalError] Precision limit exceeded
```

Then, just by switching the precision value in the same math instance after the first error has occurred, you get errors like this:
```js
math.config({
  number: "BigNumber",
  precision: 509,
  relTol: 1e-320,
  absTol: 1e-323
});
// Uncaught Error: [DecimalError] Precision limit exceeded
```
> Another example of a precision problem: ...
Should this example be moved to #3539 or a separate issue opened for it? It is off-topic for this issue (but the report is much appreciated overall!).
Thanks @nycos62, can you open a separate issue for this so we don't forget it in the discussion?
(Sorry, I don't want to pollute this discussion, but I'm quickly colliding with precision limitations because I'm trying to get the maximum number of continued fraction coefficients from real numbers. The default behavior of PARI/GP is 1000 coefficients, for instance, and I was dreaming of getting that capability from math.js, which is the most convenient framework I know.)