Make `-(::BigFloat)` inherit precision of input
Fixes #52100
Currently, -(::BigFloat) returns a value with the default BigFloat precision. With this commit, -(x::BigFloat) is altered to return a value with the same precision as the input.
Example behavior in master branch:
julia> x = BigFloat(pi, precision=32);
julia> (x,-x)
(3.1415926535, -3.14159265346825122833251953125)
julia> precision(x)
32
julia> precision(-x)
256
Example behavior in this commit:
julia> x = BigFloat(pi, precision=32);
julia> (x,-x)
(3.1415926535, -3.1415926535)
julia> precision(x)
32
julia> precision(-x)
32
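The proposed behavior can be sketched without touching Base internals; assuming only the public setprecision API, a stand-in function (neg_keeping_precision is a hypothetical name for illustration, not the actual patch, which edits Base's -(::BigFloat) directly) looks like:

```julia
# Sketch of the proposed behavior using only public API: allocate the
# result at the input's precision instead of the global default.
function neg_keeping_precision(x::BigFloat)
    setprecision(BigFloat, precision(x)) do
        -x   # the result BigFloat is allocated at precision(x)
    end
end

x = BigFloat(pi, precision=32)
precision(neg_keeping_precision(x))  # 32, regardless of the global default
```

Since negation never needs extra bits, the result is bit-identical to the input apart from the sign.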
I agree that this makes sense, but don't think we should do it since it is unlike how all other bigfloat operations work.
I think we should do this for all operations that accept a single BigFloat, but I don't know much about this area.
As far as I can tell, -(::BigFloat) is the only single-input operation where the user might expect precision to be carried over. The closest thing to another example I could find is modf(::BigFloat), where a user might expect the fractional part to have the same number of digits, but that precision would still have to be calculated.
Here is the behavior of modf in base Julia:
julia> x = BigFloat(pi, precision=32)
3.1415926535
julia> modf(x)
(0.14159265346825122833251953125, 3.0)
We could try the same fix as in -(::BigFloat):
function modf(x::BigFloat)
zint = BigFloat()
zfloat = BigFloat(; precision=_precision(x)) # Function was changed here
ccall((:mpfr_modf, libmpfr), Int32, (Ref{BigFloat}, Ref{BigFloat}, Ref{BigFloat}, MPFRRoundingMode), zint, zfloat, x, ROUNDING_MODE[])
return (zfloat, zint)
end
But using input precision doesn't resolve the issue:
julia> x = BigFloat(pi, precision=32)
3.1415926535
julia> modf(x)
(0.14159265347, 3.0)
For all the other functions, the output precision could possibly be based on the input, but it would have to be calculated per function, so falling back to the default precision is reasonable in each case. That leaves -(::BigFloat) as the only function where the user would expect the output precision to exactly match the input's, so it might make sense to change its behavior alone.
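To make "would have to be calculated" concrete: the number of bits needed to represent the exact fractional part of a p-bit input is generally neither p nor the default, but depends on the exponents involved. A rough sketch, using the 32-bit pi from above (bits_needed is an illustrative name, and the formula counts up to the input's nominal last bit, an upper bound on what is actually required):

```julia
# x's nominal trailing bit sits at 2^(exponent(x) - (precision(x) - 1));
# the fractional part's leading bit sits at 2^exponent(x - trunc(x)).
# The span between the two is how many bits an exact modf would need.
x = BigFloat(pi, precision=32)
frac = x - trunc(x)                 # exact at the 256-bit default
bits_needed = exponent(frac) - (exponent(x) - (precision(x) - 1)) + 1
# here: -3 - (1 - 31) + 1 == 28, so neither 32 nor the default is "right"
```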
Triage thinks this is too breaking. We could potentially do something else with bigfloat precision like using a ScopedValue for it, though.
What makes this breaking? Doesn't this just happen to be one of the only cases where preserving the input precision is certain not to lose any bits?
What makes this breaking?
I don't know about the triage decision, but the precision is part of the BigFloat interface; a user might call precision on a BigFloat value, and the inconsistency could cause a bug.
We could potentially do something else with bigfloat precision like using a ScopedValue for it, though.
I think that is a must-have! My first thought when I heard about ScopedValue was: finally setprecision(BigFloat) do ... end becomes thread-safe!
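For context, the do-block form referred to here is the existing public API; the thread-safety concern is that, at the time of this discussion, it worked by mutating a global default rather than a task-scoped one. A minimal illustration of the API itself:

```julia
# setprecision(f, BigFloat, prec) runs f with a temporary default
# precision. Backing that default with a ScopedValue (instead of
# process-global state) is what would make the pattern safe across
# concurrently running tasks.
result = setprecision(BigFloat, 64) do
    BigFloat(pi)   # allocated at 64 bits inside the block
end
precision(result)  # 64
```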
Triage thinks this is too breaking.
I accept the triage decision of course, but without conviction.
At least show for negative BigFloat values should reflect their stored precision, rather than converting to the current default precision. Implementing that requires quite tricky workarounds, though.
Triage thinks this is too breaking.
I think precision(-(-x)) != precision(x) and -(-x) == x are jointly embarrassing.
abs would also be a similar candidate; in fact, its behavior is derived from unary -. Is losing precision more compelling?
julia> x = nextfloat(BigFloat(-1, precision=512))
-0.999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999925
julia> -x
1.0
julia> abs(x)
1.0
Perhaps even more tellingly, abs does preserve precision in some cases:
julia> abs(prevfloat(BigFloat(1, precision=512)))
0.999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999925
Always keep precision > never keep precision > sometimes keep precision
Folks at triage did say "people don't want a better answer, they want the same answer", which is reasonable. I think this (and changing all ops to preserve precision) is a minor change, but nanosoldier is not going to give great results for that kind of breakage.
Still, @nanosoldier runtests()
ldexp and frexp and significand are good candidates too
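These are good candidates because they only move the exponent: the significand's bit pattern is untouched, so preserving the input precision could never lose anything. A quick check of that exactness (the identities below hold at any output precision at least as large as the input's):

```julia
# frexp/ldexp/significand manipulate only the exponent; the mantissa
# bits are carried over exactly.
x = BigFloat(pi, precision=32)
f, e = frexp(x)                           # f in [0.5, 1), x == f * 2^e
ldexp(f, e) == x                          # true: the round trip is exact
significand(x) == ldexp(x, -exponent(x))  # true: same bits, shifted exponent
```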
The package evaluation job you requested has completed - possible new issues were detected. The full report is available.
Nanosoldier results are garbage because I triggered nanosoldier before @mbauman fixed the bug reported by @KlausC in https://github.com/JuliaLang/julia/pull/52288#pullrequestreview-1910715955
@nanosoldier runtests()
The package evaluation job you requested has completed - possible new issues were detected. The full report is available.
Nanosoldier is clean. As I said previously, that doesn't mean much in this case. But it is a thing.
Might be worth asking triage for a second look? :)
We have a bit more context now — this would be a minor change and it's not necessarily the bug fix for the printing issue. There are currently two functions that preserve precision as far as I've found: nextfloat and prevfloat.
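For reference, the two precision-preserving functions mentioned can be checked directly:

```julia
# nextfloat/prevfloat operate on a copy of the input at the input's
# own precision, so the result keeps it.
x = BigFloat(pi, precision=32)
precision(nextfloat(x))  # 32
precision(prevfloat(x))  # 32
```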
I think that significand, frexp, and ldexp should also be added to the list. The common thread there is that they are all pretty close to the floating-point representation itself.
Then the interesting thing with unary minus is that abs(x) is effectively x < 0 ? -x : x.
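That branch structure is exactly why abs only sometimes preserves precision. A hedged sketch (my_abs is an illustrative stand-in, assuming abs lowers to a signbit branch as described):

```julia
# Positive branch: returns the argument itself, precision intact.
# Negative branch: goes through unary minus, which (before this change)
# allocates its result at the default precision.
my_abs(a::BigFloat) = signbit(a) ? -a : a

x = prevfloat(BigFloat(1, precision=512))
my_abs(x) === x       # positive branch hands back the same 512-bit value
precision(my_abs(x))  # 512
# my_abs(-x) would instead round to the current default via unary -
```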
These should probably all happen within the same release — and would require NEWS. But I think they're all well-motivated.
I concur. I think that before we discuss it on triage we should get nanosoldier results for making all those changes (together in one PR), just so we know for sure we're not totally breaking things.
The "bugfix" label is technicality accurate in but misleading +1 for removing it.
I definitely agree on significand, frexp, and ldexp. abs and - are on the edge to me, but I definitely agree that the proposed behavior is more useful.
Ah, copysign is another abs-like generic function that relies on unary minus:
julia> copysign(prevfloat(BigFloat(1.0; precision=512)), 1)
0.999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999925
julia> copysign(prevfloat(BigFloat(1.0; precision=512)), -1)
-1.0
The same is true for flipsign.
As -a = copysign(a, -a) and abs(a) = flipsign(a, a), the cases of - and abs are a logical consequence.
Bump