DoubleDouble.jl
Extended precision arithmetic for Julia (deprecated)
This makes it possible to get updates for GitHub Actions automatically. I have used this for my own packages, the [Trixi.jl framework](https://github.com/trixi-framework), and the [SciML organization](https://github.com/SciML). After merging this, you could also...
You're receiving this pull request because the now-deprecated [Julia TagBot GitHub App](https://github.com/apps/julia-tagbot) is installed for this repository. This pull request installs [TagBot as a GitHub Action](https://github.com/marketplace/actions/julia-tagbot). If this PR does...
Ref: https://discourse.julialang.org/t/package-compatibility-caps/15301
Following the lively [discussion](https://discourse.julialang.org/t/ann-higherprecision/6956) on Discourse, what do you think is now the best way forward? I think we all have the same goal: getting a solid and fast...
Are there any plans to make this a drop-in replacement for Float64 or BigFloat? At this stage that is impossible because most basic math functions are missing, like sin,...
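A minimal sketch of how one such missing function could be patched in by round-tripping through BigFloat. The `hi`/`lo` fields and the two-argument `Double(hi, lo)` constructor are assumed from the excerpts further down this page, and `double_sin` is a hypothetical name, not part of the package:

```julia
using DoubleDouble

# Hypothetical helper: evaluate sin at full precision in BigFloat,
# then split the result back into non-overlapping hi/lo parts.
function double_sin(d::Double{Float64})
    b = sin(big(d.hi) + big(d.lo))  # exact sum, high-precision sin
    hi = Float64(b)                 # leading component (round to nearest)
    lo = Float64(b - hi)            # residual rounding error
    return Double(hi, lo)           # assumed two-argument constructor
end
```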
I'm trying to use this library but I get the following:

```julia
julia> Single(1.0) - Single(0.0)
ERROR: - not defined for DoubleDouble.Single{Float64}
Stacktrace:
 [1] -(::DoubleDouble.Single{Float64}, ::DoubleDouble.Single{Float64}) at ./promotion.jl:337
```
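A possible workaround, as a sketch only: forward `Single` subtraction to `Double` arithmetic. This assumes `Single` stores its value in a `hi` field, that `Double(hi, lo)` is a valid constructor (both suggested by the excerpts below), and that `-` is defined for `Double`:

```julia
using DoubleDouble

# Hypothetical workaround: lift each Single to a Double with a zero
# low part, then subtract at Double precision.
Base.:-(a::Single{T}, b::Single{T}) where {T} =
    Double(a.hi, zero(T)) - Double(b.hi, zero(T))
```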
Should fix #30.
The main virtue of this PR is that it adds exhaustive tests of `Float16`. These might be useful even after switching to an fma-based algorithm. They take too long to...
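A rough sketch of the exhaustive-testing idea, not the PR's actual test code: enumerate every finite `Float16` bit pattern and check `Single` multiplication against the exact product in the wider `Float32` type (a product of two `Float16` values always fits exactly in `Float32`). The name `count_mul_mismatches` is hypothetical:

```julia
using DoubleDouble

# Count Float16 inputs x for which Single(x)*Single(y) disagrees
# with the exact product computed in Float32.
function count_mul_mismatches(y::Float16)
    bad = 0
    for bits in UInt16(0):typemax(UInt16)
        x = reinterpret(Float16, bits)
        isfinite(x) || continue
        d = Single(x) * Single(y)
        exact = widen(x) * widen(y)            # exact in Float32
        bad += (widen(d.hi) + widen(d.lo) != exact)
    end
    return bad
end

count_mul_mismatches(Float16(6.0e-8))  # the subnormal from the next excerpt
```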
I was surprised to discover this:

```julia
x = Float16(0.992)
y = Float16(6.0e-8) # subnormal

julia> d = Single(x)*Single(y)
Double(6.0e-8, 0.0) - value: 5.9604644775390625e-08

julia> bits(widen(d.hi) + widen(d.lo))
"00110011100000000000000000000000"

julia>...
```
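For context, the fma-based algorithm mentioned in the PR above would compute the product roughly like the sketch below; it avoids Dekker-style splitting, but the error term still underflows to zero once the product lands at the bottom of the subnormal range, which is what this excerpt runs into. This is an illustrative sketch under the assumption that `fma` is available for the element type, not the package's code:

```julia
# Sketch of an fma-based two-product: hi + lo == x*y exactly,
# provided neither the product nor its error term underflows.
function two_prod(x::T, y::T) where {T<:AbstractFloat}
    hi = x * y
    lo = fma(x, y, -hi)  # exact rounding error of x*y
    return hi, lo
end

two_prod(Float16(0.992), Float16(6.0e-8))  # lo underflows to zero here
```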