ForwardDiff.jl
missing method big(::Dual)
It seems like this should work. All that is needed is:
Base.big(x::Dual{T}) where {T} = Dual{T}(big(x.value), big(x.partials))
Base.big(p::Partials) = Partials(big.(p.values))
(Mentioned on discourse.)
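For concreteness, here is a minimal sketch of the proposal in use. The two method definitions are the ones quoted above; the example Dual uses ForwardDiff's default Nothing tag, and the constructor calls are purely illustrative:

using ForwardDiff: Dual, Partials

# Proposed methods from this issue (not currently in ForwardDiff):
Base.big(x::Dual{T}) where {T} = Dual{T}(big(x.value), big(x.partials))
Base.big(p::Partials) = Partials(big.(p.values))

d = Dual(1.5, 2.0, 3.0)    # Dual{Nothing,Float64,2}
b = big(d)                 # Dual{Nothing,BigFloat,2}
typeof(b.value)            # BigFloat
typeof(b.partials[1])      # BigFloat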
Hi, I'd like to take this up. I have some queries, though. For example, Base.float(x::Dual) is defined here as follows:
Base.float(d::Dual{T,V,N}) where {T,V,N} = convert(Dual{T,promote_type(V, Float16),N}, d)
Should I stick to the same format for Base.big(x::Dual) too?
Thanks :)
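For reference, following that format literally would mean a convert-based definition along these lines (a sketch only, relying on Base's big applied to a type, e.g. big(Float64) == BigFloat):

Base.big(d::Dual{T,V,N}) where {T,V,N} = convert(Dual{T,big(V),N}, d)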
I think the float function is wrong (see #535), so I wouldn't follow that model. Calling big on the values seems like the best way to make it consistent with Base.
I ran into this in code that tried to convert to BigFloat. So I'm wondering whether to do big(float(x)) or float(big(x))?
big(float(x)) should generally be faster, assuming float(x) doesn't lose precision (e.g. x is an integer ≤ maxintfloat).
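A plain-number illustration of that caveat, with nothing Dual-specific in it:

x = 2^53 + 1          # smallest positive Int not exactly representable as a Float64
big(float(x))         # 9.007199254740992e15: precision was already lost in float(x)
float(big(x))         # 9.007199254740993e15: exact, since x was widened first
maxintfloat(Float64)  # 9.007199254740992e15, i.e. 2^53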
@stevengj is there a reason that floatmin and floatmax are not implemented for Dual numbers? I just ran into that problem and found this issue.
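In case it helps, one possible shape for those methods, assuming they should forward to the value type V and carry zero partials (a hypothetical sketch, not a tested patch):

using ForwardDiff: Dual, Partials

# Hypothetical: floatmin/floatmax of a Dual type delegate to the value type V,
# returning a Dual so the result type matches Base's floatmin(T)::T convention.
Base.floatmin(::Type{Dual{T,V,N}}) where {T,V,N} = Dual{T}(floatmin(V), zero(Partials{N,V}))
Base.floatmax(::Type{Dual{T,V,N}}) where {T,V,N} = Dual{T}(floatmax(V), zero(Partials{N,V}))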