Kristoffer Carlsson
What is special about Float64? Isn't what you're saying that for any type `T`, `T(::Dual)` should "broadcast" `T` onto the real and dual parts? This feels kind of similar to...
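For concreteness, a minimal sketch of the "broadcast `T` onto both parts" idea (the name `retype_dual` is mine, not anything from the PR):

```jl
using ForwardDiff: Dual, value, partials

# Hypothetical helper: convert both the value and every partial to `T`,
# keeping the same tag. Just the idea, not the PR's implementation.
function retype_dual(::Type{T}, d::Dual{Tag,V,N}) where {T<:Real,Tag,V,N}
    Dual{Tag}(T(value(d)), ntuple(i -> T(partials(d, i)), N)...)
end

# e.g. retype_dual(Float32, d) turns a Dual with Float64 value/partials
# into one with Float32 value/partials.
```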
> I think the definition of Float64 in this PR is reasonable for operator overloading AD (and might be the right thing to do).

> A common use case for...
Feel free to take inspiration from https://github.com/JuliaDiff/ForwardDiff.jl/pull/165.
> the current macro does not support computing the function value and gradient in a single pass.

Yeah, the API in that PR should be changed to allow for this...
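For reference, the existing non-macro API can already give you the value and the gradient from a single forward pass by storing both in a DiffResults object; presumably the macro would end up wrapping something like this (example function made up for illustration):

```jl
using ForwardDiff, DiffResults

f(x) = sum(sin, x)   # placeholder function
x = rand(5)

result = DiffResults.GradientResult(x)
result = ForwardDiff.gradient!(result, f, x)   # one forward pass

DiffResults.value(result)     # f(x)
DiffResults.gradient(result)  # ∇f(x)
```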
The gist there is approximately the same as in my old PR (https://github.com/JuliaDiff/ForwardDiff.jl/pull/165/files#diff-5632cec511f57cd4be617f25c09846cde440b8fa54d35abcfd546952ab4f25b2R116-R131).
FWIW, this increases the API surface of ForwardDiff considerably. Dual numbers are not even mentioned in the manual (except in the implementation details).
I also think this makes sense. I am a little bit worried about the implications it will have, though; perhaps there isn't a better way than to go ahead and...
> but uses 2GB (!) in the compilation process...

What would a reasonable number for Julia to use to compile this be, according to you? You can try to lower the...
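(The suggestion above is cut off; one knob that does affect how much code the compiler has to specialize is the chunk size, which the truncated sentence may or may not be referring to. A sketch, with a made-up function:)

```jl
using ForwardDiff

f(x) = sum(abs2, x)   # placeholder for the actual function being differentiated
x = rand(100)

# A smaller chunk means less code to specialize per gradient call,
# at the cost of more passes over the input.
cfg = ForwardDiff.GradientConfig(f, x, ForwardDiff.Chunk{4}())
ForwardDiff.gradient(f, x, cfg)
```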
Edit: This is wrong.

The determinant of

```
x 0 0
0 y 0
0 a z
```

is always `xyz`. So the derivative should be

```
yz 0 0
...
```
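A quick way to sanity-check this with plain ForwardDiff (the numbers are just example values I picked for `x`, `y`, `z`, `a`):

```jl
using ForwardDiff, LinearAlgebra

# x = 2, y = 3, z = 7, a = 5 (arbitrary example values)
A = [2.0 0.0 0.0;
     0.0 3.0 0.0;
     0.0 5.0 7.0]

det(A)                        # 2 * 3 * 7 = 42
ForwardDiff.gradient(det, A)  # entrywise derivative of det with respect to A
```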
With Tensors.jl I get:

```jl
> 3×3 Tensors.Tensor{2,3,Float64,9}:
 1.0  0.0  0.0
 0.0  1.0  0.0
 0.0  1.0  1.0

> 3×3 Tensors.Tensor{2,3,Float64,9}:
 1.0  0.0  0.0
 0.0  1.0  -1.0
 0.0  0.0   1.0
```
...
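If I had to guess at the call that produced that output, it is something like the following (the input tensor is inferred from the first printed tensor, so treat it as a guess):

```jl
using Tensors

# Column-major components of [1 0 0; 0 1 0; 0 1 1]
A = Tensor{2,3}((1.0, 0.0, 0.0,
                 0.0, 1.0, 1.0,
                 0.0, 0.0, 1.0))

A                  # the input tensor
gradient(det, A)   # Tensors.jl's dual-number based derivative
```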