Wrap HyperHessians.jl as an AD backend
Though it only focuses on Hessians, combining a gradient call from another package with a Hessian from https://github.com/KristofferC/HyperHessians.jl would be interesting for second-order methods.
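A rough sketch of the combination, assuming HyperHessians exposes a `hessian(f, x)` entry point as in its README (the exact API should be checked against the package):

```julia
using ForwardDiff
import HyperHessians

rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
x = [0.5, 0.5]

g = ForwardDiff.gradient(rosenbrock, x)   # gradient from an existing backend
H = HyperHessians.hessian(rosenbrock, x)  # Hessian via hyper-dual numbers

# a second-order (Newton) step would then be
step = -(H \ g)
```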
Were there any more thoughts on how this might best be implemented? I was thinking about how to fit this into the interface, since I need to use some of these methods. Maybe as an extra field, e.g. in `AutoForwardDiff`, to allow for a different Hessian method (e.g. `hyper::Bool`), though that might be a bit awkward in the code. Perhaps once your comment here https://github.com/SciML/Optimization.jl/issues/314#issuecomment-1177048027 (and Chris' following comment) is resolved down the line, this would be very convenient to add.
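Purely as a hypothetical sketch of that extra-field idea (neither the `hyper` field nor this constructor exists in Optimization.jl; the names are made up for illustration):

```julia
# Hypothetical: AutoForwardDiff extended with a flag selecting the Hessian backend.
abstract type AbstractADType end  # stand-in for Optimization.jl's own abstract type

struct AutoForwardDiff{CS} <: AbstractADType
    hyper::Bool  # true => Hessians via HyperHessians.jl; false => plain ForwardDiff
end
AutoForwardDiff(; chunksize = nothing, hyper = false) = AutoForwardDiff{chunksize}(hyper)

# instantiate_function could then branch on `adtype.hyper` when building the
# Hessian closure, leaving the gradient path untouched.
```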
Since it has limitations, it would be nice to have it as an option in `AutoForwardDiff`, like `fasthes = true`. We'd also have to set up the manual seeding for sparsity.
> manual seeding for sparsity

What do you mean by this?
The way the sparse coloring is done; see https://book.sciml.ai/notes/09/.
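Roughly: columns that never share a nonzero row can reuse one seed. A minimal sketch of that manual seeding, shown for a small sparse Jacobian for simplicity (a sparse Hessian via HyperHessians would need the analogous seeding of its hyper-dual partials); `matrix_colors` is SparseDiffTools' coloring routine, the rest is plain ForwardDiff:

```julia
using SparseArrays, SparseDiffTools, ForwardDiff

f(x) = [x[1]^2 + x[2], x[2]^2 + x[3], x[3]^2]
sparsity = sparse([1 1 0; 0 1 1; 0 0 1])  # known Jacobian sparsity pattern
colors = matrix_colors(sparsity)          # e.g. [1, 2, 1]: columns 1 and 3 share a seed

x = [1.0, 2.0, 3.0]
nc = maximum(colors)

# Manual seeding: input j carries a 1.0 in the partials slot of its color.
xdual = [ForwardDiff.Dual(x[j], ntuple(c -> c == colors[j] ? 1.0 : 0.0, nc))
         for j in eachindex(x)]
ydual = f(xdual)

# Decompress: nonzero J[i, j] sits in partials slot colors[j] of output i.
J = zeros(3, 3)
for i in 1:3, j in 1:3
    if sparsity[i, j] != 0
        J[i, j] = ForwardDiff.partials(ydual[i], colors[j])
    end
end
```

With three columns but only two colors, the function is evaluated with two partials slots instead of three; on realistically sparse patterns the compression is much larger, which is the point of the coloring.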