Vaibhav Kumar Dixit

Results 156 comments of Vaibhav Kumar Dixit

This came up when a user tried to use `ForwardDiff.derivative` inside the loss function with GalacticOptim; Zygote fails in that case. On further discussion on slack and with @mcabbott's help...
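For context, a minimal sketch of the kind of pattern that triggers this (the `inner` and `loss` names here are hypothetical, just to illustrate nesting `ForwardDiff.derivative` inside a loss that Zygote would then try to differentiate):

```julia
using ForwardDiff

# an inner scalar function whose derivative appears in the loss
inner(x) = sin(x)

# loss that calls ForwardDiff.derivative internally;
# differentiating `loss` again with Zygote is the problematic case
loss(θ) = ForwardDiff.derivative(inner, θ[1])^2
```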

@schrimpf do you plan to continue this?

So, internally we create the `fg!` and use that with Optim, so even if you pass them separately they get used like that https://github.com/SciML/GalacticOptim.jl/blob/42ce4320e3c2880ae49263bdd1c02da6f10b1233/src/solve/optim.jl#L91. Do you think having a...
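For reference, a minimal sketch of Optim.jl's fused objective-and-gradient interface that gets built internally (the quadratic objective here is just a placeholder):

```julia
using Optim

# fg! computes the objective and/or the in-place gradient on demand
function fg!(F, G, x)
    if G !== nothing
        G .= 2 .* x        # gradient of sum(abs2, x)
    end
    if F !== nothing
        return sum(abs2, x) # objective value
    end
    return nothing
end

res = Optim.optimize(Optim.only_fg!(fg!), [1.0, 2.0], LBFGS())
```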

I agree, this approach sounds good. I have also been thinking about this and SciML/Optimization.jl#191 - we should extend the thought to allow passing the choice of AD backend for...
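A minimal sketch of how the AD backend choice is passed through `OptimizationFunction` (using the Rosenbrock function as a stand-in objective):

```julia
using Optimization, OptimizationOptimJL, ForwardDiff

rosenbrock(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2

# the second argument selects the AD backend used to build gradients
optf = OptimizationFunction(rosenbrock, Optimization.AutoForwardDiff())
prob = OptimizationProblem(optf, [0.0, 0.0], [1.0, 100.0])
sol = solve(prob, BFGS())
```

Swapping `AutoForwardDiff()` for e.g. `AutoZygote()` changes the backend without touching the objective definition.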

This should work now https://github.com/SciML/Optimization.jl/blob/c62a2d71e80960fa6ea9e49ffbb4fa140e72a7c3/test/ADtests.jl#L251-L259 since SciML/Optimization.jl#342

I couldn't understand your optimisation problem. Did you intend to write:
```julia
model(θ, p) = θ[1]*p[1] + θ[2]*p[2]
x0 = [1, 1e-6] # actual parameters
p = [1, 1e6]
data = model(x0, p)
l(θ, p) = (data - model(θ, p))^2
optfunc(θ, args...)...
```

The issue is that you haven't specified the lower and upper bounds in `OptimizationProblem`, and those are required for BBO to work. I agree the error is not very informative...
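A minimal sketch of passing the required `lb`/`ub` keywords for a BBO solve (objective and bounds here are placeholders):

```julia
using Optimization, OptimizationBBO

f(u, p) = (p[1] - u[1])^2 + p[2] * (u[2] - u[1]^2)^2

# BBO requires finite box constraints via lb and ub
prob = OptimizationProblem(f, [0.0, 0.0], [1.0, 100.0];
                           lb = [-1.0, -1.0], ub = [1.5, 1.5])
sol = solve(prob, BBO_adaptive_de_rand_1_bin_radiuslimited())
```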

I think it looks good now. I will go over it once more later today and merge; the test failures are unrelated.

It won't work well out of the box yet, as there are still compat issues with some Nonconvex packages. But in case the aim is to use Ipopt you could just...