Dominique
> The code was automatically generated from an AMPL model. More precisely, it was automatically generated from an AMPL model that was itself automatically generated from a SIF model.
Sorry, you’re right. However:
```julia
julia> jac_residual(rat42_nls, x)
9×3 SparseMatrixCSC{Float64, Int64} with 27 stored entries:
 -5.03261e-50    5.02995e-48   -4.52696e-47
 -5.14752e-77    5.1448e-75    -7.20271e-74
 -8.42023e-115   8.41577e-113  -1.76731e-111
 -1.37737e-152   1.37664e-150  -3.85459e-149
 -3.68555e-228  -0.0           -0.0
 ...
```
Me too. That's very weird modeling. For info, with JuMP:
```julia
julia> model = MathOptNLPModel(rat42())

julia> obj(model, x)
9111.7101

julia> grad(model, x)
3-element Vector{Float64}:
 -4.494124501515864e-49
  4.491747286612452e-47
 -4.042572557951207e-46
```
The JuMP/NLS...
Actually, converting the array `y` to type `T` produces the same errors.
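To illustrate the point (a minimal sketch with a hypothetical residual, not the actual rat42 code; only the `T.(y)` conversion matters here):
```julia
using ForwardDiff

y = [1.0, 2.0]  # hypothetical data vector

# Promoting y to the AD element type T inside the function changes nothing:
# exp(1.6 + 79 * 12) still overflows to Inf during differentiation.
f(x::AbstractVector{T}) where {T} = sum(abs2, T.(y) .- exp(1.6 + 79 * x[1]))

ForwardDiff.gradient(f, [12.0])  # non-finite entries, exactly as before
```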
ReverseDiff is only marginally better:
```julia
julia> grad(model, xx)
3-element Vector{Float64}:
 -4.494124501515864e-49
 NaN
 NaN
```
And, finally,
```julia
julia> model = rat42(; gradient_backend = ADNLPModels.ZygoteADGradient)

julia> grad(model, xx)
3-element Vector{Float64}:
 -4.494124501515864e-49
 NaN
 NaN
```
I suppose the problem here is simply that we need problem scaling. The term `exp(1.6 + 12 * 79)` overflows. JuMP must be scaling the problem.
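Indeed, `exp` overflows in `Float64` for any argument above `log(floatmax(Float64)) ≈ 709.78`:
```julia
julia> exp(1.6 + 12 * 79)  # exp(949.6) exceeds floatmax(Float64) ≈ 1.8e308
Inf

julia> log(floatmax(Float64))  # largest argument exp can take without overflowing
709.782712893384
```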
Why would they have different arithmetic than Julia itself?
It's not a problem. Percival is just alerting you that there is no need to use an augmented-Lagrangian method for a bound-constrained problem. You could have just called Tron directly,...
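For reference, a minimal sketch of calling Tron directly (assuming JSOSolvers.jl's `tron` and a toy bound-constrained model, not the rat42 problem itself):
```julia
using ADNLPModels, JSOSolvers

# Toy bound-constrained model for illustration (hypothetical, not rat42):
# minimize (x1 - 1)^2 + (x2 - 2)^2 subject to 0 <= x <= 3.
model = ADNLPModel(x -> (x[1] - 1)^2 + (x[2] - 2)^2, zeros(2), zeros(2), 3 * ones(2))

stats = tron(model)  # trust-region solver for bound-constrained problems
println(stats.status, " ", stats.solution)
```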
They are inspired by the AMPL functions of the same name, and should help with problem scaling.