JuMP.jl
DNMY: experimental testing for fast resolves of NLP
This is NOT safe to merge because it doesn't update the expression graphs, and so will break AmplNLWriter etc.
Part of #1185
This needs some changes to Ipopt to see any potential benefits.
Codecov Report
Base: 97.62% // Head: 97.62% // No change to project coverage :thumbsup:
Coverage data is based on head (e5b5301) compared to base (59926a9). Patch coverage: 100.00% of modified lines in pull request are covered.
Additional details and impacted files
@@ Coverage Diff @@
## master #3018 +/- ##
=======================================
Coverage 97.62% 97.62%
=======================================
Files 32 32
Lines 4297 4297
=======================================
Hits 4195 4195
Misses 102 102
Impacted Files | Coverage Δ
---|---
src/optimizer_interface.jl | 96.07% <100.00%> (ø)
So the problem is that some AD backends might not update their expressions (or AD calls) if a parameter value is updated after initialize:
julia> model = Model()
A JuMP Model
Feasibility problem with:
Variables: 0
Model mode: AUTOMATIC
CachingOptimizer state: NO_OPTIMIZER
Solver name: No optimizer attached.
julia> @variable(model, x)
x
julia> @NLparameter(model, p == 2)
p == 2.0
julia> @NLexpression(model, ex, p)
subexpression[1]: p
julia> @NLobjective(model, Min, x^ex)
julia> evaluator = NLPEvaluator(model)
Nonlinear.Evaluator with available features:
* :Grad
* :Jac
* :JacVec
* :Hess
* :HessVec
* :ExprGraph
julia> MOI.initialize(evaluator, [:ExprGraph])
julia> MOI.objective_expr(evaluator)
:(x[MathOptInterface.VariableIndex(1)] ^ 2.0)
julia> set_value(p, 3)
3
julia> MOI.objective_expr(evaluator)
:(x[MathOptInterface.VariableIndex(1)] ^ 2.0)
Note how MOI.objective_expr still returns x ^ 2.0 even after the call to set_value(p, 3). But adding a way to update a parameter in AbstractNLPEvaluator breaks the abstraction that you can create a Nonlinear.Model and then convert it into an AbstractNLPEvaluator. Now you'll have to keep the model around and check for updated parameter values.
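To make that abstraction concrete, here is a minimal sketch using the MOI.Nonlinear API (the parameter lives in the Nonlinear.Model, not in the evaluator built from it):

import MathOptInterface as MOI

nlp = MOI.Nonlinear.Model()
x = MOI.VariableIndex(1)
p = MOI.Nonlinear.add_parameter(nlp, 2.0)
MOI.Nonlinear.set_objective(nlp, :($x^$p))
evaluator = MOI.Nonlinear.Evaluator(nlp, MOI.Nonlinear.SparseReverseMode(), [x])
MOI.initialize(evaluator, [:Grad])
# Updating the parameter goes through the model, so the caller has to keep
# `nlp` around even after the evaluator has been handed off:
nlp[p] = 3.0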
This looks a lot like https://github.com/jump-dev/MathOptInterface.jl/pull/1901. Maybe we should have something similar for AD backends?
initialize is essentially like final_touch. It says, "I've finished building the model, and I'm ready to set everything up."
But the problem is that we want people to be able to modify parameter values without having to rebuild everything, and the NLPEvaluators don't have the concept of a parameter.
So one way to move forward with this is to update MOI.Nonlinear so that initialize doesn't store expression graphs. Then we can special-case JuMP to avoid setting the evaluator if the backend is ReverseSparseAD and the previous backend was also ReverseSparseAD.
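A hypothetical sketch of that special case (the helpers _previous_ad_backend and _set_evaluator are invented for illustration; this is not existing JuMP code):

# Sketch only: rebuild the evaluator unless both the previous and the new
# AD backends are ReverseSparseAD; in that case the existing evaluator can
# pick up new parameter values at evaluation time.
function _maybe_set_evaluator(model, new_backend)
    old_backend = _previous_ad_backend(model)  # invented helper
    if old_backend isa ReverseSparseAD && new_backend isa ReverseSparseAD
        return  # parameters may have changed, but no rebuild is needed
    end
    _set_evaluator(model, NLPEvaluator(model))  # invented helper
    return
end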
I don't think there's a generic solution, short of completely changing how the nonlinear interface is passed from JuMP to the solver. Part of the problem is that we have an Evaluator which requires a fixed model. (You can't add new variables after creating the Evaluator object.)
We could switch to some mechanism where we pass the Nonlinear.Model object instead, but I wonder if it's simpler to just have a kwarg in optimize! that opts in to skipping the NLP update. Then people solving power flow (PF) in a loop can explicitly decide that things still work if they skip updating the Evaluator every time.
This allows a loop like:
for i in 1:n
    # Perturb each load's active and reactive demand by up to ±5%.
    for (_, load) in data["load"]  # iterate (key, load) pairs; the key is unused
        set_value(pd_parameter[load["index"]], (1.0 + (rand() - 0.5) / 10) * load["pd_base"])
        set_value(qd_parameter[load["index"]], (1.0 + (rand() - 0.5) / 10) * load["qd_base"])
    end
    if i > 1
        # Warm-start from the previous solution.
        x = all_variables(model)
        x0 = value.(x)
        set_start_value.(x, x0)
    end
    optimize!(model; _skip_nonlinear_update = i > 1)
    @assert termination_status(model) == LOCALLY_SOLVED
    @assert primal_status(model) == FEASIBLE_POINT
end
The problem with the previous "is the nonlinear model dirty" approach is that we'd also need to reset the evaluator if the backend changed, even if the model didn't.
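In other words (a sketch with made-up names), the rebuild condition would have to be

needs_new_evaluator = nonlinear_model_is_dirty || new_backend != old_backend

and not the dirty flag alone.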
Closing because I think we need to pass the symbolic form to the solver to make this work.
x-ref https://github.com/jump-dev/MathOptInterface.jl/issues/1998
x-ref https://github.com/jump-dev/MathOptInterface.jl/issues/846