
`MOI.modify` for quadratic terms

Open martincornejo opened this issue 4 years ago • 7 comments

Background: https://github.com/jump-dev/Dualization.jl/issues/90 and a discourse discussion

`MOI.modify` makes in-place modifications to the coefficients of a function. Through `MOI.ScalarCoefficientChange` it is possible to modify the linear terms of a `ScalarQuadraticFunction`, but as of now the only way to modify the quadratic terms is to define the function again.

I think it would be beneficial to extend MOI.ScalarCoefficientChange or add a new type to support the in-place modification of the quadratic coefficients.
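
For reference, here is a minimal MOI-level sketch of the current situation (MOI v1 API, using `MOI.Utilities.Model` as a stand-in for a solver, with arbitrary coefficients): the linear part of a quadratic objective can be modified in place, while the quadratic part can only be replaced wholesale.

```julia
import MathOptInterface as MOI

model = MOI.Utilities.Model{Float64}()
x = MOI.add_variable(model)

# Objective 3x² + 2x (diagonal quadratic coefficients are doubled because MOI
# stores ½ xᵀQx + aᵀx + b):
f = MOI.ScalarQuadraticFunction(
    [MOI.ScalarQuadraticTerm(6.0, x, x)],
    [MOI.ScalarAffineTerm(2.0, x)],
    0.0,
)
attr = MOI.ObjectiveFunction{typeof(f)}()
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.set(model, attr, f)

# The linear term can be changed in place:
MOI.modify(model, attr, MOI.ScalarCoefficientChange(x, 4.0))  # 2x -> 4x

# There is no analogous change type for the quadratic term, so turning 3x²
# into 5x² requires setting a new function:
MOI.set(model, attr, MOI.ScalarQuadraticFunction(
    [MOI.ScalarQuadraticTerm(10.0, x, x)],
    [MOI.ScalarAffineTerm(4.0, x)],
    0.0,
))
```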

martincornejo avatar Dec 10 '20 18:12 martincornejo

The recommended way to do this is just to set a new objective.

We supported ScalarCoefficientChange because many solvers provide an efficient way of updating a linear objective coefficient. In contrast, most solvers lack support for explicitly modifying a quadratic coefficient, or if they do, it can require an expensive update of the Q matrix.
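
For concreteness, a minimal JuMP-level sketch of the two options on a toy model with arbitrary coefficients (nothing here is solver-specific):

```julia
using JuMP

model = Model()
@variable(model, x)
@objective(model, Min, 3x^2 + 2x)

# Linear coefficient: an in-place modification (lowered to MOI.ScalarCoefficientChange).
set_objective_coefficient(model, x, 4.0)   # 2x -> 4x

# Quadratic coefficient: replace the whole objective.
@objective(model, Min, 5x^2 + 4x)
```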

What's the use-case?

Here is how I would implement your example from Dualization (although I haven't tested it):

```julia
using JuMP, MosekTools, Dualization

λ = rand(2, 2)
r = rand(2, 2)
ρ = rand()

model = Model(dual_optimizer(Mosek.Optimizer))
@variable(model, X[1:2, 1:2], PSD)
@variable(model, Y[1:2, 1:2])
@variable(model, t >= 0)
@constraint(model, c[i = 1:2, j = 1:2], Y[i, j] + X[i, j] == r[i, j])
@constraint(model, t >= sum(Y[i, j]^2 for i = 1:2, j = 1:2))
@objective(model, Min, sum(λ[i, j] * Y[i, j] for i = 1:2, j = 1:2) + ρ * t)
optimize!(model)
# To update `r`:
set_normalized_rhs(c[1, 1], 1.0)
# To update `λ`:
set_objective_coefficient(model, Y[1, 1], 1.0)
# To update `ρ`:
set_objective_coefficient(model, t, 1.0)
```

odow avatar Dec 10 '20 20:12 odow

I have a consensus-ADMM algorithm in which the objective function is updated at each iteration (somewhat similar to my MWE). By far the most convenient approach would be to set a new objective, but the dualization is a big performance hit.

Thanks for the potential workaround, but adding extra variables and constraints might hurt performance and is a bit overcomplicated for my use case. If it were not for the dualize-in-every-iteration issue, I could just set a new objective with `@objective(m, Min, dot(λ, r - X) + ρ * sum(x -> x^2, r - X))` without worries.
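
For context, a rough sketch of the per-iteration pattern I mean (hypothetical function and variable names; the model is built once with `dual_optimizer`, and only the objective changes between iterations):

```julia
using JuMP, LinearAlgebra

# Hypothetical ADMM-style loop: only λ, r, ρ change between iterations, but
# setting a new objective currently triggers a re-dualization every time.
function solve_iterations!(model, X, λ, r, ρ; iterations = 10)
    for _ in 1:iterations
        # ... update λ and r from the consensus step ...
        @objective(model, Min, dot(λ, r - X) + ρ * sum(v -> v^2, r - X))
        optimize!(model)
        # ... read value.(X) and continue ...
    end
    return model
end
```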

> We supported ScalarCoefficientChange because many solvers provide an efficient way of updating a linear objective coefficient

Are there additional advantages? I guess JuMP also benefits from in-place modifications, independently of whether the solver supports special features. I have some constraints that are expensive to build, but with `set_normalized_rhs` it is very easy to modify the RHS without recomputing everything from scratch. I can imagine a similar situation in which a quadratic term could be modified without having to rebuild the whole function (see the sketch below).
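
To illustrate the analogy, a minimal sketch with made-up data: the constraint is built once, and afterwards only individual pieces are modified in place rather than rebuilding the expression.

```julia
using JuMP

model = Model()
@variable(model, x[1:3] >= 0)
# Imagine this expression is expensive to build:
@constraint(model, con, sum(i * x[i] for i in 1:3) <= 10.0)

set_normalized_rhs(con, 12.0)               # update the RHS in place
set_normalized_coefficient(con, x[1], 2.5)  # update a linear coefficient in place
# There is currently no analogous call for a quadratic coefficient.
```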

martincornejo avatar Dec 11 '20 14:12 martincornejo

Coming back to this: I'm still hesitant to add this to MOI because we don't have a solver that supports it efficiently. One way to resolve the dualization issue might be for Dualization.jl to support resetting the objective without having to re-dualize the entire problem.

odow avatar Jul 25 '22 23:07 odow

Yes, I think that is the reasonable way to continue.

martincornejo avatar Jul 26 '22 13:07 martincornejo

cc @guilhermebodin thoughts?

odow avatar Jul 26 '22 21:07 odow

Solvers can efficiently update just one term of the quadratic objective. This might be useful for progressive hedging and POI.

joaquimg avatar Jul 27 '22 03:07 joaquimg

It is possible to hack into Dualization so that only the objective is replaced, but this would be unsafe for most use cases, as it would mess up many of the maps between the primal and dual problems. I still think that `MOI.modify` would be the best way.

guilhermebodin avatar Aug 03 '22 14:08 guilhermebodin