MathOptInterface.jl

[Utilities] add FeasibilityRelaxation

odow opened this pull request 1 year ago • 7 comments

Part of https://github.com/jump-dev/JuMP.jl/issues/3034. This PR explores the options we have to add a feasibility relaxation function to MOI.

The main decisions would be:

  • Should it modify in-place or create a new model?
  • What should we do for constraints that don't support in-place modification, like variable bounds?

The other option, not implemented here, is some sort of Optimizer like Dualization.Optimizer. That might be a better solution, but would add more code and overhead.
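
For concreteness, here is a hand-rolled sketch of the transformation this PR automates, applied to a single scalar <= constraint at the JuMP level (this is not the code in the PR, just existing JuMP API):

using JuMP, HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x >= 0)
@objective(model, Max, 2x + 1)
@constraint(model, c, 2x - 1 <= -2)   # infeasible together with x >= 0

# Relax c by hand: add a slack y >= 0 to the constraint and penalize it in the
# objective at a rate of 2 per unit of violation.
@variable(model, y >= 0)
set_normalized_coefficient(c, y, -1.0)      # c becomes 2x - y <= -1
set_objective_coefficient(model, y, -2.0)   # objective becomes 2x - 2y + 1

optimize!(model)
value(y)   # the amount by which c is violated at the optimum

The question here is whether MOI should perform this kind of rewrite on the user's model in place or on a copy.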

odow avatar Sep 12 '22 05:09 odow

That might be a better solution, but would add more code and overhead.

Not if we use MOI.Utilities.ModelFilter
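
For concreteness, a rough sketch of the kind of copy-based setup I mean (placeholder models and names, not code from this PR): wrap the source in a ModelFilter that hides the constraints to be relaxed, copy everything else, and then add penalized versions of the hidden constraints to the copy.

import MathOptInterface as MOI

src = MOI.Utilities.Model{Float64}()
dest = MOI.Utilities.Model{Float64}()

# Hide scalar-affine-in-LessThan constraints during the copy; everything else
# passes through unchanged.
to_keep(item) =
    !(item isa MOI.ConstraintIndex{MOI.ScalarAffineFunction{Float64},MOI.LessThan{Float64}})
index_map = MOI.copy_to(dest, MOI.Utilities.ModelFilter(to_keep, src))
# ...then re-add each hidden constraint to dest with an extra slack variable
# and a penalty term in the objective.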

blegat avatar Sep 12 '22 11:09 blegat

Bump. Does anyone want to take a look at this?

odow avatar Sep 29 '22 00:09 odow

I like this and it looks good. Any reason why it is not handling vector function-in-sets?

joaquimg avatar Sep 29 '22 02:09 joaquimg

Any reason why it is not handling vector function-in-sets?

Only that it is a little more complicated to implement. I think this should cover 99% of the requests.

odow avatar Sep 29 '22 02:09 odow

It seems weird conceptually to use attributes to (destructively) transform a model.

mlubin avatar Oct 01 '22 19:10 mlubin

Indeed, maybe MOI.set(model, MOI.Utilities.FeasibilityRelaxation(Dict(c => 2.0))) could be MOI.modify(model, c, MOI.Utilities.FeasibilityRelaxation(2.0))

blegat avatar Oct 03 '22 06:10 blegat

Yes, MOI.modify is a much better verb. I don't know why I didn't think of that.

odow avatar Oct 08 '22 04:10 odow

Any comments now that I've swapped to MOI.modify?
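
For reference, at the MOI level the new call looks roughly like this (a trimmed sketch, not copied verbatim from the PR; from JuMP the same call goes through backend(model)):

import MathOptInterface as MOI

model = MOI.Utilities.Model{Float64}()
x = MOI.add_variable(model)
f = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(2.0, x)], 0.0)
c = MOI.add_constraint(model, f, MOI.LessThan(-1.0))   # 2x <= -1

obj = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(2.0, x)], 1.0)
MOI.set(model, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.set(model, MOI.ObjectiveFunction{typeof(obj)}(), obj)

# Relax c with a penalty of 2; the returned map associates each relaxed
# constraint with the expression that measures its violation.
map = MOI.modify(model, MOI.Utilities.PenaltyRelaxation(Dict(c => 2.0)))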

odow avatar Oct 17 '22 01:10 odow

  • [x] Throw a warning on unsupported constraints
  • [x] Document that bounds should be rewritten as linear constraints (see the short illustration after this list)
  • [x] Potentially rename to PenaltyRelaxation
  • [x] Return a map between each constraint and its penalty
  • [x] Consider FeasibilityRelaxation(dict; default) or some variation thereof
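
On the bounds item, a short illustration of the distinction (assuming the relaxation skips constraints whose function cannot be modified in place):

using JuMP

model = Model()
@variable(model, x >= 0)       # bound: VariableIndex-in-GreaterThan, cannot take a slack
@constraint(model, c, x >= 0)  # linear: ScalarAffineFunction-in-GreaterThan, can be relaxed

Only the second form has a function that a slack variable can be added to, so a bound that should be relaxable needs to be written as a linear constraint.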

odow avatar Oct 27 '22 18:10 odow

This works quite nicely. Here's what it looks like from JuMP:

julia> using JuMP, HiGHS

julia> model = Model(HiGHS.Optimizer);

julia> set_silent(model)

julia> @variable(model, x >= 0);

julia> @objective(model, Max, 2x + 1);

julia> @constraint(model, c, 2x - 1 <= -2);

julia> optimize!(model)

julia> function penalty_relaxation!(
           model::Model,
           penalties;
           default::Union{Nothing,Real} = 1.0,
       )
           if default !== nothing
               default = Float64(default)
           end
           # Convert JuMP constraint references to MOI constraint indices.
           moi_penalties = Dict{MOI.ConstraintIndex,Float64}(
               index(k) => Float64(v) for (k, v) in penalties
           )
           map = MOI.modify(
               backend(model),
               MOI.Utilities.PenaltyRelaxation(moi_penalties; default = default),
           )
           # Convert the returned MOI map back to JuMP constraint references
           # and affine expressions.
           return Dict(
               ConstraintRef(model, k, ScalarShape()) => jump_function(model, v) for
               (k, v) in map
           )
       end
penalty_relaxation! (generic function with 2 methods)

julia> function penalty_relaxation!(model::Model; kwargs...)
           return penalty_relaxation!(model, Dict(); kwargs...)
       end
penalty_relaxation! (generic function with 2 methods)

julia> penalties = penalty_relaxation!(model; default = 2)
Dict{ConstraintRef{Model, MathOptInterface.ConstraintIndex{MathOptInterface.ScalarAffineFunction{Float64}, MathOptInterface.LessThan{Float64}}, ScalarShape}, AffExpr} with 1 entry:
  c : 2 x - _[2] ≤ -1.0 => _[2]

julia> print(model)
Max 2 x - 2 _[2] + 1
Subject to
 c : 2 x - _[2] ≤ -1.0
 x ≥ 0.0
 _[2] ≥ 0.0

julia> optimize!(model)

julia> value(penalties[c])
1.0

odow avatar Oct 27 '22 22:10 odow