Debuggability of JuMP models
One area where I think we can do much more for users is to provide tooling and documentation that helps them debug and test the optimization models they build.
I had this as one of the items in https://github.com/jump-dev/JuMP.jl/issues/2348, but I'm lifting it to a stand-alone issue.
I think what this needs is a new tutorial that goes over some common issues, and provides links to other resources.
If anyone has ideas/links, please post them below.
Motivation
@mtanneau's JuMP-dev talk discussed a lot of this, but it keeps coming up. I see a lot of common issues on Discourse where people are missing some strategies to debug things.
Prior art
It's also a topic that is surprisingly lacking in other modeling libraries.
- GAMS has a nice section: https://www.gams.com/latest/docs/UG_ExecErrPerformance.html. It's probably the most complete I could find.
- Pyomo doesn't have much: https://pyomo.readthedocs.io/en/stable/model_debugging/index.html
- AMPL has a short section in Chapter 1 on "Catching errors." But it basically advises people to add `check` statements to their code to validate data, and that "you may have to spend some time reviewing your formulation before you discover what is wrong." The tooling suggestion is to print out a constraint or objective to look at the data, but that doesn't scale to large problems very well.
- Gurobi has `feasrelax` and IIS support: https://www.gurobi.com/documentation/9.5/examples/diagnose_and_cope_with_inf.html
- LocalSolver has a short section on debugging: https://www.localsolver.com/docs/last/guidelines/debuggingamodel.html
I found this excellent paper that should be required reading for anyone who encounters a bug while modeling:
- Pannell, Kingwell, and Schilizzi (1996). Debugging mathematical programming models: principles and practical strategies. Review of Marketing and Agricultural Economics, 64(1), 86--100. pdf.
It does have one great paragraph: "Just like science, debugging cannot be a cold, calculating and linear process. Both involve essential elements of inspired guesswork, hunches and sudden flashes of insight which cut through the mist."
They lay out a good framework for debugging as a table/flow-chart. We could aim to replicate that.
Other links
- https://medium.com/bcggamma/five-strategies-for-debugging-mip-models-4f715809f9f0
- http://yetanothermathprogrammingconsultant.blogspot.com/2018/08/the-best-way-to-debug-infeasible-models.html
Documentation: debugging Julia code in general
Give examples of how different common Julia errors can arise (`BoundsError`, `MethodError`, ...); a small sketch follows the list below.
- Invalid JuMP syntax
- Off-by-one index issues
- Strategies for debugging:
  - Simplify
  - Comment out lines
  - Remove macros
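As a rough sketch of the kind of examples the tutorial could include (plain Julia, nothing JuMP-specific):
```julia
# An off-by-one index raises a BoundsError:
v = [1, 2, 3]
try
    v[4]
catch err
    @show err  # BoundsError: attempt to access 3-element Vector at index [4]
end

# Calling a function with argument types it has no method for raises a MethodError:
try
    sqrt("2")
catch err
    @show err  # MethodError: no method matching sqrt(::String)
end
```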
Other resources
- https://benlauwens.github.io/ThinkJulia.jl/latest/book.html#chap21
Documentation: debugging solver issues
If the solver returns something unexpected, 99% of the time there is a bug in your code:
- Test with different solvers
- Double-check the formulation. In most cases, if the solver returns something different from what you expect, it's because the formulation isn't doing what you think it's doing.

If the solver returns an unexpected status (a small sketch follows this list):
- If a problem is unbounded: what are the likely causes, and how to identify which variables are the problem
- If a problem is infeasible: how to diagnose it
- Numerical issues
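For the unbounded case, a minimal sketch of what the tutorial could show (HiGHS is an arbitrary solver choice here; many LP solvers report an unbounded primal as `DUAL_INFEASIBLE` rather than a dedicated "unbounded" status):
```julia
using JuMP
import HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x >= 0)
@objective(model, Max, x)  # x can grow without limit, so the problem is unbounded
optimize!(model)
termination_status(model)  # typically MOI.DUAL_INFEASIBLE for an unbounded LP
```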
Other resources
- https://www.gurobi.com/documentation/9.5/refman/guidelines_for_numerical_i.html
- https://www.gams.com/blog/2020/07/dealing-with-models-that-are-unbounded/
Erwin has a good tactic for debugging unbounded problems: http://yetanothermathprogrammingconsultant.blogspot.com/2018/08/the-best-way-to-debug-infeasible-models.html
- Add the objective as an equality constraint to a new variable with a large bound, then look at the solution.
So
```julia
model = Model()
@variable(model, x >= 0)
@objective(model, Max, 2x + 1)
```
becomes
```julia
model = Model()
@variable(model, x >= 0)
@variable(model, z <= 100_000)
@objective(model, Max, z)
@constraint(model, z == 2x + 1)
```
Documentation: unit testing
- Write functions, not scripts
- Parameterize your functions
- Test on small inputs
- Validate those small inputs
- Use `@test` to ensure expected results
- Use `primal_feasibility_report` to validate (in)feasible solutions; a sketch follows this list
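As a minimal sketch of what such a test could look like (the model, the choice of HiGHS as solver, and the expected values are all made up for illustration):
```julia
using JuMP, Test
import HiGHS

function build_model(capacity)
    model = Model(HiGHS.Optimizer)
    set_silent(model)
    @variable(model, 0 <= x <= capacity)
    @objective(model, Max, 2x + 1)
    return model
end

@testset "small instance" begin
    model = build_model(2)
    optimize!(model)
    @test termination_status(model) == MOI.OPTIMAL
    @test objective_value(model) ≈ 5.0
    # Assert that a known-good point satisfies every constraint to tolerance:
    report = primal_feasibility_report(model, Dict(model[:x] => 2.0))
    @test isempty(report)
end
```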
Tooling: feasibility relaxation
Some solvers have native support, but we should be able to write this as a function in JuMP.
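For illustration, here is a minimal hand-rolled sketch of the idea on a toy infeasible model (the constraint names and penalty structure are made up; HiGHS is an arbitrary solver choice):
```julia
using JuMP
import HiGHS

model = Model(HiGHS.Optimizer)
set_silent(model)
@variable(model, x)
@constraint(model, c1, x >= 2)
@constraint(model, c2, x <= 1)  # together with c1, infeasible

# Relax: add a nonnegative slack to each constraint, then minimize total violation.
@variable(model, slack[1:2] >= 0)
set_normalized_coefficient(c1, slack[1], 1.0)   # x + slack[1] >= 2
set_normalized_coefficient(c2, slack[2], -1.0)  # x - slack[2] <= 1
@objective(model, Min, slack[1] + slack[2])
optimize!(model)
value.(slack)  # nonzero slacks point at the constraints causing infeasibility
```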
Tooling: how to prove your formulation is correct
I've never really seen a good answer for this, and this just stung me. My model was:
```julia
function create_model(neighbors::Dict{Int,Vector{Int}})
    B = 0:500  # B includes 0: w[0, 0] is fixed below and setdiff(B, 0) skips it
    T = 1:500
    v(b, t) = (1 + b) * 0.99^t  # faked
    model = Model()
    @variable(model, 0 <= w[B, vcat(0, T)] <= 1)
    @expression(model, y[b = B, t = T], w[b, t] - w[b, t-1])
    @objective(model, Max, sum(v(b, t) * y[b, t] for b in B, t in T))
    @constraints(model, begin
        # I typed
        #   [b = setdiff(B, 0), t = T], w[b, t] <= sum(w[b′, t] for b′ in neighbors[b])
        # Should have been (t-1)
        [b = setdiff(B, 0), t = T], w[b, t] <= sum(w[b′, t-1] for b′ in neighbors[b])
        [t = T], sum(y[b, t] for b in B) <= 1
        [b = B, t = T], w[b, t-1] <= w[b, t]
    end)
    set_upper_bound.(w[:, 0], 0.0)
    fix(w[0, 0], 1.0; force = true)
    return model
end
```
So we need some way of easily validating inputs and outputs. We have `primal_feasibility_report`, so perhaps we could have some unit testing framework that lets you provide sets of variable values and assert that they are (in)feasible.
@mtanneau pointed out that small test cases in unit tests can be brittle if there are multiple optimal solutions, you're using a local solver, or you're timing out early.
Finding infeasible constraints from the solver output using `direct_model` has been very helpful for my debugging: https://discourse.julialang.org/t/how-to-reference-jump-constraint-from-solver-index/57865
It's nothing special and the code is ugly, but I wrote this function to easily print out conflicting constraints:
```julia
function conflict_cons(model)
    compute_conflict!(model)
    error_found = false
    conflict_constraint_list = ConstraintRef[]
    for (F, S) in list_of_constraint_types(model)
        for con in all_constraints(model, F, S)
            try
                if MOI.get(model, MOI.ConstraintConflictStatus(), con) == MOI.IN_CONFLICT
                    push!(conflict_constraint_list, con)
                    println(con)
                end
            catch err
                # Some constraint types don't support ConstraintConflictStatus;
                # report the first failure only, then keep going.
                if !error_found
                    error_found = true
                    @info "Only one error will be printed"
                    @error "Error querying conflict status for ($F, $S)" con err
                end
            end
        end
    end
    return conflict_constraint_list
end
```
> I wrote this function to easily print out conflicting constraints

I use this:
"""
print_conflict!(model)
Compute and print a conflict for an infeasible `model`.
"""
function print_conflict!(model)
JuMP.compute_conflict!(model)
ctypes = list_of_constraint_types(model)
for (F, S) in ctypes
cons = all_constraints(model, F, S)
for i in eachindex(cons)
isassigned(cons, i) || continue
con = cons[i]
cst = MOI.get(model, MOI.ConstraintConflictStatus(), con)
cst == MOI.IN_CONFLICT && @info name(con) con
end
end
return nothing
end
This only prints the conflict (I usually have only a dozen constraints tops), though it could be modified easily to return the list of constraints. Obviously, printing is not very useful if constraints / variables have no name...
I don't quite remember why I included the `isassigned(cons, i) || continue`. I think it's because I could sometimes declare array-shaped constraints without defining all the indices, or something like that.
It's interesting that we both wrote code to do approximately the same thing:
- Compute conflicts
- Get all constraints
- Print which are in conflict
I often have 1000s of constraints and my function mostly works fine (except when computing the conflict takes excessive time, but this is normally a sign that my model is a mess), although it could certainly be improved.
I also tend to name all variables and constraints as this helps a lot with debugging.
In addition, a lot of the time I have a "core" model which represents the core physical properties I care about. I then call this method for each experiment and add the things relevant to my current experiment. I find this helps as it gives me faith that I have a valid starting point. But this methodology is already described in the JuMP docs. For example:
```julia
function create_model()
    model = Model()
    @variable(model, a)
    @constraint(model, capacity, a ≤ 10)
    return model
end

experiment1 = create_model()
@constraint(experiment1, ...)
@objective(experiment1, ...)
optimize!(experiment1)

experiment2 = create_model()
@constraint(experiment2, ...)
@objective(experiment2, ...)
optimize!(experiment2)
```
A relatively simple to implement strategy for debugging infeasibility is to do a sort of bisection search on a list of constraint names by turning off half, say, at a time and testing the remainder.
> A relatively simple to implement strategy for debugging infeasibility is to do a sort of bisection search on a list of constraint names by turning off half, say, at a time and testing the remainder.
This is what I have done for many years, especially for classes of models where no solvers support IIS. In a lot of cases, I don't actually need the set of constraints to be irreducible. I just need a small-ish number of constraints to look at and reason about what is causing the infeasibility.
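A minimal sketch of that bisection idea, assuming a hypothetical `build_model(keep)` callback that rebuilds the model (with a solver attached) using only the constraints whose indices are in `keep`:
```julia
using JuMP

# Returns true if the model built from the `keep` subset is feasible.
function is_feasible(build_model, keep)
    model = build_model(keep)
    optimize!(model)
    return termination_status(model) != MOI.INFEASIBLE
end

# Repeatedly halve the candidate set, keeping a half that is infeasible on
# its own; stop if the conflict spans both halves.
function bisect_infeasible(build_model, indices)
    while length(indices) > 1
        mid = div(length(indices), 2)
        lower, upper = indices[1:mid], indices[(mid+1):end]
        if !is_feasible(build_model, lower)
            indices = lower
        elseif !is_feasible(build_model, upper)
            indices = upper
        else
            break  # constraints from both halves are needed for infeasibility
        end
    end
    return indices
end
```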
There's the `copy_conflict` function:
```julia
julia> using JuMP, Gurobi

julia> model = Model(Gurobi.Optimizer);

julia> set_silent(model)

julia> @variable(model, 0.1 <= x <= 0.9, Int)
x

julia> @variable(model, y <= 0)
y

julia> @constraint(model, x <= y)
x - y ≤ 0.0

julia> optimize!(model)

julia> compute_conflict!(model)

julia> conflict, index_map = copy_conflict(model);

julia> print(conflict)
Feasibility
Subject to
 x ≥ 0.1
 x ≤ 0.9
 x integer
```
PR to improve docs here: https://github.com/jump-dev/JuMP.jl/pull/3039
Another thing we've been using in a project to verify solutions at small scales is just to have an enumeration function for the integer variables (which have to be bounded):
```julia
function min_via_enum(f, n, values = fill(0:1, n))
    solutions = Iterators.product(values...)
    best_val = Inf
    best_sol = nothing
    for sol in solutions
        sol_vec = collect(sol)
        val = f(sol_vec)
        if best_val > val
            best_val = val
            best_sol = sol_vec
        end
    end
    return best_val, best_sol
end
```
In the mixed-integer case, `f` can optimize over the continuous variables for a fixed integer assignment.
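For example, a hypothetical check of a tiny knapsack against the enumerated optimum (the weights, profits, and capacity are made up):
```julia
weights, profit, capacity = [2, 3, 4], [3, 4, 5], 5
# Minimize negative profit over feasible 0-1 vectors; infeasible points get Inf.
f(x) = sum(weights .* x) <= capacity ? -sum(profit .* x) : Inf
best_val, best_sol = min_via_enum(f, 3)
# best_val == -7 and best_sol == [1, 1, 0]; compare against the solver's answer.
```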
I have a start here: https://jump.dev/JuMP.jl/previews/PR3043/tutorials/getting_started/debugging/
Comments appreciated #3043
YALMIP has a great collection of debugging tips:
- https://yalmip.github.io/debugginginfeasible
- https://yalmip.github.io/debuggingunbounded
- https://yalmip.github.io/inside/debuggingnumerics/
- https://yalmip.github.io/inside/debuggingnonsymmetricsquare/
- https://yalmip.github.io/inside/debug/
- https://yalmip.github.io/naninmodel
- https://yalmip.github.io/slowmilp
- https://yalmip.github.io/fixedconstraints
Here is another reference for prior art, Math Program Inspector.
Thanks @ccoffrin: the bibliography in your link gives this nice reference to Bruce McCarl's paper: https://ageconsearch.umn.edu/record/15553/files/30020403.pdf
I've merged a tutorial: https://jump.dev/JuMP.jl/dev/tutorials/getting_started/debugging/
From discussion with @ccoffrin, one thing we could add is a modeling suggestion to structure models so that they have complete recourse (that is, they can never become infeasible).
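A minimal sketch of the pattern (the production/demand model is made up): instead of a hard constraint that can make the model infeasible for some data, add an explicit slack variable with a high cost, so every instance has a feasible solution:
```julia
using JuMP

model = Model()
demand = 15
@variable(model, 0 <= production <= 10)
@variable(model, unmet_demand >= 0)  # recourse: always available, heavily penalized
@constraint(model, production + unmet_demand >= demand)
@objective(model, Min, 2 * production + 1000 * unmet_demand)
```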
Proposed feasibility relaxation tool is here: https://github.com/jump-dev/MathOptInterface.jl/pull/1995; we'd just need some plumbing on the JuMP side to tidy it up.
`MOI.Utilities.PenaltyRelaxation` has been added to MOI. It will be in the v1.11.0 release.