Convex.jl
Better public API (deprecate field access)
As pointed out in https://github.com/JuliaOpt/Convex.jl/pull/330#discussion_r341410987, it would make sense to deprecate `p.optval` in favor of a function as the API for accessing the optimal value. I think it would make sense to switch to accessor functions rather than asking users to look into structs (where it's not clear what's public and what isn't).
I think that calls for a bit of thought about how the API should be designed, so we don't break it again any time soon.
For example, should we try to mimic the JuMP API so that users don't have to remember two sets of commands (i.e. `termination_status`, `value`, etc.)? Or maybe that's just more confusing when there are subtle differences. For example, JuMP treats matrix variables as a matrix of scalar variables, whereas Convex treats a matrix variable as a single object. So in JuMP, to get the value of a matrix variable `M` after solving a problem, one would use `value.(M)`. But in Convex it's more natural not to broadcast the call, since `M` itself is a single variable.
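To make the contrast concrete, here is a hedged sketch, using SCS as a stand-in solver; the exact `solve!` signature may differ on the MOI branch under discussion:

```julia
# A sketch of the difference, assuming SCS as an example solver.
using Convex, SCS

M = Variable(2, 2)  # in Convex.jl, a single matrix-valued variable
p = minimize(sumsquares(M - [1.0 2.0; 3.0 4.0]))
solve!(p, SCS.Optimizer())

evaluate(M)  # one call returns the whole 2×2 matrix

# In JuMP, by contrast, a matrix of variables is a container of scalar
# variables, so the accessor is broadcast elementwise:
#   @variable(model, M[1:2, 1:2])
#   value.(M)
```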
I'm not quite sure how to choose the API, but I'm tagging this on the v0.13 milestone because we shouldn't make a breaking release without deciding on this. I would lean towards a simple minimal set of accessor functions, so we don't need any field accesses, and towards not changing what we don't need to, so it's easy for users to adjust (e.g. keep `evaluate`).
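For illustration, usage with such accessors might look like the following — a sketch only, with names taken from the accessor set discussed further down and SCS as a stand-in solver:

```julia
using Convex, SCS

x = Variable()
p = minimize(square(x - 2))
solve!(p, SCS.Optimizer())

termination_status(p)  # instead of reaching into a status field
objective_value(p)     # instead of p.optval
evaluate(x)            # unchanged from the current API
```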
Something related we should consider is how the user passes the solver. The problem is this: on the current MOI PR branch, if the user writes
```julia
using Convex, SDPA

opt = SDPA.Optimizer(Mode = PARAMETER_UNSTABLE_BUT_FAST)
p = ...
solve!(p, opt)
solve!(p, opt)
```

one gets the error

```
ERROR: AssertionError: MOI.is_empty(optimizer)
```
My understanding is that setting `opt = ...` is a convenient way to fix the settings and parameters of an optimizer at the start, but then one may want to solve many problems or variants of a problem, and they'll run into this error (with some solvers).
This can be fixed by doing

```julia
using Convex, SDPA

opt = () -> SDPA.Optimizer(Mode = PARAMETER_UNSTABLE_BUT_FAST)
p = ...
solve!(p, opt())
solve!(p, opt())
```

(what I'll call the "closure API"), but that may be hard to understand at first for new users. We could, however, check `MOI.is_empty` ourselves first and give an error message that redirects users to the closure API.
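Such a check could be a small guard run before loading the problem; here is a hypothetical sketch (the `MOI.is_empty` call is real MOI API, the rest is illustrative and not the final error text):

```julia
using MathOptInterface
const MOI = MathOptInterface

# Hypothetical guard: refuse a non-empty optimizer and point the user
# at the closure API instead of letting the assertion fire.
function check_optimizer(optimizer::MOI.AbstractOptimizer)
    if !MOI.is_empty(optimizer)
        error(
            "The optimizer you passed already holds a model, likely " *
            "from an earlier `solve!`. Construct a fresh optimizer for " *
            "each solve, e.g. `opt = () -> SDPA.Optimizer(...)` and " *
            "then `solve!(p, opt())`.",
        )
    end
    return optimizer
end
```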
JuMP recently changed how they do this (on master, at least); before, you would pass e.g. `with_optimizer(SDPA.Optimizer, Mode = PARAMETER_UNSTABLE_BUT_FAST)`, which I think essentially just stores the closure `() -> SDPA.Optimizer(Mode = PARAMETER_UNSTABLE_BUT_FAST)` so that fresh optimizers can be used for each solve. In other words, it essentially wraps the closure API I mentioned above.
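My rough mental model of what `with_optimizer` boils down to, heavily simplified (the real JuMP type has more to it):

```julia
# Simplified sketch: capture the constructor and keyword arguments so a
# fresh optimizer can be built for every solve.
struct OptimizerFactory
    constructor
    kwargs
end

with_optimizer(constructor; kwargs...) = OptimizerFactory(constructor, kwargs)

# Each solve calls the factory to get a fresh instance:
new_instance(factory::OptimizerFactory) = factory.constructor(; factory.kwargs...)
```

So `with_optimizer(SDPA.Optimizer, Mode = ...)` and `() -> SDPA.Optimizer(Mode = ...)` play the same role: both delay construction until solve time.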
This was recently changed on JuMP master with https://github.com/JuliaOpt/JuMP.jl/pull/2090, so instead you pass an unevaluated closure (or the optimizer constructor itself), e.g.
```julia
model = Model()
set_optimizer(model, SDPA.Optimizer)
```

or

```julia
model = Model()
set_optimizer(model, () -> SDPA.Optimizer(Mode = PARAMETER_UNSTABLE_BUT_FAST))
```

or just

```julia
model = Model(() -> SDPA.Optimizer(Mode = PARAMETER_UNSTABLE_BUT_FAST))
```
There is also a `set_parameter` API to set the parameters later, but I don't think that works in this example, since I don't think `Mode` is implemented as a solver-specific attribute (although it could be that I'm misunderstanding something there). Moreover, you set the solver prior to the call to `optimize!`; you can't pass it at that point (as I understand it). Instead, JuMP calls the function you pass it to generate a new optimizer instance for each `optimize!` call you do.
So far I see three approaches: the closure API, JuMP's old approach, or JuMP's new approach. We should choose one, document it, and add helpful errors to point users to the API we want.
Edit: just to say, I wrote about this here too: https://github.com/JuliaOpt/Convex.jl/pull/330#issuecomment-548499400. I forgot about that.
As was mentioned in https://github.com/JuliaOpt/Convex.jl/issues/354, we should consider how users interact with the termination statuses as well (and at least document it). It would also be nice if we could somehow add a deprecation warning for checking optimality against `:Optimal`, but I'm not sure if that's really possible.
I think the solution for `Variable` field access is https://github.com/JuliaOpt/Convex.jl/pull/313; introducing a full `AbstractVariable` interface is a bit more than we need, but I think it's quite nice and also does the job here. It also cleans up some weird edge cases with the `sets` field. I'll rebase that and get it in shape.
For current `Problem` field access, I think the simple set of accessors `dual_status`, `primal_status`, `termination_status`, `objective_value`, `add_constraint!`, and the helper `add_constraints!`, which are all already implemented, do the job. We could also consider `empty_constraints!(p) = (p.constraints = Constraint[])`, which could also be implemented for `Variable`s. I think it might also be helpful to have an `isoptimal(p::Problem) -> Bool` function, so the user doesn't need to load MOI for the common case of just checking optimality.
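For concreteness, those two helpers could plausibly be one-liners like the following, assuming `termination_status(p)` returns an `MOI.TerminationStatusCode` as on the MOI branch (hypothetical definitions, not the final API):

```julia
using MathOptInterface
const MOI = MathOptInterface

# Hypothetical sketches of the helpers floated above:
isoptimal(p::Problem) = termination_status(p) == MOI.OPTIMAL

empty_constraints!(p::Problem) = (p.constraints = Constraint[]; p)
```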
Edit: we also want an API to access the dual values of constraints. I'm thinking just `dual_value(c::Constraint)` and `constraints(p::Problem)` are enough.
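If constraints keep storing their dual in a field after a solve (as I believe the `dual` field does today), these could be as simple as:

```julia
# Hypothetical accessor definitions, assuming the dual is stored on the
# constraint after solving:
dual_value(c::Constraint) = c.dual
constraints(p::Problem) = p.constraints
```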
However, now that we support MOI, we will also likely wish to provide ways to query MOI attributes. That could be left as a future feature, however, since it's not breaking.
The last thing is https://github.com/JuliaOpt/Convex.jl/issues/346#issuecomment-559259462, which I'm not sure how to handle.
Looks like https://github.com/JuliaOpt/MathOptInterface.jl/pull/1008 might help with the solver-interface problem discussed above.
I took the thumbs-up on https://github.com/jump-dev/Convex.jl/pull/595#issuecomment-2068592878 to mean that we should close this as won't-fix.