Optimization and `push` and `pop`
The current implementation of SatML doesn't handle `push` and `pop` properly in the presence of objectives. Indeed, the following input (thanks @bclement-ocp):
```smt2
(set-option :produce-models true)
(set-logic ALL)
(declare-const x Int)
(assert (<= 0 x 10))
(push 1)
(maximize x)
(pop 1)
(minimize x)
(check-sat)
(get-model)
(get-objectives)
```
produces the output:
```
unknown
(
  (define-fun x () Int 10)
)
(objectives
  (x 10)
  (x 10)
)
```
But we expect to get 0 for the second objective constraint on x in the first-order model and -oo in the objective model.
The problem appeared after #921 because we no longer store objectives as tagged subformulae in the SAT solver but as a list in the Theory module.
Note: in the response to `(get-objectives)`, I would only expect a single objective constraint, because the first one appearing in the problem (the `maximize` on line 6) has effectively been popped/removed by the `pop` operation, and therefore when we get to the `check-sat`, there is only one constraint in scope (the one from line 8).
Both bugs should be solved by pushing case splits in the SAT solvers. The current implementation doesn't completely restore the environment of the Theory module after a `pop` statement.
@bclement-ocp I am not sure the solution you suggested works well. As I mentioned, the objectives are registered in the theory environments. These environments are backtracked, but all the objectives are registered at decision level 0! This is why we have two bugs here:
- We do not produce the best value in incremental mode.
- We continue to print objectives that are not in the context after popping.
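For intuition, here is a minimal OCaml sketch (hypothetical types, not Alt-Ergo's actual data structures) of why anything registered at decision level 0 survives every backtrack:

```ocaml
(* A minimal sketch with hypothetical types: a trail of
   (decision_level, fact) pairs.  Backtracking to level [lvl] drops
   everything recorded strictly above that level, so a fact registered
   at level 0 -- as the objectives currently are -- is never undone,
   even when the push frame that introduced it is popped. *)
type 'a trail = (int * 'a) list

let record ~level fact trail = (level, fact) :: trail

let backtrack ~level trail =
  List.filter (fun (lvl, _) -> lvl <= level) trail

let () =
  let trail = record ~level:0 "maximize x" [] in
  let trail = record ~level:1 "(<= 0 y 10)" trail in
  (* Backtracking to level 0 drops the level-1 assertion but keeps the
     objective registered at level 0, so it is still reported by
     (get-objectives) even though its push frame is gone. *)
  assert (backtrack ~level:0 trail = [ (0, "maximize x") ])
```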
For instance, for the following input file:
```smt2
(set-option :produce-models true)
(set-logic ALL)
(declare-const x Int)
(declare-const y Int)
(assert (<= 0 x 10))
(push 1)
(maximize x)
(assert (<= 0 y 10))
(pop 1)
(minimize y)
(check-sat)
(get-objectives)
```
Alt-Ergo outputs:
```
(objectives
  (x 10)
  (y (- oo))
)
```
A very simple patch consists in saving the last theory environment on push and restoring it on pop. Last week, I tried this solution and it works fine (it solved both bugs), but it is awkward to save the whole theory environment just to preserve the objectives. We should save objectives directly in SatML, as we do not plan to support optimization in FunSAT.
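A rough sketch of that bookkeeping, with made-up types and field names (the real SatML/theory environments are richer); the same snapshot/restore idea applies whether we save the whole theory environment or only the objectives:

```ocaml
(* Rough sketch with hypothetical names, not the actual SatML interface.
   On push we snapshot the state that the theory does not backtrack by
   itself (here, just the objectives); on pop we restore the snapshot. *)
type objective = { expr : string; is_max : bool }

type env = {
  mutable objectives : objective list;  (* registered at level 0, never undone *)
  mutable saved : objective list list;  (* one snapshot per push frame *)
}

let push env = env.saved <- env.objectives :: env.saved

let pop env =
  match env.saved with
  | snapshot :: rest ->
    env.objectives <- snapshot;
    env.saved <- rest
  | [] -> invalid_arg "pop without a matching push"
```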
> I am not sure the solution you suggested works well. As I mentioned, the objectives are registered in the theory environments. These environments are backtracked, but all the objectives are registered at decision level 0! [...] We should save objectives directly in SatML, as we do not plan to support optimization in FunSAT.
I don't understand how this conflicts with the solution I proposed. For the record, my suggestion was:
- In `Satml.optimize`, record the objective at the current guard level (not the decision level, which is always 0 as you note). Do not call `Th.add_objective`.
- Upon `push`/`pop`, save and restore the objectives associated with the corresponding guard level.
- In `pick_branch_lit`, if we are not at the end of `increm_guards`, call `Th.add_objective` for all the objectives associated with the corresponding guard level (after calling `make_decision`, so that the objectives are added at the appropriate decision level).
It looks to me that this should be about as simple as storing the theory environment on push/pop, it only saves what we need, and it is fully contained in the SatML module (it does not require changes to the theory module). What am I missing?
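To make the proposal concrete, here is a hedged sketch (hypothetical types and signatures; the real `Satml.optimize`, `pick_branch_lit`, etc. look different) of keeping objectives keyed by guard level inside SatML and only handing them to the theory once the corresponding guard has been decided:

```ocaml
(* Hedged sketch with hypothetical types; not the actual Satml code.
   Objectives are recorded per guard level instead of being sent to the
   theory immediately, so popping a guard level discards its objectives,
   and the theory only sees objectives at a proper decision level. *)
module IntMap = Map.Make (Int)

type objective = string

type solver = {
  mutable guard_level : int;                  (* current push depth *)
  mutable pending : objective list IntMap.t;  (* guard level -> objectives *)
}

(* Analogue of Satml.optimize: record only, no Th.add_objective yet. *)
let optimize s obj =
  let objs =
    Option.value ~default:[] (IntMap.find_opt s.guard_level s.pending)
  in
  s.pending <- IntMap.add s.guard_level (obj :: objs) s.pending

let push s = s.guard_level <- s.guard_level + 1

(* Analogue of pop: objectives attached to the popped guard level vanish. *)
let pop s =
  s.pending <- IntMap.remove s.guard_level s.pending;
  s.guard_level <- s.guard_level - 1

(* Analogue of the pick_branch_lit step: once all increm_guards have been
   decided, hand the surviving objectives to the theory (at the current,
   non-zero decision level). *)
let flush_objectives s add_objective_to_theory =
  IntMap.iter (fun _ objs -> List.iter add_objective_to_theory objs) s.pending
```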