Dat3M
Stateless syntax objects
Makes the syntactic objects of a verification task reusable, so that they can be used in a parallel environment.
Implemented:
- No instances of `Relation` or `Axiom` hold information that depends on `VerificationTask`, `Context`, or even `Program`.
- `RelationAnalysis` manages `maxTupleSet` and `minTupleSet`. Fetch that information with `relationAnalysis.may(relation)` and `relationAnalysis.must(relation)`, respectively (see the sketch after this list).
- `WmmEncoder` manages `encodeTupleSet`, as well as the SMT variables for tuples in a relation. In order to keep instances of `WmmEncoder` consistent with their environment (and to shorten parameter lists), they store references to `VerificationTask`, (analysis-)`Context` and `SolverContext`.
- All other `*Encoder` classes also refer to the task they are created for. They are in fact dependent on each other: `PropertyEncoder`, `SymmetryEncoder` -> `WmmEncoder` -> `ProgramEncoder`.
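For illustration, here is a minimal sketch of how this split could look from a caller's perspective. All types and the `encoder.edge(...)` helper are placeholders, not Dat3M's actual API; only `relationAnalysis.may(relation)` and `relationAnalysis.must(relation)` are taken from the list above.

```java
// Hypothetical sketch of the stateless split (placeholder types, not the real Dat3M API).
import java.util.Set;

final class StatelessSketch {

    interface Relation {}   // purely syntactic, no cached tuple sets
    interface Tuple {}

    interface RelationAnalysis {
        Set<Tuple> may(Relation relation);   // replaces the relation-owned maxTupleSet
        Set<Tuple> must(Relation relation);  // replaces the relation-owned minTupleSet
    }

    interface WmmEncoder {
        // owns encodeTupleSet and the SMT variables for relation tuples,
        // holding references to VerificationTask, Context and SolverContext internally
        Object edge(Relation relation, Tuple tuple);
    }

    static void example(Relation rf, RelationAnalysis analysis, WmmEncoder encoder) {
        // Over- and under-approximations now live in the analysis,
        // not in the Relation instance itself:
        Set<Tuple> mayEdges  = analysis.may(rf);
        Set<Tuple> mustEdges = analysis.must(rf);

        // SMT variables for edges are owned by the encoder:
        for (Tuple t : mayEdges) {
            Object edgeVar = encoder.edge(rf, t);
            // ... add edgeVar to the encoding
        }
    }
}
```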
Planned:
- `Event#exec()`
- `Event#cf()`
- `RegWriter#getResultRegisterExpr()`
- `MemEvent#getMemAddressExpr()`
- `MemEvent#getMemValueExpr()`
@xeren can you resolve the conflict so that the CI is triggered?
> In order to keep instances of `WmmEncoder` consistent with their environment (and to shorten parameter lists), they store references to `VerificationTask`, (analysis-)`Context` and `SolverContext`.
Let me tell you directly that "shortening parameter lists" is in 90% of cases a bad trade. The reason is that you end up passing a monolithic object that contains far more than is necessary, which creates unnecessarily strong dependencies. In fact, it leads to a less modular and less flexible design. For example, with your changes the configuration of `WmmEncoder` now ends up also configuring `SymmetryEncoder`. You cannot configure them in isolation anymore, although there is no reason why that shouldn't be possible.
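To make the concern concrete, here is a hedged sketch with placeholder names (not Dat3M's actual classes) contrasting an encoder that stores the whole task bundle with one that receives only what it needs:

```java
// Hypothetical sketch of the design concern, not actual Dat3M code.

// Variant A: the encoder keeps a hard reference to another fully configured encoder.
// Configuring a SymmetryEncoderA now forces you to build (and configure) a
// WmmEncoderA first, even when you only need symmetry breaking.
final class WmmEncoderA {
    // imagine fields: VerificationTask task; Context analysisContext; SolverContext ctx;
}

final class SymmetryEncoderA {
    private final WmmEncoderA wmmEncoder;   // hard dependency on the other encoder
    SymmetryEncoderA(WmmEncoderA wmmEncoder) { this.wmmEncoder = wmmEncoder; }
}

// Variant B: each encoder receives only the pieces it actually uses,
// so the two can be configured and tested in isolation.
interface ThreadSymmetry {}   // placeholder for the symmetry information
interface EdgeVariables {}    // placeholder for the SMT edge variables it reads

final class SymmetryEncoderB {
    private final ThreadSymmetry symmetry;
    private final EdgeVariables edges;

    SymmetryEncoderB(ThreadSymmetry symmetry, EdgeVariables edges) {
        this.symmetry = symmetry;
        this.edges = edges;
    }
}
```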
Also, PRs should go to `development` and not `master`.
It seems that this PR breaks a lot of tests. Also, together with Refinement it triggers some invariant exceptions (something about knowledge propagating downward).
I fixed most of the "easy" bugs. There are two remaining problems:
- One of the Svcomp stack benchmarks causes an out-of-memory problem. This is due to the `ext` relation containing over 10 million edges, which get copied during the changed `RelationAnalysis`. Also, the copying happens twice, once into the may-set and once into the must-set, so we need to hold up to 30 million edges in memory, which is problematic (see the sketch after this list).
- The other problem concerns just 2 short RC11 benchmarks. The refinement procedure returns the correct result, while the assume solver does not. So this issue is related to either the analysis giving wrong results on derived relations, or the encoding doing something wrong on those derived relations.
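A rough illustration of the memory issue, using hypothetical names; the numbers just mirror the estimate above, and the shared-view variant is only one possible mitigation, not what this PR does:

```java
// Hypothetical illustration of the memory blow-up described above.
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

final class TupleSetCopySketch {
    record Tuple(int from, int to) {}

    static void eagerCopies(Set<Tuple> extEdges) {
        // ~10 million edges in extEdges, copied twice during the analysis:
        Set<Tuple> may  = new HashSet<>(extEdges);  // +~10M entries
        Set<Tuple> must = new HashSet<>(extEdges);  // +~10M entries
        // Together with the original relation this keeps up to ~30 million
        // edges alive at once, which is what blows up the stack benchmark.
    }

    static void sharedView(Set<Tuple> extEdges) {
        // When may and must coincide, both views could share one immutable set
        // instead of being copied separately (an assumption, for illustration only):
        Set<Tuple> shared = Collections.unmodifiableSet(extEdges);
        Set<Tuple> may  = shared;
        Set<Tuple> must = shared;   // no extra copies
    }
}
```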
Better implementation in #315.