Consolidating quantified field and predicate chunks
I noticed that state consolidation only merges basic chunks. I've been running into issues where things become very slow if I have a lot of quantified field chunks. In particular, I have some files where a method verifies fine on its own, but its well-definedness cannot be re-verified at the call site because the heap becomes too fragmented.
A similar issue is highlighted in #831: if we're unlucky with the chunk ordering in `removePermissions`, we have to "remove" some amount of permission from a lot of unrelated chunks, which obviously scales badly if we have a lot of chunks.
This PR enables merging quantified chunks (with the exception of magic wands) if their condition and arguments are the same. This seems to significantly speed up the above-mentioned files (the first example now verifies, and the example with 19 quantifiers from #831 runs in 21 seconds) without having a (significant) negative effect on other files. I've run the Silicon test suite and the VerCors test suite and saw no significant differences.
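To make the situation concrete, here is a minimal Viper sketch (the field, method, and set are of course made up, and the comments describe the intended chunk-level behaviour rather than exact Silicon internals):

```
field val: Int

method client(s: Set[Ref]) {
  // Each of these inhales adds its own quantified field chunk for val,
  // both guarded by the same condition (r in s) over the same receiver r.
  inhale forall r: Ref :: r in s ==> acc(r.val, 1/2)
  inhale forall r: Ref :: r in s ==> acc(r.val, 1/2)

  // Without merging, the exhale below has to collect permission from both
  // chunks; with this PR the two chunks can be merged into a single chunk
  // holding the full permission, so only one chunk is involved.
  exhale forall r: Ref :: r in s ==> acc(r.val, write)
}
```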
I also tried running Gobra on the router package from the VerifiedSCION repository, which results in one error after ~30 minutes:
Error at: <./router/dataplane.go:937:15> Loop invariant might not be preserved. Permission to msgs[i].Mem() might not suffice.
Since I cannot find another example where this PR has a negative impact, and I don't know how to get a small Viper file from Gobra to test this specific case, I thought I'd submit my PR anyway.
There are also still some open questions:
- When merging singleton chunks I'm currently only keeping the `singletonArguments`/`singletonReceiver` of the most recently added chunk, which might have a negative impact on chunk ordering because we cannot match syntactically on the receiver that we throw away. Would it make sense to store a set of (sequences of) singleton arguments instead?
- When merging chunks I'm also throwing away the hints, since I couldn't think of a good way to merge them. In the examples I have from VerCors the hint tends to be rather useless anyway since, for example, every array access goes through the `aloc` function. Is it worth finding a way to preserve these hints?
- To determine whether two chunks can be merged I check whether their conditions are the same. However, the decision of what is and isn't part of a chunk's condition is somewhat arbitrary. For example, the expression `forall i: Int :: 0 <= i && i < n ==> acc(aloc(x,i), write)` would yield a chunk with the condition `0 <= i && i < n` and the permission value `write`, whereas the expression `forall i: Int :: true ==> acc(aloc(x, i), 0 <= i && i < n ? write : none)` would yield a chunk with the condition `true` and the permission value `0 <= i && i < n ? write : none` (see the sketch after this list). There is a trade-off here regarding when we do and don't want to merge chunks. In fact, if we conditionalize all the permissions (i.e. push the condition into the permission value), you could merge all chunks referring to the same field. But of course doing that would yield a single, very complicated quantified heap chunk for which the heuristics can no longer find a good chunk ordering. Somewhere there is a good balance between merging chunks to simplify the state (e.g. instead of two chunks each holding half of the permission, requiring 4 checks in `removePermissions` in the best case, you have one chunk holding all the permission, requiring only 2 checks in the best case) and keeping chunks separate to enable reasoning about them separately.
- I have not attempted to merge quantified magic wands, since I felt there would be little benefit and I wasn't sure what assumptions surround these chunks.
- I mentioned above that this PR speeds up #831, but it seems to also do this if I disable merging quantified heap chunks by returning `None` before the equality check in `findChunk`; I am not sure why. Even without merging the chunks, it seems that the `removePermissions` calls now select the right ordering every time. For the other example file I've been testing with, merging the quantified heap chunks is crucial, though.
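To spell out the third point above, here are the two formulations as a small self-contained sketch (using a made-up `aloc` predicate standing in for the actual VerCors location function); both describe the same permissions, but with my current check they yield chunks with different conditions, so they would not be treated the same:

```
predicate aloc(a: Ref, i: Int)

method m(x: Ref, n: Int) {
  // Yields a chunk with condition 0 <= i && i < n and permission value write.
  inhale forall i: Int :: 0 <= i && i < n ==> acc(aloc(x, i), write)

  // Yields (conceptually) a chunk with condition true and the conditionalized
  // permission value 0 <= i && i < n ? write : none; semantically this exhales
  // exactly what the inhale above provided.
  exhale forall i: Int :: true ==> acc(aloc(x, i), 0 <= i && i < n ? write : none)
}
```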
Let me know what you think of these changes and whether I can improve them in some way.
Hey, thanks a lot! To be honest, I don't think I was 100% certain that we never merge quantified chunks. We should definitely do that at least sometimes. I'll need a couple of days to think about this (as you said, there are a couple of points that are somewhat arbitrary where decisions need to be made); we'll either merge this or some other version of this before the next release.
I finally started looking into this, sorry this took so long. Regarding this last point:
> it seems to also [speed up] if I disable merging quantified heap chunks by returning `None` before the equality check in `findChunk`; I am not sure why. Even without merging the chunks, it seems that the `removePermissions` calls now select the right ordering every time
I was really confused by this, but it does seem to be just an artifact of the resulting chunk order after a merge (even if the merge didn't actually merge anything), which happens to be the perfect order for that example.
I'll merge some recent changes into this (the debugging code I just added essentially required changes everywhere, so there's a huge number of conflicts), do some benchmarks myself, and then merge this.
Could you maybe send me one or two of your examples with lots of QP chunks that get sped up by this? I'm trying out different options for when to merge chunks.
Here you go: https://gist.github.com/superaxander/b7a80e5de28cbb8a4597acf72387b782
With Z3 4.8.7 on the master branch with an `assertTimeout` of 10 seconds this takes around 50 seconds (with a timeout of 5s it sometimes fails), and on the branch for this PR it takes 30 seconds with the same settings. This example can also be obtained on the class-by-value-2 branch here https://github.com/superaxander/vercors/tree/class-by-value-2 by running `bin/vct examples/concepts/c/structs.c --backend-file-base structs`.
Thanks!