Simmo Saan
Turns out this is not so easy to do due to how var_eq performs interprocedural analysis. It actually relies on the caller state being passed through the callee to be...
> Simply adding such an edge to the context-enhanced callgraphs is not helpful here: Inside the call graph all program points are collapsed, so as soon as `longjmp` occurs (say...
I think most of the overhead should be after solving when generating the GraphML witness, which requires ARG nodes to be lifted with call stacks. Just collecting the data for...
> The includes also differ between the different analyses:

These benchmarks are combined with CIL and have no `#include`s. So even if the standard library headers differed in the bodies...
If our own stubs are included in LLoC then I would consider that a/the bug. Although I still suspect there could be more to it: the stub bodies haven't changed...
Hmm, I suppose keeping separate `CPA` maps for written and refined values could indeed be more precise. Although that doesn't seem like it should be specific to Protection-Based Reading,...
To each one individually, probably not. I was thinking more along the lines of some kind of generic lifting with the dual `CPA`s or something. I don't have a specific...
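A generic dual-`CPA` lifting could look roughly like the following sketch (Python, all names hypothetical; a real lifting would live in the analysis' domain functors, not here): one map holds values established by writes, the other holds refinements (e.g. from branch conditions), reads meet the two, and a write drops any stale refinement for that variable.

```python
# Hypothetical sketch of a "dual" abstract store: written vs. refined values
# are tracked separately. Sets of concrete values stand in for abstract values.

class DualStore:
    def __init__(self):
        self.written = {}  # var -> values known from assignments
        self.refined = {}  # var -> values known from refinement (e.g. branches)

    def write(self, var, values):
        self.written[var] = set(values)
        self.refined.pop(var, None)  # old refinements are no longer sound

    def refine(self, var, values):
        # refinement can only shrink what is already known
        self.refined[var] = self.read(var) & set(values)

    def read(self, var):
        # meet (here: intersect) the two sources of information
        w = self.written.get(var)
        r = self.refined.get(var)
        if w is None:
            return set(r) if r is not None else set()
        if r is None:
            return set(w)
        return w & r


store = DualStore()
store.write("x", {1, 2, 3})
store.refine("x", {2, 3, 4})
print(store.read("x"))  # the meet of written and refined values
```

The point of the separation is that a later write cleanly invalidates only the refinement, while a pure refinement never loses the written value.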
If I remember correctly, something like 4.7GB out of the 5GB limit, so relatively close. But both our allocation and the GC should be deterministic. I think GC should be triggered...
> Our interrupt based solverstats are something that comes to mind here.

Indeed, that sounds plausible for making some allocations time-dependent. That theory could be tested by disabling that.

> ...
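The suspected effect can be demonstrated in isolation (hypothetical Python illustration, not Goblint's actual stats code): if an interval-timer handler allocates, then the total amount allocated depends on how long the computation runs in wall-clock time, and disabling the timer removes that dependence.

```python
import signal
import time

ticks = []

def on_tick(signum, frame):
    # the handler allocates on every timer interrupt, so total allocation
    # depends on elapsed wall-clock time, not just on the computation itself
    ticks.append([0] * 1000)

signal.signal(signal.SIGALRM, on_tick)
signal.setitimer(signal.ITIMER_REAL, 0.01, 0.01)  # fire every 10 ms

time.sleep(0.05)  # stand-in for a long-running, otherwise deterministic computation

signal.setitimer(signal.ITIMER_REAL, 0)  # disabling the timer restores determinism
print(len(ticks))
```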
Here are some benchmarks extending the ones in https://github.com/goblint/analyzer/pull/1752#issuecomment-3190955902 on sv-benchmarks with witness generation disabled and HTML generation enabled. They include the runs from there, but also "-jobs" variants of...