add an uninitialized memory access fixme test
Description of changes:
Added tests/kani/UninitializedMemory/fixme_main.rs
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 and MIT licenses.
Not sure why the CI fails while it passes on my machine :(
Since the test has `fixme` in its name, `compiletest` expects this test to fail, and if it doesn't, regressions will fail (which is why the `kani-fixme` suite failed).
This should fail to verify, right? Then you can add `// kani-verify-fail` after the copyright header to indicate that.
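To illustrate, a test file using that marker might be laid out like this. This is a hypothetical sketch, not the actual contents of `fixme_main.rs`: the header style, the harness body, and the use of `MaybeUninit` as the uninitialized-memory pattern are all assumptions, and it only builds under the Kani toolchain (the `kani` crate and `#[kani::proof]` are not plain Rust).

```rust
// Copyright Kani Contributors
// SPDX-License-Identifier: Apache-2.0 OR MIT
// kani-verify-fail

//! Sketch: reading uninitialized memory is UB, so once Kani detects this
//! class of bug, verification of this harness should fail.

use std::mem::MaybeUninit;

#[kani::proof]
fn main() {
    let x: MaybeUninit<u8> = MaybeUninit::uninit();
    // UB: reading uninitialized memory; Kani is expected to flag this.
    let _val = unsafe { x.assume_init() };
}
```

The `// kani-verify-fail` line after the copyright header is what tells `compiletest` that verification of this file is expected to fail.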
It's a fixme because this test passes but shouldn't.
On my local machine, I first added `// kani-verify-fail`, and the suite would fail saying "Expected to fail but it didn't". So I removed the `// kani-verify-fail`, and now it works locally, but it fails in the CI 🤔
Aaaaaah, my bad... It fails with or without `kani-verify-fail`. It's just that the second time I ran it, the test was ignored. It fails in both cases when I use the `--force-rerun` flag.
I'll add an `assert!(false)` to force it to fail.
Is this really a fixme test though? This feels more like a test for a feature request.
Is there a way to write such a test? It's UB that is not yet detected by Kani, and that detection is in the works. Maybe you're right that this PR should be closed.
The problem is that `compiletest` provides a mechanism for marking tests that are failing but should pass, but AFAIK it doesn't provide a mechanism for marking tests that are passing but should fail. The question is: what should we do for those tests? If the file is named with a "fixme", regressions will fail since the test is passing. I think that's why @giltho added the `assert!(false)`, so that regressions pass.
It seems most intuitive to name those tests with "fixme" so that it's clear that they're not working as expected. Perhaps we can add a new designation other than "fixme" for those tests?
`fixme_fail`?
> The problem is that `compiletest` provides a mechanism for marking tests that are failing but should pass, but AFAIK it doesn't provide a mechanism for marking tests that are passing but should fail.
I think `compiletest` can do the latter if:
- the assert is removed
- `// kani-verify-fail` is added
- `fixme` is added to the name

Then run the regression script, not `kani` or `cargo-kani`. `// kani-verify-fail` only has an effect in `compiletest`.
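For reference, the regression suite is driven by a script checked into the Kani repo; the exact name below is an assumption about the repo layout at the time (check the `scripts/` directory for the current entry point), not something confirmed in this thread.

```
# Run from the Kani repository root; the script name is an assumption
# based on the repo's scripts/ directory.
./scripts/kani-regression.sh
```

Running the script (rather than `kani`/`cargo-kani` directly) matters because only the `compiletest`-based suites interpret the `// kani-verify-fail` directive.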
@adpaco-aws that's exactly what I did at first and the test appeared as "FAILED"...
Oh god... I just realised what happened. I kept alternating between modifying the `kani-fixme/...` file and the `kani/...` file, when only the second one should be modified. I ended up getting inconsistent results and didn't understand why 🤦 My bad, it's fixed now.
Doesn't solve @celinval's question: should this be added to the repo?
Decision came down against fixme for this, but recording as part of the feature request issue: #1666