fail_point! does nothing unless a FailScenario exists
Perhaps I'm doing something wrong, but I have code that looks very similar to the examples, and I can't get it to panic or otherwise respond to failpoints in the environment:
The full code is in https://github.com/sourcefrog/fail-repro
`main.rs` is:

```rust
use fail::fail_point;

fn main() {
    println!("Has failpoints: {}", fail::has_failpoints());
    println!(
        "FAILPOINTS is {:?}",
        std::env::var("FAILPOINTS").unwrap_or_default()
    );
    fail_point!("main");
    println!("Failpoint passed");
}
```
When I run this:
```console
$ FAILPOINTS=main=panic cargo +1.61 r --features fail/failpoints
    Updating crates.io index
...
     Running `target/debug/fail-repro`
Has failpoints: true
FAILPOINTS is "main=panic"
Failpoint passed
$ FAILPOINTS=main=print cargo +1.61 r --features fail/failpoints
    Finished dev [unoptimized + debuginfo] target(s) in 0.01s
     Running `target/debug/fail-repro`
Has failpoints: true
FAILPOINTS is "main=print"
Failpoint passed
```
In case this was broken by a later Cargo change, I tried it on both 1.76 and 1.63 and they both show the same behavior.
This is on x86_64 Linux.
Looking at the code, I think the problem is that the environment variable is only read by the `FailScenario` constructor, so binaries that just use `fail_point!` directly will never fail.
I would have expected that you could just embed `fail_point!` and then exercise it interactively, or from tests that invoke the binary, by setting `$FAILPOINTS`; at least, the docs somewhat give that impression.
Maybe the macro could lazily load the environment variable? I had the impression that `FailScenario` was a convenience for test code.
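A minimal standard-library-only sketch of that lazy-loading idea. The `parse_failpoints` helper and `action_for` are hypothetical names, and the `name=action;name2=action2` parsing is only my rough reading of what `FailScenario` does with `$FAILPOINTS`; it also uses `std::sync::OnceLock`, which is newer (Rust 1.70) than the toolchains above, but the shape is the same with `lazy_static` or `once_cell`:

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

// Hypothetical sketch: instead of reading FAILPOINTS only in the
// FailScenario constructor, the macro's support code could parse the
// variable lazily, on the first hit of any failpoint.
static ACTIONS: OnceLock<HashMap<String, String>> = OnceLock::new();

// Parse a "name=action;name2=action2" spec into a name -> action map.
fn parse_failpoints(spec: &str) -> HashMap<String, String> {
    spec.split(';')
        .filter_map(|pair| {
            let (name, action) = pair.split_once('=')?;
            Some((name.to_string(), action.to_string()))
        })
        .collect()
}

// What fail_point!("name") could consult, with no setup call required.
fn action_for(name: &str) -> Option<String> {
    let map = ACTIONS
        .get_or_init(|| parse_failpoints(&std::env::var("FAILPOINTS").unwrap_or_default()));
    map.get(name).cloned()
}

fn main() {
    let map = parse_failpoints("main=panic;io=return(5)");
    assert_eq!(map.get("main").map(String::as_str), Some("panic"));
    assert_eq!(map.get("io").map(String::as_str), Some("return(5)"));
    println!("action for \"main\" from env: {:?}", action_for("main"));
}
```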
At least, maybe this could be clearer in the docs. But perhaps the division of responsibility between the macro and `FailScenario` could be reconsidered...
The reason I hit this is that I want to exercise failpoints in code run as a subprocess of a test. I thought setting the variable would be enough, but it seems the code under test needs to call `FailScenario::setup` from its `main` function for failpoints to have any effect. With that added, my reproduction case passes.
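For reference, a sketch of the fix, assuming the documented `FailScenario::setup`/`teardown` API (with `FAILPOINTS=main=panic` this now panics at the failpoint, before reaching `teardown`):

```rust
use fail::fail_point;

fn main() {
    // Reading $FAILPOINTS happens here, in FailScenario::setup(); without
    // this call the fail_point! below is inert even with the
    // fail/failpoints feature enabled.
    let scenario = fail::FailScenario::setup();
    fail_point!("main");
    println!("Failpoint passed");
    scenario.teardown();
}
```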
It seems like it would be better to separate two concerns:
- Running a test that uses failpoints, holding a mutex to prevent any other changes to global failpoint configuration, and allowing the test to reconfigure failpoints.
- Preparing code-under-test to respond to failpoints set in the environment or by tests: ideally this would be implicit in just using `fail_point!`.
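To make the subprocess case concrete, here is a stdlib-only sketch of the test side: spawn the code under test with `FAILPOINTS` set in its environment. `sh -c 'echo "$FAILPOINTS"'` is a stand-in for the real binary (which, as described above, currently also has to call `FailScenario::setup` on its side), and the helper name is mine:

```rust
use std::process::Command;

// Unix-only sketch: run a child process with FAILPOINTS set and capture
// its stdout. `sh` just echoes the variable back so we can see what the
// child's environment contained; a real test would run the binary under
// test and assert on its failure behavior instead.
fn run_with_failpoints(spec: &str) -> String {
    let output = Command::new("sh")
        .arg("-c")
        .arg("echo \"$FAILPOINTS\"")
        .env("FAILPOINTS", spec)
        .output()
        .expect("failed to run subprocess");
    String::from_utf8_lossy(&output.stdout).trim().to_string()
}

fn main() {
    // The child observes the failpoint spec we configured for it.
    assert_eq!(run_with_failpoints("main=panic"), "main=panic");
    println!("child saw FAILPOINTS");
}
```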