Nward/commands error handling
Objective
Fixes: #2004
Original PR: https://github.com/bevyengine/bevy/pull/2241 Thank you very much @NathanSWard for all your hard work! ⭐ I mostly just updated their code to work with the current version of bevy.
Also this is my first bevy PR, feedback is very much appreciated :)
Changelog
Added
- FallibleCommand - for commands that can potentially fail; it allows people to provide error handlers if an error occurs.
- CommandErrorHandling - for function-like types that can handle command errors.
- CommandErrorHandler - contains handlers for the typical ways of handling errors (sketched just below this list):
  - ignore
  - panic
  - log - this is the default behavior instead of panic. The error will be logged via error!.
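Conceptually, the built-in handlers behave roughly like this (a hand-written sketch for illustration, not the PR's actual implementation; the real handlers also receive a context argument, and log uses bevy's error! macro):

```rust
// Conceptual sketch only -- not the actual CommandErrorHandler implementation.
// `E` stands in for whatever error type a failed command produces.
fn ignore<E>(_error: E) {
    // swallow the error entirely
}

fn panic<E: std::fmt::Debug>(error: E) {
    // the old default behavior: abort loudly
    panic!("command failed: {error:?}");
}

fn log<E: std::fmt::Debug>(error: E) {
    // the new default behavior: report the failure and keep going
    // (the PR logs via `error!`; eprintln! just stands in here)
    eprintln!("command failed: {error:?}");
}
```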
Changed
If a command is fallible, its new return type is a Config type.
The Config type allows a user to optionally provide an error handler via on_err.
Example
```rust
#[derive(Component)]
struct Marker(u32);

#[derive(Resource)]
struct Food(u32);

fn system(mut commands: Commands) {
    commands
        .entity(e)
        .insert(Marker(32))
        .on_err(|error, ctx| { /* .. */ });

    commands
        .remove_resource::<Food>()
        .on_err(CommandErrorHandler::ignore);

    commands
        .entity(e)
        .despawn()
        .on_err(CommandErrorHandler::panic);

    // you can also ignore the error handling and just chain regular commands
    commands
        .entity(e)
        .insert(Marker(32))
        .insert(Marker(64));
}
```
Welcome, new contributor!
Please make sure you've read our contributing guide and we look forward to reviewing your pull request shortly ✨
The generated examples/README.md is out of sync with the example metadata in Cargo.toml or the example readme template. Please run cargo run -p build-templated-pages -- update examples to update it, and commit the file change.
This is anecdotal, but I've hit spurious panics in several Bevy Jam 4 entries and this could help remedy some of those cases (the rest would probably be handled by relations 🌈🤞), so this would be nice.
That being said, should the default behaviour change from panicking to logging? I think I'm in favour of that approach, but would it be possible to make that configurable for those that would like to default to panicking?
@SecretPocketCat There's an issue tracking commands needing to be infallible whenever possible here. General consensus seems to be that it's needed - just needs to be implemented.
@SecretPocketCat I actually took this issue because I was getting hit with that exact problem when developing my game for the Bevy Jam, and now again with my current project. On both occasions I tried to debug extensively but couldn't find where the multiple despawns happened, and it was irritating; it feels like you get no control over your program. I believe that crates should never panic and should give the user the ability to decide what to do with the error. After all, Rust already has great support for Result, so why not use it, right? :)
You raise a good point about making the default behavior configurable; as @dmyyy noted, it was already discussed in the issue and in the previous PR. I suggest maybe adding a feature flag that would allow changing the default behavior back to panic, what do you think?
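To make the feature-flag idea concrete, the compile-time switch could look roughly like the sketch below. This is purely hypothetical: neither the panic_on_command_error feature nor this helper exist in the PR; they only illustrate how a Cargo feature could flip the default back to panicking.

```rust
// Hypothetical sketch of a feature-gated default error handler.
// The feature name `panic_on_command_error` is made up for illustration.
fn handle_with_default(error: &dyn std::fmt::Debug) {
    if cfg!(feature = "panic_on_command_error") {
        // opt back into the old panicking default at compile time
        panic!("command failed: {error:?}");
    } else {
        // stands in for the `error!` logging done by `CommandErrorHandler::log`
        eprintln!("command failed: {error:?}");
    }
}
```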
I also see that the Needs-Benchmarking label has been added. @hymm, what do you think would be the best approach for starting to benchmark this solution?
@hymm what do you think would be the best approach for starting to benchmark this solution?
In the benches folder there are a bunch of benchmarks. We should probably adapt at least the commands/spawn tests to show perf both with and without the error handler. We might want to do all the benches in the "commands" folder, but I'm not sure if the other command types would show anything significantly different.
How to run benchmarks
- On this PR's commit, change to the benches folder and run cargo bench commands -- --save-baseline pr. The commands argument filters to just the benchmarks with "commands" in their name, and "pr" saves the results under that name.
- Change to the main branch commit that this PR is based off of and run cargo bench commands -- --save-baseline main.
- Use the critcmp crate to get some nice output by running critcmp pr main. Copy and paste that output into GitHub to share it.
If your system has a tendency to thermally throttle (e.g. on a laptop), you may need to do something like fixing your CPU clocks to the base clock speed and/or using eco mode to reduce noise.
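For reference, an adapted spawn bench using the new error handler might look roughly like the sketch below. This is only an illustration under assumptions: it mirrors the general shape of the existing command benches in the benches folder, the import path for CommandErrorHandler is guessed, and on_err/CommandErrorHandler are the API described by this PR.

```rust
// Hypothetical sketch of a criterion bench exercising the new error handler.
// Only approximately mirrors the existing spawn_commands bench.
use bevy_ecs::prelude::*; // assumed to also export the PR's CommandErrorHandler
use bevy_ecs::system::CommandQueue;
use criterion::{criterion_group, criterion_main, Criterion};

#[derive(Component)]
struct A(f32);

fn spawn_commands_error_handler(criterion: &mut Criterion) {
    let mut group = criterion.benchmark_group("spawn_commands_error_handler");
    for entity_count in [2_000u32, 4_000, 6_000, 8_000] {
        group.bench_function(format!("{entity_count}_entities"), |bencher| {
            let mut world = World::default();
            let mut queue = CommandQueue::default();
            bencher.iter(|| {
                let mut commands = Commands::new(&mut queue, &world);
                for _ in 0..entity_count {
                    // `.on_err(CommandErrorHandler::ignore)` is the API added by this PR;
                    // it opts this fallible insert out of the default `log` behavior.
                    commands
                        .spawn_empty()
                        .insert(A(1.0))
                        .on_err(CommandErrorHandler::ignore);
                }
                queue.apply(&mut world);
            });
        });
    }
    group.finish();
}

criterion_group!(benches, spawn_commands_error_handler);
criterion_main!(benches);
```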
You raise a good point about making the default behavior configurable, as @dmyyy noted it was already discussed in the issue and in the previous PR. I suggest maybe adding a feature flag that will allow to change the default behavior back to panic, what do you think?
Sounds reasonable, but maybe I was bringing up something that should just be a follow-up PR instead.
The generated examples/README.md is out of sync with the example metadata in Cargo.toml or the example readme template. Please run cargo run -p build-templated-pages -- update examples to update it, and commit the file change.
Thank you for the guide! I copied the spawn_commands bench in world/commands.rs and added non-default behavior to the error handling to see the performance.
I followed your steps, but the results seem a bit weird to me because in some cases it looks like the PR is faster. I might have done something wrong, though.
Please take a look; I would love your input :) Here are the final results:
group                                        main                       pr_modified
-----                                        ----                       -----------
empty_commands/0_entities                    1.10 5.6±1.22ns ? ?/sec    1.00 5.0±0.12ns ? ?/sec
fake_commands/2000_commands                  1.00 7.3±0.23µs ? ?/sec    1.01 7.3±0.18µs ? ?/sec
fake_commands/4000_commands                  1.00 14.6±0.32µs ? ?/sec   1.01 14.8±0.20µs ? ?/sec
fake_commands/6000_commands                  1.00 24.9±0.20µs ? ?/sec   1.00 24.9±0.18µs ? ?/sec
fake_commands/8000_commands                  1.03 34.5±3.25µs ? ?/sec   1.00 33.5±1.17µs ? ?/sec
insert_commands/insert                       1.09 490.7±26.30µs ? ?/sec 1.00 448.4±36.13µs ? ?/sec
insert_commands/insert_batch                 1.03 475.9±33.69µs ? ?/sec 1.00 464.0±29.48µs ? ?/sec
sized_commands_0_bytes/2000_commands         1.14 6.5±0.12µs ? ?/sec    1.00 5.7±0.10µs ? ?/sec
sized_commands_0_bytes/4000_commands         1.17 12.9±0.09µs ? ?/sec   1.00 11.0±0.13µs ? ?/sec
sized_commands_0_bytes/6000_commands         1.16 19.8±0.34µs ? ?/sec   1.00 17.0±0.05µs ? ?/sec
sized_commands_0_bytes/8000_commands         1.17 28.8±0.59µs ? ?/sec   1.00 24.7±0.24µs ? ?/sec
sized_commands_12_bytes/2000_commands        1.00 7.6±0.06µs ? ?/sec    1.11 8.4±0.04µs ? ?/sec
sized_commands_12_bytes/4000_commands        1.00 18.5±0.36µs ? ?/sec   1.10 20.3±0.44µs ? ?/sec
sized_commands_12_bytes/6000_commands        1.00 29.9±3.85µs ? ?/sec   1.02 30.4±0.07µs ? ?/sec
sized_commands_12_bytes/8000_commands        1.00 37.0±0.09µs ? ?/sec   1.10 40.7±0.70µs ? ?/sec
sized_commands_512_bytes/2000_commands       1.00 91.1±0.44µs ? ?/sec   1.01 91.9±2.05µs ? ?/sec
sized_commands_512_bytes/4000_commands       1.00 198.6±0.89µs ? ?/sec  1.00 199.4±9.63µs ? ?/sec
sized_commands_512_bytes/6000_commands       1.03 310.4±22.62µs ? ?/sec 1.00 302.1±0.47µs ? ?/sec
sized_commands_512_bytes/8000_commands       1.00 406.1±0.81µs ? ?/sec  1.00 407.3±13.93µs ? ?/sec
spawn_commands/2000_entities                 1.00 192.1±19.55µs ? ?/sec 1.00 192.1±16.88µs ? ?/sec
spawn_commands/4000_entities                 1.00 384.2±39.14µs ? ?/sec 1.00 385.6±35.69µs ? ?/sec
spawn_commands/6000_entities                 1.02 566.3±81.25µs ? ?/sec 1.00 553.4±40.99µs ? ?/sec
spawn_commands/8000_entities                 1.20 887.6±159.82µs ? ?/sec 1.00 739.4±68.21µs ? ?/sec
spawn_commands_error_handler/2000_entities                              1.00 186.2±15.60µs ? ?/sec
spawn_commands_error_handler/4000_entities                              1.00 376.0±34.89µs ? ?/sec
spawn_commands_error_handler/6000_entities                              1.00 561.1±41.42µs ? ?/sec
spawn_commands_error_handler/8000_entities                              1.00 749.7±66.48µs ? ?/sec
The data looks weird in GitHub, so here is also a screenshot for a clearer view: https://imgur.com/a/scLPhrc (the GitHub screenshot embed didn't work)
could you commit the spawn_commands_error_handler benches? I can try to run them.
Sorry, I thought I committed those changes. Added them now. Thank you for the support :)
I ran the tests locally a few times (I did cargo clean), and everything is green. Is it possible the CI has a cache problem?
For some reason there's some difference between the errors in CI and on some platforms. You should just use the error messages that CI produces and accept that your local compile-fail tests are going to fail.
Good to know! I reverted the change and the CI seems to run fine now
@hymm Did you get a chance to run the benchmark yourself?
Did you get a chance to run the benchmark yourself?
Going to try and do it today. I ran them before without the new benches and was seeing a regression on fake commands. I'll post my results in a bit.
Looking forward to seeing the results, thank you! ⭐
If there are any changes, they're pretty much in the noise. I ran them on main and this PR 3 times each. Depending on the run, one could be faster than the other, but all the runs were in the same range. Good to see that making the error handler configurable didn't slow things down.
group main-1 pr-1
----- ------ ----
empty_commands/0_entities 1.08 3.7±0.27ns ? ?/sec 1.00 3.4±0.07ns ? ?/sec
fake_commands/2000_commands 1.03 6.0±0.10µs ? ?/sec 1.00 5.8±0.22µs ? ?/sec
fake_commands/4000_commands 1.04 12.0±0.11µs ? ?/sec 1.00 11.5±0.28µs ? ?/sec
fake_commands/6000_commands 1.03 18.0±0.18µs ? ?/sec 1.00 17.5±0.78µs ? ?/sec
fake_commands/8000_commands 1.04 24.0±0.32µs ? ?/sec 1.00 23.1±0.32µs ? ?/sec
insert_commands/insert 1.00 259.7±17.72µs ? ?/sec 1.02 263.8±16.03µs ? ?/sec
insert_commands/insert_batch 1.00 234.8±19.76µs ? ?/sec 1.02 239.5±21.89µs ? ?/sec
sized_commands_0_bytes/2000_commands 1.07 3.7±0.15µs ? ?/sec 1.00 3.5±0.28µs ? ?/sec
sized_commands_0_bytes/4000_commands 1.00 7.5±0.19µs ? ?/sec 1.01 7.5±1.63µs ? ?/sec
sized_commands_0_bytes/6000_commands 1.09 11.1±0.11µs ? ?/sec 1.00 10.2±0.55µs ? ?/sec
sized_commands_0_bytes/8000_commands 1.09 15.0±0.84µs ? ?/sec 1.00 13.8±1.16µs ? ?/sec
sized_commands_12_bytes/2000_commands 1.13 5.3±1.23µs ? ?/sec 1.00 4.7±0.21µs ? ?/sec
sized_commands_12_bytes/4000_commands 1.00 9.3±0.14µs ? ?/sec 1.03 9.6±0.52µs ? ?/sec
sized_commands_12_bytes/6000_commands 1.00 13.9±0.32µs ? ?/sec 1.01 14.0±0.21µs ? ?/sec
sized_commands_12_bytes/8000_commands 1.00 18.7±0.22µs ? ?/sec 1.02 19.0±0.49µs ? ?/sec
sized_commands_512_bytes/2000_commands 1.00 47.6±1.89µs ? ?/sec 1.09 51.7±3.07µs ? ?/sec
sized_commands_512_bytes/4000_commands 1.00 97.6±8.97µs ? ?/sec 1.06 103.1±9.70µs ? ?/sec
sized_commands_512_bytes/6000_commands 1.00 150.2±23.74µs ? ?/sec 1.06 158.8±23.11µs ? ?/sec
sized_commands_512_bytes/8000_commands 1.00 201.8±30.58µs ? ?/sec 1.05 212.5±36.55µs ? ?/sec
spawn_commands/2000_entities 1.00 151.7±11.19µs ? ?/sec 1.01 153.9±17.37µs ? ?/sec
spawn_commands/4000_entities 1.00 294.7±25.83µs ? ?/sec 1.04 306.4±34.32µs ? ?/sec
spawn_commands/6000_entities 1.00 434.7±29.80µs ? ?/sec 1.05 454.3±30.51µs ? ?/sec
spawn_commands/8000_entities 1.01 606.8±85.29µs ? ?/sec 1.00 603.1±37.72µs ? ?/sec
spawn_commands_error_handler/2000_entities 1.00 145.5±13.30µs ? ?/sec
spawn_commands_error_handler/4000_entities 1.00 326.6±65.51µs ? ?/sec
spawn_commands_error_handler/6000_entities 1.00 446.8±52.50µs ? ?/sec
spawn_commands_error_handler/8000_entities 1.00 603.7±37.74µs ? ?/sec
Yes! I'm very happy to see that, thank you for the thorough benchmarking :) How can we proceed for this PR to be reviewed and merged?
How can we proceed for this PR to be reviewed and merged?
Needs 2 approvals from ECS SMEs, since this is controversial. Community approval can be helpful too. I'm not an ECS SME, but I'll try to review sometime in the next couple of weeks.
Oh, good to know. I look forward to your review :)
Sorry for not noticing this! I will get to this in the 0.15 cycle: this is important and valuable work that I think we can build on. It's not a good last-minute addition though, so I'm cutting it from the 0.14 milestone.