ci: Enable standard library assertions in debug builds
This adds assertions that check the preconditions of calls to standard library functions.
Maybe we should put this behind an option and enable it explicitly in the CI? That way we wouldn't silently modify local Debug builds. We could also reuse the ACTS_FORCE_ASSERTIONS flag.
I'm definitely uncomfortable with adding this to RelWithDebInfo.
Hmm, I also added it for clang... but actually, I do not check which stdlib is used. Maybe we should add both macros independent of `CMAKE_CXX_COMPILER_ID`.
Regarding the option: do you mean to avoid rebuilds after this PR?
@benjaminhuth I don't think we should unexpectedly enable any assertions outside of the CI, even in Debug builds. But if you gate this with ACTS_FORCE_ASSERTIONS, which we turn on in the CI, and don't make it depend on the build type, we could still get the assertions there.
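A minimal sketch of what that gating could look like in `cmake/ActsCompilerOptions.cmake` — variable handling beyond `ACTS_FORCE_ASSERTIONS` is an assumption, not the actual patch:

```cmake
# Sketch only: enable stdlib precondition checks whenever assertions are
# forced, independent of CMAKE_BUILD_TYPE. CI sets ACTS_FORCE_ASSERTIONS=ON,
# so it gets the checks without silently changing local Debug builds.
if(ACTS_FORCE_ASSERTIONS)
  # libstdc++ and libc++ macros; defining both should be harmless since
  # each standard library ignores the other's macro.
  set(cxx_flags "${cxx_flags} -D_GLIBCXX_ASSERTIONS -D_LIBCPP_DEBUG")
endif()
```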
Okay, yeah, that makes sense. ACTS_FORCE_ASSERTIONS seems like a good place to do this that I hadn't thought of.
But just out of interest @paulgessinger: to me every standard library assertion failure would be a violated precondition, so what would be the issue with enabling them for RelWithDebInfo?
The GCC documentation even recommends them for production builds, actually (https://gcc.gnu.org/wiki/LibstdcxxDebugMode)
📊: Physics performance monitoring for 6033b24f13a5b459c6accaffe35cff06c152048d
physmon summary
- ✅ Particles fatras
- ✅ Particles geant4
- ✅ Particles ttbar
- ✅ Vertices ttbar
- ✅ Truth tracking (KF)
- ✅ Truth tracking (GSF)
- ✅ Truth tracking (GX2F)
- ✅ Truth tracking (KF refit)
- ✅ Truth tracking (GSF refit)
- ✅ CKF finding performance | trackfinding | single muon | truth smeared seeding
- ✅ CKF fitting performance | trackfinding | single muon | truth smeared seeding
- ✅ CKF track summary | trackfinding | single muon | truth smeared seeding
- ✅ Seeding trackfinding | single muon | truth estimated seeding
- ✅ CKF finding performance | trackfinding | single muon | truth estimated seeding
- ✅ CKF fitting performance | trackfinding | single muon | truth estimated seeding
- ✅ CKF track summary | trackfinding | single muon | truth estimated seeding
- ✅ Seeding trackfinding | single muon | default seeding
- ✅ CKF finding performance | trackfinding | single muon | default seeding
- ✅ CKF fitting performance | trackfinding | single muon | default seeding
- ✅ CKF track summary | trackfinding | single muon | default seeding
- ✅ Seeding trackfinding | single muon | orthogonal seeding
- ✅ CKF finding performance | trackfinding | single muon | orthogonal seeding
- ✅ CKF fitting performance | trackfinding | single muon | orthogonal seeding
- ✅ CKF track summary | trackfinding | single muon | orthogonal seeding
- ✅ Seeding trackfinding | 4 muon x 50 vertices | default seeding
- ✅ CKF finding performance | trackfinding | 4 muon x 50 vertices | default seeding
- ✅ CKF fitting performance | trackfinding | 4 muon x 50 vertices | default seeding
- ✅ CKF track summary | trackfinding | 4 muon x 50 vertices | default seeding
- ✅ Ambisolver finding performance | trackfinding | 4 muon x 50 vertices | default seeding
- ✅ IVF notime | trackfinding | 4 muon x 50 vertices | default seeding
- ✅ AMVF gauss notime | trackfinding | 4 muon x 50 vertices | default seeding
- ✅ AMVF grid time | trackfinding | 4 muon x 50 vertices | default seeding
- ✅ Seeding trackfinding | ttbar with 200 pileup | default seeding
- ✅ CKF finding performance | trackfinding | ttbar with 200 pileup | default seeding
- ✅ CKF fitting performance | trackfinding | ttbar with 200 pileup | default seeding
- ✅ CKF track summary | trackfinding | ttbar with 200 pileup | default seeding
- ✅ Ambisolver finding performance | trackfinding | ttbar with 200 pileup | default seeding
- ✅ ML Ambisolver | trackfinding | ttbar with 200 pileup | default seeding
- ✅ AMVF gauss notime | trackfinding | ttbar with 200 pileup | default seeding
- ✅ AMVF grid time | trackfinding | ttbar with 200 pileup | default seeding
@benjaminhuth I don't see it being recommended, just that it's suitable to run in production. It's a HUGE change to enable e.g. bounds checking quietly. ATLAS builds with RelWithDebInfo, so we'd force any compilation unit compiled in Athena that links with ACTS to have these assertions on. That'd be super disruptive.
Yeah, I agree that it's a huge change, but it would be a good change in my opinion. But yeah, I wasn't aware that by default ACTS is built in RelWithDebInfo for production builds in ATLAS. That's of course a different story then...
> Yeah, I agree that it's a huge change, but it would be a good change in my opinion.
Sure, but there's no way we can decide this as a library. We can test with these assertions on (which is a very good idea imo), and experiments can then decide if they want them, but there's no way we can force this.
Also, this does not come for free, right? I guess the standard library functions will do extra checks which are otherwise skipped. So it is a bit like our own assertions, although ours might be more expensive.
@andiwand I believe these are reasonably light checks, like bounds checks or smart pointer nullptr checks, which a reasonable branch predictor will probably get right most of the time (at least that's the justification for this in Rust land). So @benjaminhuth is probably right that they're not terrible for performance and are likely safe for production use.
Seems like the detray conversion code does some out-of-bounds access, which these flags apparently allow the compiler to catch at compile time.
```
/usr/include/c++/13/bits/stl_algobase.h:437:30: error: 'void* __builtin_memcpy(void*, const void*, long unsigned int)' forming offset [16, 17] is out of the bounds [0, 16] of object '<anonymous>' with type 'const double [2]' [-Werror=array-bounds=]
  437 |             __builtin_memmove(__result, __first, sizeof(_Tp) * _Num);
      |             ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
/__w/acts/acts/Plugins/Json/src/DetrayJsonHelper.cpp: In function 'std::tuple<unsigned int, std::vector<double, std::allocator<double> > > Acts::DetrayJsonHelper::maskFromBounds(const Acts::SurfaceBounds&, bool)':
/__w/acts/acts/Plugins/Json/src/DetrayJsonHelper.cpp:46:47: note: '<anonymous>' declared here
   46 |         boundaries = {bValues[0u], bValues[1u]};
      |                                               ^
cc1plus: all warnings being treated as errors
```
I'll look into this later/next week.
> @andiwand I believe these are reasonably light checks, like bounds checks or smart pointer nullptr checks, which a reasonable branch predictor will probably get right most of the time (at least that's the justification for this in Rust land). So @benjaminhuth is probably right that they're not terrible for performance and are likely safe for production use.
Right, but ultimately this is something that should be measured before being deployed anywhere.
@andiwand And despite all of this, it should be off-by-default.
Walkthrough
Conditional logic for compiler flags, added it is. In cmake/ActsCompilerOptions.cmake, if ACTS_FORCE_ASSERTIONS defined, -D_GLIBCXX_ASSERTIONS and -D_LIBCPP_DEBUG flags appended to cxx_flags. Existing logic for compiler types, intact it remains. Modifications to angle calculations in Core/include/Acts/EventData/detail/GenerateParameters.hpp, correct they are. Preprocessor directives in Plugins/Json/src/DetrayJsonHelper.cpp, introduced to undefine assertions. No changes to exported entities, there are.
Changes
| File | Change Summary |
|---|---|
| cmake/ActsCompilerOptions.cmake | Added conditional logic for compiler flags based on ACTS_FORCE_ASSERTIONS variable. Flags -D_GLIBCXX_ASSERTIONS and -D_LIBCPP_DEBUG added to cxx_flags. Existing compiler checks preserved. |
| Core/include/Acts/EventData/detail/GenerateParameters.hpp | Modified generateBoundDirection function to correct angle calculations. Adjusted cosThetaMin, cosThetaMax, etaMin, and etaMax derivations for proper bounds. Minor comment clarifications included. |
| Plugins/Json/src/DetrayJsonHelper.cpp | Introduced preprocessor directives to undefine _GLIBCXX_ASSERTIONS and _LIBCPP_DEBUG. Core functionality of maskFromBounds remains unchanged, with improved handling of SurfaceBounds types. |
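The opt-out the summary describes might look like the following sketch (this is not the actual patch). The key constraint is that the macros must be undefined before the first standard library header is included, since the headers configure their checks at inclusion time:

```cpp
// Sketch of a per-translation-unit opt-out from forced stdlib assertions,
// as described for Plugins/Json/src/DetrayJsonHelper.cpp. Must appear
// before any standard library #include to have any effect.
#ifdef _GLIBCXX_ASSERTIONS
#undef _GLIBCXX_ASSERTIONS
#endif
#ifdef _LIBCPP_DEBUG
#undef _LIBCPP_DEBUG
#endif

#include <vector>  // included only after the opt-out above
```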
Possibly related PRs
- #3788: The changes in this PR involve modifications to the logic of angle conversions between eta and theta, which may relate to the conditional compiler flags for assertions in the main PR, as both involve enhancing the robustness of calculations and configurations in the codebase.
Suggested labels
automerge, coderabbit
Suggested reviewers
- paulgessinger
Poem
In the land of code, new flags arise,
Assertions now dance, under clear skies.
With logic so clever, conditions align,
A compiler's delight, oh how it does shine!
Yoda approves, for changes are wise,
In the realm of CMake, new power lies! 🌟
Quality Gate passed
Issues
0 New issues
0 Accepted issues
Measures
0 Security Hotspots
66.7% Coverage on New Code
0.0% Duplication on New Code