support C++20 Modules
This PR implements support for C++20 Modules in Bazel.
The design doc: https://github.com/bazelbuild/proposals/pull/354
The discussion: https://github.com/bazelbuild/bazel/discussions/19939
The demo: https://github.com/PikachuHyA/async_simple
The extra tests: https://github.com/PikachuHyA/bazel_cxx20_module_test
See https://github.com/bazelbuild/bazel/issues/4005
This PR already got a lot of attention at Google in the group of C++ toolchain maintainers / experts. There’s a desire to have it, but no concrete/incompatible plans yet. The design would need some changes so that it’s compatible and supports Google well. (Think of easier maintenance in the future)
I’m not an expert in C++, but I will start the discussion internally and come back with possible requirements/changes when we figure out what they are.
Some people are out of office. The main discussion will start second week of November. I’ll post next update after that.
I rebased the PR onto the latest master branch due to a MODULE.bazel.lock conflict.
Some people are out of office. The main discussion will start second week of November. I’ll post next update after that.
gentle ping :-)
CMake developer here; just tracking how modules are being implemented in various places :) .
I read through the design doc and had a few comments. Since it was already merged, I figured that here may be better; can move wherever is best though.
- Two-phase compilation is only supported by Clang. With the work ongoing to make smaller BMIs, a `.pcm → .o` rule may not be so feasible in the future. There may be a way to do `.full.pcm → .importable.pcm` / `.full.pcm → .o` though? In any case, this is something the build system can hide away from the user interface pretty easily.
- I notice that `references` are not tracked in `.CXXModules.json` files. This was found to be necessary in CMake for MSVC, where BMIs contain no transitive references to the BMIs they need. GCC still embeds them; Clang is deprecating it. This helps the reproducibility case, but it means that the build system needs to track transitive imports to specify in the `.modmap` files when using modules. As an example, if the module imports look like `leaf → intermediate → impl → detail`, the P1689 data is only going to report one level at each scan (i.e., `intermediate`'s `.ddi` file won't specify `detail` unless directly imported). The `.CXXModules.json` must somehow store "I see an import of `intermediate`; `impl` and `detail` need to be specified as well" (a minimal sketch of such a chain follows this list).
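To make that concrete, here is a minimal sketch of the `leaf → intermediate → impl → detail` chain; all module and file names are invented for this illustration:

```cpp
// detail.cppm
export module detail;
export int detail_value() { return 1; }

// impl.cppm
export module impl;
import detail;                        // impl's BMI is built against detail's BMI
export int impl_value() { return detail_value(); }

// intermediate.cppm
export module intermediate;
import impl;                          // a per-TU scan of intermediate only reports impl
export int intermediate_value() { return impl_value(); }

// leaf.cpp
import intermediate;                  // on toolchains whose BMIs do not embed transitive
                                      // references, compiling leaf.cpp still needs impl's
                                      // and detail's BMIs, so the build system has to
                                      // track and supply them itself
int main() { return intermediate_value(); }
```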
a `.pcm → .o` rule may not be so feasible in the future.
No, Clang doesn't have such plans (deprecating the 2-phase compilation model), at least for now.
Two-phase compilation is only supported by Clang.
Yes, but the story of the 2-phase compilation model seems really appealing. So support for the 2-phase compilation model in a build system may be a positive advantage. And in the future, build systems may be able to support both (or even more) compilation models, and users can make the choice.
No, Clang doesn't have such plans (deprecating the 2-phase compilation model), at least for now.
To be more precise, there may be multiple kinds of BMIs in the future, and Clang may gain a 3-phase model with the trimmed BMI being the "interesting" bit for importers, while still using the full BMI for codegen. Clang is also getting a proper 1-phase compilation like GCC and MSVC (rather than today's "frontend does the 2-phase internally").
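For readers following along, a rough sketch of the two models with Clang on a made-up module unit; the flag spellings reflect current Clang and may differ between versions:

```cpp
// hello.cppm -- a minimal named module interface unit, for illustration only.
//
// Two-phase compilation (Clang):
//   clang++ -std=c++20 --precompile hello.cppm -o hello.pcm   # phase 1: emit the BMI
//   clang++ -std=c++20 -c hello.pcm -o hello.o                # phase 2: codegen from the BMI
//
// One-phase compilation (the GCC/MSVC model; newer Clang via -fmodule-output):
//   clang++ -std=c++20 -fmodule-output -c hello.cppm -o hello.o   # BMI emitted as a by-product
export module hello;

export int answer() { return 42; }
```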
Yes, but the story of the 2-phase compilation model seems really appealing.
I agree. However, I prioritized 1-phase over 2-phase for CMake due to compiler support.
And in the future, the build systems may be able to support both (or even more) compilation models and the users can make the choice.
Agreed. However, given the simplicity of the 1-phase, I find it better for the initial implementation. There are a number of performance things that can be looked at in the future:
- only-if-changed on more minimal BMI files
- target-wide batch scanning
- grouped target-wide batch scanning
- grouped target-wide batch collation
Basically my main interest is in getting things working across the ecosystem as a baseline before we start up our ricer cars. Of course, Bazel can do as they please; I can only offer my view on things here.
clang-scan-deps issue that may be of interest depending on whether the issue affects bazel or not: llvm/llvm-project#72875
This issue was filed against CMake. Unconditional redirection of clang-scan-deps may be unwise in the case of a failed scan. Maybe Bazel doesn't care given its execution strategies, but it is something to consider at least.
Some people are out of office. The main discussion will start second week of November. I’ll post next update after that.
gentle ping :-)
This PR already got a lot of attention at Google in the group of C++ toolchain maintainers / experts. There’s a desire to have it, but no concrete/incompatible plans yet. The design would need some changes so that it’s compatible and supports Google well. (Think of easier maintenance in the future)
I’m not an expert in C++, but I will start the discussion internally and come back with possible requirements/changes when we figure out what they are.
gentle ping :-)
@comius hi, what's the fate of the patch?
Google would like to implement support for C++20 modules in Bazel and deploy them internally; however, our timeline for this is about 2 years out. Trying this out at scale is a large project we can't do soon - it requires lots of prerequisites.
The difficulty in accepting this PR is that the current design might differ from the final one we will eventually land on. We're not even clear what new attributes will be added to cc_binary, what their semantics are, and changing those later would cause problems for the Bazel community.
Merging it when our design concerns are met might be an option. However, there’s still a potential cost of fixing things we missed. Waiting for a battle-tested implementation might be more beneficial for the community as well.
The biggest worries on our side are scalability and code style implications in very large repositories. It is something that is very expensive to fix as an afterthought, but at the same time not something we can prioritize now, because we decided to not use C++20 Modules ourselves in the next few years.
Things that are the most problematic are:
- There is a caveat with module naming. We need to consider whether we want to encourage/support a 1:1 correspondence between module names and `cc_library` target names. That would simplify the model for users and tools (I suspect also the implementation, but that may be less important).
- The scanning will eventually need to happen completely in parallel, which we believe is possible, however not trivial. Using `clang-scan-deps` might be problematic in this context. Having an implementation in Java would be easier to work with.
Because of this, we are not ready to merge this PR now.
There is a caveat with module naming. We need to consider whether we want to encourage/support 1:1 correspondence between module names and cc_library target names. That would simplify the model for users and tools (I suspect also the implementation, but that may be less important)
This may cause problems when wrapping libraries that don't use Bazel natively (which, as I understand it, is a highly favored practice in Bazel-land). There's also been discussion of testing code (in its own binaries) using `module from_library;` to be able to access module-local symbols for testing purposes.
The scanning will eventually need to happen completely in parallel, which we believe is possible, however not trivial. Using clang-scan-deps might be problematic in this context. Having an implementation in Java would be easier to work with.
You mean file-by-file parallel? clang-scan-deps can probably do that internally if things are amenable to it. CMake does it by doing a scan per TU rather than any batching. However, clang-scan-deps is probably on a time limit, as embedding it into the compiler is (in my current understanding) the longer-term plan.
Waiting for a battle-tested implementation might be more beneficial for the community as well.
I find it hubris that Google thinks it is the only one that can provide a "battle-tested implementation". If Google is worried about the community heading down a path that doesn't suit it in the long run, at least don't hold up community progress. CMake's experimental gate might be suitable here:
- if a variable is set to a special value (we use UUIDs), enable access to additional behaviors
- warn (once) when any additional behavior is used
- anything gated by the experimental feature has an explicit "may change in the future" behavior contract
- any change in the behavior of the feature recycles the special value
This would allow for community experimentation and experience without committing to specific interfaces or behaviors and tying Google's hands whenever it gets around to it.
Modules are currently progressing, but are at a point where, IMO, community experience and feedback are required. We're at a point where we can start actually experimenting with best practices of structuring modules, determining how much information a BMI can/should provide, detecting bugs in compilers, and more. Leaving Bazel users in the wind waiting for Google to do its internal machinations and finally codedrop something years from now seems…counterproductive. I'd really like to have input from all kinds of projects and users into community experience instead of parts of it armchair theorizing because their tools don't actually support modules yet.
The difficulty in accepting this PR is that the current design might differ from the final one we will eventually land on. We're not even clear what new attributes will be added to cc_binary, what their semantics are, and changing those later would cause problems for the Bazel community.
The great thing about code is that it can be changed.
Bazel didn't pop out fully formed. Things have been changed, deprecated, reimplemented in different ways. toolchains comes to mind.
How is this any different?
Merging it when our design concerns are met might be an option.
I like this option. Allow the community to develop this sooner than Google's timeline, but let us address your design concerns.
Google would like to implement support for C++20 modules in Bazel and deploy them internally; however, our timeline for this is about 2 years out. ... but at the same time not something we can prioritize now, because we decided to not use C++20 Modules ourselves in the next few years.
I am deeply concerned about the two-year wait you've specified. This delay reflects an internal decision-making process that drastically impedes the progress of the Bazel user community. The entire Bazel community is being forced to align with your internal timeline, with their current needs being disregarded. This inevitably ties the hands of the community, curtailing experimentation and the adoption of new technologies in a timely manner. The motivation for this patch is that the missing C++20 Modules support in Bazel blocks our internal use.
Scalability and code style are indeed very important factors. That's also the reason we need code reviews. As mathstuf mentioned, it's not only Google that could achieve a battle-tested implementation. The open-source world thrives on responsiveness and iterative improvements. In the community, we should also be able to obtain an implementation that is scalable and well-designed through rigorous review processes. Tools and features improve through use, feedback, and refinements—not through prolonged periods of inaction. The community is more than capable of handling incremental changes and can be entrusted to do so.
The scanning will eventually need to happen completely in parallel, which we believe is possible, however not trivial. Using clang-scan-deps might be problematic in this context. Having an implementation in Java would be easier to work with.
Having an implementation in Java will require implementing a preprocessor. It may not be trivial. It shouldn't be bad to use compiler-native scanners. Also, it won't be a blocking issue if we want to implement a Java scanner some day; the scanner should be replaceable by its nature. For example, there already exist two implementations for collecting header file dependencies: one Java-native (see https://github.com/bazelbuild/bazel/pull/13871) and the other compiler-native.
The difficulty in accepting this PR is that the current design might differ from the final one we will eventually land on. We're not even clear what new attributes will be added to cc_binary, what their semantics are, and changing those later would cause problems for the Bazel community.
The great thing about code is that it can be changed. Bazel didn't pop out fully formed. Things have been changed, deprecated, reimplemented in different ways. toolchains comes to mind. How is this any different?
+1
Having an implementation in Java will require implementing a preprocessor. It may not be trivial.
Indeed, as import statements guarded by `#if __has_feature()` must be accurately evaluated. I am not sure how anything other than the compiler itself can be expected to answer such questions reliably without whitelisting versions (and configurations!) of toolchains that are supported by an external scanner.
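A contrived example of the kind of guarded import that forces a scanner to do real preprocessing; the module names here are hypothetical:

```cpp
// consumer.cpp -- whether this TU depends on `small_vector` or on
// `small_vector_asan` can only be decided by evaluating toolchain- and
// configuration-specific predicates, which is why a dependency scanner
// effectively needs a full preprocessor.
#if defined(__has_feature)
#  if __has_feature(address_sanitizer)
import small_vector_asan;   // hypothetical instrumented variant of the module
#  else
import small_vector;
#  endif
#else
import small_vector;
#endif

int main() { return 0; }
```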
It shouldn't be bad to use compiler native scanners.
Agreed. Batch scanning would be an improvement for one-shot builds, but incremental/development builds may be better off with per-TU scanning commands (subject to process launch costs…things we need measurements for to actually know).
Hey @PikachuHyA,
thanks for your patience. The replies sparked some more internal discussions, and there seems to be more benefit from your implementation to Google than was previously thought.
I don't have a green light yet, but we're considering accepting the change under an experimental flag that guards the additional attributes on C++ rules and the implementation. The flag would mean that both the implementation and the public interface are subject to change in the future.
There's not so much objection to clang-scan-deps, so I guess you can keep it in the initial implementation.
For the sake of improving the quality of the review, do you think you could break this XXL PR into several digestible pieces? I'll take care that each piece is reviewed in a couple of business days.
From the Bazel perspective, the tricky part might be the additional fields on C++ providers and C++ actions. Those might cause a regression; we need to benchmark it and figure out whether it's small enough to be justifiable or whether we should do something extra to remove it.
cc @sam-mccall, @jyknight, @ilya-biryukov, @pzembrod
TL;DR The Bazel team has decided to accept this PR, I'll be doing the reviews and I'll get some help from internal C++ experts, namely @trybka.
We identified the following risks:
- increase in maintenance cost for the Bazel team
- divergent implementations in Bazel and at Google, or no implementation at Google
- newly introduced complexity in CppCompileAction
We'd like to keep the maintenance costs at a minimum - the Bazel team will only do reviews on PRs after the initial community review. We won't address any issues that are reported, but we don't mind if the community addresses them.
We'd like to keep the change behind an experimental flag, to mitigate the risk of divergent implementations. While the change is under the experimental flag, there is no guarantee against incompatible changes. If Google does an internal implementation, we'd like it to match, to reduce maintenance costs.
We'd also like to make the change as "modular" as possible, in order to make it easier to remove in the future. That might happen in the unlikely scenario that Google doesn't implement support for C++20 modules and this remains the only complexity in CppCompileAction that can't be rewritten in Starlark. In case this scenario plays out, the C++20 modules support will probably need to be implemented in a different way.
That said, we do see the benefits of this change for both the community and Google. Thank you for your contribution.
For the sake of improving the quality of the review, do you think you could break this XXL PR into several digestible pieces? I'll take care that each piece is reviewed in a couple of business days.
hi @comius , I have split this XXL PR into 6 smaller commits. Initially, I hoped to divide it into independent small patches (see https://github.com/bazelbuild/bazel/pull/22425 , https://github.com/bazelbuild/bazel/pull/22427), but that proved to be infeasible due to dependencies between the patches (https://github.com/bazelbuild/bazel/pull/22429). Later, I plan to use stacked PRs to facilitate code review. However, stacked PRs require creating branches in the target repository first, and I'm not sure if I could be granted the necessary permissions. I've also created a demo of stacked PRs in my repository (https://github.com/PikachuHyA/bazel/pulls) as a backup.
Do you have any suggestions on code review process?
BTW, the Windows CI is broken; I will fix it later.
@mathstuf Thanks for your comments.
I will make the related code changes as soon as possible.
Later, I plan to use stacked PRs to facilitate code review. However, stacked PRs require creating branches in the target repository first,
Nothing should require that; tools doing so should…work on that. It's kind of crazy to make tools unavailable for external contributors to projects. I believe https://stacked-git.github.io/ does most of its work locally, so that at least you're not tied to any GitHub limitations.
hi @peakschris (https://github.com/bazelbuild/bazel/pull/22425#issuecomment-2184906317),
thanks very much for your interest in this PR.
what is the status of this? I mocked up modules in a non-bazel environment and it looks like it could substantially improve our build times, so I'm eagerly anticipating this :-)
I have completed the one-phase compilation support for GCC, Clang, and MSVC, as well as the two-phase compilation support for Clang. I'm currently breaking this large PR into 5 smaller patches.
- https://github.com/bazelbuild/bazel/pull/22425
- https://github.com/bazelbuild/bazel/pull/22427
- https://github.com/bazelbuild/bazel/pull/22429
- https://github.com/bazelbuild/bazel/pull/22553
- https://github.com/bazelbuild/bazel/pull/22555
The first patch has already been merged.
The remaining patches need some time for review.
I am looking forward to using C++20 modules in our bazel builds :-)
Thanks. Your feedback and anticipation are really valuable to us.
I am hugely excited to use this for my own project. Congrats on your hard work and perseverance.
Thanks @PikachuHyA, I asked the question here and then moved it to the closed PR that I found. It looks like this is an absolutely mammoth task, excellent effort!
Hi PikachuHyA, congrats on getting the first 3 PRs merged, and thanks to the reviewers :-) It appears that we need to wait for PR4 to be rebased and merged before we can experiment with this, is that correct?
It appears that we need to wait for PR4 to be rebased and merged before we can experiment with this, is that correct?
Yes. Once https://github.com/bazelbuild/bazel/pull/22553 is merged, Bazel will basically support C++20 Modules.