llvm-mingw
Use Microsoft's headers and libs? (or: why is this toolchain so much faster than upstream LLVM?)
Hi there!
After a few months of 20-second compiles on Windows that would take 1 second on Linux, I stumbled into this and immediately noticed that it doesn't exhibit the horrific half-second delay on every invocation that LLVM's official Windows Clang binaries do! I have absolutely no idea why there would be such a difference, nor any real desire to find out; if I could just drop this in and live happily ever after that would be wonderful.
Currently, the main Windows project I'm working on is written in C and using Microsoft's headers and libraries via the Visual Studio toolchain and Windows 10 SDK, which the standard Windows LLVM install picks up and uses. I did try building against MinGW's headers but the small differences were enough to cause a few errors and I'd also rather keep things as platform-native as I possibly can.
My handwavey understanding is that any random Clang configuration should be able to set the appropriate `--target` (something like `i686-pc-windows-msvc` for 32-bit, I think?) and maybe an `-fms-extensions` or two in order to cross-compile an identical output binary if all the Microsoft stuff is in place. Unfortunately, my naive attempt to do this has failed as the compiler seems to unconditionally use its own stuff, so I thought rather than grappling blindly for hours like I usually do, I should probably just ask for help. :^)
Alternatively, on the off-chance you happen to have any insight as to why the official LLVM build is so much slower, that might still be of value. I realise its binaries are way bigger, but surely mapping them in doesn't take that much longer, right? I'm rather baffled by the whole thing but this project here has given me some hope that my compiling experience could be a little more pleasant. 😃
Thanks for your time and any insights you might have!
Overall, yes, any Clang binary should in principle be able to cross-compile for any target; if you call the clang binaries I ship with `--target=x86_64-windows-msvc`, it should behave almost exactly like a default-MSVC-targeting Clang executable. (`-fms-extensions` isn't needed; that's implied by picking an MSVC target.) There are two gotchas involved, though. First, if your code expects to call the clang-cl interface, you'd need to add `--driver-mode=cl` too, to make it interpret arguments in the right form. Second, the Clang executables I distribute for Windows are hardcoded to default to `--rtlib=compiler-rt` as a builtin, and for the MSVC target there doesn't seem to be any way to reverse this hardcoded default without bailing out. But if you call it that way, it should indeed find headers the MSVC way and not use its own bundled sysroots. I just tried it, and it didn't use the bundled headers.
Then to the main problem: I haven't heard of any such issue about the official binaries being surprisingly slow. My initial hunch is that you're running them in an environment where the `INCLUDE` and `LIB` env variables aren't set, so on each execution Clang has to run some amount of heuristics to detect the installation of Visual Studio and pick out working defaults from there. If you run it from within a Visual Studio Developer Prompt, where `INCLUDE` and `LIB` are already set up properly, does it run faster then?
(When I tried executing my mingw-targeting clang executable with `--target=x86_64-windows-msvc`, it didn't seem to detect the system Visual Studio installation in the same way as the official binaries, though - not entirely sure what differs there, but when I ran it in an environment with `INCLUDE` and `LIB` set, it did find the MSVC headers.)
Thanks for replying so quickly! Let's see...
> Overall, yes, any Clang binary should in principle be able to cross compile for any target; if you call the clang binaries I ship with `--target=x86_64-windows-msvc` it should almost behave just like if you'd execute a default-MSVC-targeting Clang executable.
Oh. Yeah, it seems to work now. I already don't remember what brain fart I had that would've made it not work, but that's progress anyway.
> (`-fms-extensions` isn't needed, that's implied by picking an MSVC target.)
Right, makes sense.
> There's two gotchas involved though; if your code expects to call the clang-cl interface, you'd need to add `--driver-mode=cl` too, to make it interpret arguments in the right form.
I don't currently use the cl interface but that's good to know, thanks.
> And secondly, the Clang executables I distribute for Windows are hardcoded to default to `--rtlib=compiler-rt` builtin, and for the MSVC target, there doesn't seem to be any way to reverse this hardcoded default without bailing out.
That... might be a problem. Apparently using an `-msvc` target makes Clang start looking for libraries without a `lib` prefix. I get these errors trying to link anything:
```
lld-link: warning: ignoring unknown argument '--as-needed'
lld-link: warning: ignoring unknown argument '-l:libunwind.so'
lld-link: warning: ignoring unknown argument '--no-as-needed'
lld-link: error: could not open 'C:/llvm-mingw/lib/clang/14.0.0/lib/windows/clang_rt.builtins-x86_64.lib': no such file or directory
```
I could probably fix that just by copying the `.lib`s, but then I'd also probably like to use the Microsoft ones so as to match (i.e. make builds reproducible with either version of Clang). I wonder, would it make sense to hardcode `rtlib` only if it's a MinGW/`gnu` target? Would that be relatively easy to do?
Oh, and the warnings might also be a problem: I just realised the frontend might be calling `lld-link` as though it were `ld.lld`, which... might be a bug in LLVM? No idea.
> Then to the main problem; I haven't heard of any such issue about the official binaries being surprisingly slow.
Interesting. I've experienced this for the past several versions of LLVM on two different machines, and some folk I know who've helped out have had the same thing. If this turns out to be an issue that only affects some people and not others, it would be amusing, I guess. I did suspect Windows Defender involvement, but whitelisting and disabling it altogether didn't help.
> My initial hunch is that you're running them in an environment where the `INCLUDE` and `LIB` env variables aren't set, so on each execution, Clang has to run some amount of heuristics to detect the installation of Visual Studio and pick out working defaults from there. If you run it from within a Visual Studio Developer Prompt, where `INCLUDE` and `LIB` already are set up properly, does it run faster then?
Right, I think I thought about that at some point, but I checked again and no, it makes no difference. Even just running `--version` or something takes about 300-500ms, and any value of `--target` also makes no difference. The actual executable is just slow to start, and the same goes for other executables like `lld-link` as well (I suppose this means the compiler frontend incurs a small delay, and then every subprocess it creates for compiling and linking does as well 🥲).
> (When I tried executing my mingw-targeting clang executable with `--target=x86_64-windows-msvc` it didn't seem to detect the system Visual Studio installation in the same way as the official binaries though - not entirely sure what differs there, but when I ran it in an environment with `INCLUDE` and `LIB` set, it did find the MSVC headers.)
Right enough, maybe that's where I went wrong before. Normally I don't go through the Visual Studio Command Prompt because Clang doesn't typically need it (and it struck me as kind of a dumb thing to have to do in general).
Now that I think about it, I guess cross-compiling from Clang on another platform would only try to use `INCLUDE` and `LIB`, so perhaps there really is some slow search code getting compiled in only on native Windows hosts. But I don't see how that could take half a second, and why it'd run unconditionally even with `INCLUDE` and `LIB` already set is beyond me...
> I could probably fix that just by copying the `.lib`s but then I'd also probably like to use the Microsoft ones so as to match (i.e. make builds reproducible with either version of Clang). I wonder, would it make sense to hardcode `rtlib` only if it's a MinGW/`gnu` target? Would that be relatively easy to do?
Unfortunately, the way this is done right now - by setting `-DCLANG_DEFAULT_RTLIB=compiler-rt` when doing the cmake configuration - it applies to any cross target. Allowing different hardcoded defaults for different cross targets would indeed be a welcome feature (I think this has been mentioned in upstream Clang too), but it's not currently possible.
When I cross compile with Clang, I don't hardcode the defaults (so that the Clang binary is usable as a regular e.g. Linux compiler too), but use a wrapper script that sets the defaults by passing in extra `-target`, `-rtlib` and `-fuse-ld=lld` options. For the Windows builds of my toolchain, I try to set them as builtin defaults too, so that the compiler works even if the wrapper is bypassed.
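A minimal sketch of such a wrapper, in a POSIX shell, with the flag set taken from the description above (the `mingw_clang` name and `CLANG` variable are made-up illustrations, not the actual wrapper shipped with the toolchain):

```shell
# Hypothetical wrapper: bakes cross-compile defaults into the command line so
# the underlying clang binary stays generic. CLANG points at any clang binary.
CLANG=${CLANG:-clang}

mingw_clang() {
    "$CLANG" --target=x86_64-w64-mingw32 --rtlib=compiler-rt -fuse-ld=lld "$@"
}

# Dry run: with CLANG=echo the wrapper just prints the command it would run.
CLANG=echo
mingw_clang -c foo.c
```

With a real clang in `CLANG`, calling `mingw_clang` behaves like a dedicated mingw-targeting compiler while the binary itself remains usable for other targets.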
> Oh, and the warnings might also be a problem: I just realised the frontend might be calling `lld-link` as though it were `ld.lld`, which... might be a bug in LLVM? No idea.
Those warnings stem from the fact that I've configured Clang with `-DCLANG_DEFAULT_UNWINDLIB=libunwind`, and I think the logic for passing unwind lib flags to the linker isn't tested for MSVC targets, because one usually doesn't use a separate lib for it in those configs. Adding `--unwindlib=none` fixes this issue cleanly.
Unfortunately, one can't fix the hardcoded `--rtlib` in the same way, due to https://github.com/llvm/llvm-project/blob/fceea4e11028f4bfbafbd6893ddeb319420107d9/clang/lib/Driver/ToolChains/CommonArgs.cpp#L1528-L1537. For internal reasons, the `rtlib` option defaults to `RLT_Libgcc`, and this is handled as a no-op on MSVC targets, as long as no literal `--rtlib` parameter is passed. But when the default is `RLT_CompilerRT`, this code doesn't allow setting it back to the no-op `RLT_Libgcc` without erroring out...
I experimented with a patch like this:
```diff
diff --git a/clang/lib/Driver/ToolChains/CommonArgs.cpp b/clang/lib/Driver/ToolChains/CommonArgs.cpp
index b9efb6b77f07..e1557fcef13a 100644
--- a/clang/lib/Driver/ToolChains/CommonArgs.cpp
+++ b/clang/lib/Driver/ToolChains/CommonArgs.cpp
@@ -1518,9 +1518,10 @@ void tools::AddRunTimeLibs(const ToolChain &TC, const Driver &D,
   if (TC.getTriple().isKnownWindowsMSVCEnvironment()) {
     // Issue error diagnostic if libgcc is explicitly specified
     // through command line as --rtlib option argument.
-    if (Args.hasArg(options::OPT_rtlib_EQ)) {
+    Arg *A = Args.getLastArg(options::OPT_rtlib_EQ);
+    if (A && A->getValue() != StringRef("platform")) {
       TC.getDriver().Diag(diag::err_drv_unsupported_rtlib_for_platform)
-          << Args.getLastArg(options::OPT_rtlib_EQ)->getValue() << "MSVC";
+          << A->getValue() << "MSVC";
     }
   } else
     AddLibgcc(TC, D, CmdArgs, Args);
```
This allows fixing the issue with a `--rtlib=platform`. I'll consider trying to discuss this upstream...
> Even just running `--version` or something takes about 300-500ms, and any value of `--target` also makes no difference.
Ok, so if the issue is reproducible with a plain `--version`, then we can rule out all such target-specific behaviours, and it indeed seems like the executable is just slower.
With the official release of LLVM 13.0.1, run on Windows Server 2019, I run `time clang-cl --version` in Git Bash and get runtimes of around `0m0.028s`.
On a slow and underpowered Windows 10 desktop virtual machine, I do see runtimes that don't go under `0m0.188s`, but I get pretty much identical numbers between the official release of LLVM 14.0.0 and my release of the same.
> Unfortunately, the way this is done right now, by setting `-DCLANG_DEFAULT_RTLIB=compiler-rt` when doing the cmake configuration, it applies to any cross target.
Ahhh, a little unfortunate then. I hadn't looked into exactly how you were doing the builds, so I didn't know if it was a configure option or an actual patch; if it's a configure option then, yeah, you'd want that in LLVM itself.
> Adding `--unwindlib=none` fixes this issue cleanly.
That makes sense, and seems like an easy enough solution. Thanks!
> This allows fixing the issue with a `--rtlib=platform`. I'll consider trying to discuss this upstream...
That would be absolutely fantastic.
> On a slow and underpowered Windows 10 desktop virtual machine, I do see runtimes that don't go under `0m0.188s`, but I do get pretty much identical numbers between the official release of LLVM 14.0.0 and my release of the same.
That's interesting. I just checked the numbers, and upstream gives me about 0.2s while your build is about 0.03s. I'm on Windows 10 Home, so I do wonder if somehow this is Windows version-specific, which would make me curious again as to whether this is just a Windows issue. Like I mentioned, I have disabled Defender, but I guess who knows which of the other million possible CreateProcess overheads might randomly decide to apply. Still very, very odd that it has always affected LLVM's builds and doesn't seem to affect these builds, though.
But I guess I'm actually pretty close to being able to use this toolchain if the rtlib thing is fixed, either with a patch here or upstream (the former probably being quicker but the latter probably being a good idea in the long run). I'd still love to know what's accounting for the startup speed differences but not as much as I'd love to get on with programming :^)
By the way thanks for the help so far, I really appreciate it.
> Hadn't looked into exactly how you were doing the builds so didn't know if it was a configure option or an actual patch, guess if it's a configure option then, yeah, you'd want that in LLVM itself.
Yeah - I generally have a rather hard policy of not carrying local patches in my distribution - the only customization is in wrapper scripts or in the build setup itself.
> That's interesting. I just checked the numbers and upstream gives me about 0.2s while your build is about 0.03. I'm on Windows 10 Home, so I do wonder if somehow this is Windows version-specific.
If you try an older release, e.g. https://github.com/mstorsjo/llvm-mingw/releases/tag/20210423, is that equally slow as the official releases? Originally I also had big monolithic executables, but since the release following that one I have enabled `-DLLVM_LINK_LLVM_DYLIB=ON` in the build, which keeps most of the code in shared `libLLVM-*.dll` and `libclang-cpp.dll` and shrinks the frontend executables significantly. Unfortunately that build configuration only works when LLVM is built in a mingw setting; it doesn't work when built with MSVC or clang-cl.
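For reference, the switch in question is just a CMake cache variable; a hypothetical configure invocation might look like this (every other flag here is elided or made up - a real llvm-mingw build sets many more options):

```shell
# Sketch of an LLVM configure step; only -DLLVM_LINK_LLVM_DYLIB=ON is the
# point here, the remaining options are illustrative placeholders.
cmake ../llvm \
    -DCMAKE_BUILD_TYPE=Release \
    -DLLVM_ENABLE_PROJECTS="clang;lld" \
    -DLLVM_LINK_LLVM_DYLIB=ON
```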
> Yeah - I generally have a rather hard policy of not carrying local patches in my distribution - the only customization is in wrapper scripts or in the build setup itself.
That's fairly reasonable, I wouldn't want to maintain a bunch of custom LLVM patches either...
> If you try an older release, e.g., https://github.com/mstorsjo/llvm-mingw/releases/tag/20210423, is that equally slow as the official releases?
... Oh. Wow, yeah. That's slow too.
So. I guess there's an issue in specific editions of Windows, specifically when loading large static executables, and it doesn't seem to apply to large DLLs. Ugh, Windows.
I wonder if Defender is maybe still just doing something even when it's ostensibly turned off. I guess Windows Server won't have Defender running, will it? In that case I should maybe figure out a way of completely killing off the service and seeing what that does.
> Unfortunately that build configuration only works when LLVM is built in a mingw setting, it doesn't work when built with MSVC or clang-cl.
Is that just because Clang's CMake scripts don't have a case to handle that particular configuration? Or is there some wacky toolchain limitation that makes it impossible?
I just found this https://lists.llvm.org/pipermail/llvm-dev/2017-June/113925.html - guess that somewhat answers the above question.
Update on the Defender theory: I figured out how to disable it (you have to go through safe mode, seriously?) and concluded that it makes no difference. Got all the SmartScreen stuff off too. So I'm pretty sure it's unrelated to that, which I guess is a good thing.
Edit: another thing I noticed is that if you just double-click one of these executables, there's a delay before the console pops up, not after. They're definitely console-subsystem executables, so that implies something is happening before the program even gets started, i.e. still something Windows-y. The question is what the heck that might be...
> Is that just because Clang's CMake scripts don't have a case to handle that particular configuration? Or is there some wacky toolchain limitation that makes it impossible?
Yeah - in MSVC build configurations, all symbols to be exported have to be annotated with dllexport attributes, or have to be listed in a def file. In mingw build configurations, if there are no explicit exports (either dllexport attributes or a def file), it exports all symbols, which turns out to work fine for the dylib config. (Technically I guess it could be possible to make that work with a more elaborate script in cmake too - there's one for setting up LLVM-C.dll, but not for the full C++ API.)
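The annotation difference can be sketched in a few lines of C (the `API` macro name is a made-up illustration): an MSVC-style build exports a DLL symbol only if it carries the explicit marker or appears in a .def file, whereas a mingw build with no explicit exports anywhere falls back to exporting every symbol.

```c
/* Sketch of the export-annotation difference (the API macro is our own name).
   MSVC-style builds need an explicit __declspec(dllexport) on each exported
   symbol; mingw builds export everything when no marker appears anywhere. */
#if defined(_WIN32)
#define API __declspec(dllexport)
#else
#define API /* non-Windows host: nothing to annotate for this demo */
#endif

/* In a DLL build, only annotated symbols like this leave an MSVC-built DLL. */
API int answer(void) { return 42; }
```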
FWIW I went ahead and tried to make a stripped-down, MSVC-targeting build of the toolchain; have a go at https://martin.st/temp/llvm-mingw-msvc-x86_64.zip. I built this with these tweaks: https://github.com/mstorsjo/llvm-mingw/commits/msvc-toolchain
That works fine in general, but it does require you to run in an environment with `INCLUDE` and `LIB` already set up. The codepath for detecting a modern MSVC installation uses some COM APIs, which don't work when built in mingw mode right now. (It's fixable but requires a bunch of patching all around.) https://github.com/llvm/llvm-project/blob/release/14.x/clang/lib/Driver/ToolChains/MSVC.cpp#L43-L45
> FWIW I went ahead and tried to make a stripped down, MSVC-targeting build of the toolchain, have a go at https://martin.st/temp/llvm-mingw-msvc-x86_64.zip. I built this with these tweaks: https://github.com/mstorsjo/llvm-mingw/commits/msvc-toolchain
This is awesome. It's so much faster, and it seems to produce bit-for-bit identical output. I'm so happy! Thank you.
> That works fine in general, but it does require you to run in an environment with `INCLUDE` and `LIB` already set up.
This creates one tiny roadblock. It hadn't occurred to me, but my project has some hostcc'd tools that run to generate some code and stuff, and then the actual end product is 32-bit (it's a plugin for other 32-bit things). So if I just open a 32-bit developer command prompt, I get a bunch of undefined symbols building the host things, unless of course I build those as 32-bit as well. This hardly really matters, and actually, if I really wanted to, I could just export the `LIB` and `INCLUDE` paths from both environments into some little wrapper scripts, or something.
I think for now I'll quite happily just build everything 32-bit, given that it works and it's easy. In the longer run, if LLVM could get the COM stuff working one day that would be really really cool. In fact, I wouldn't mind helping with that somehow, although I'm not really sure what use I'd be. I'll happily test any builds thrown my way, for what that's worth :^)
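The "export the paths into wrapper scripts" idea could be sketched like this in Git Bash / sh. The file name and the placeholder paths below are made up; in practice the snapshot step would run once inside each Developer Prompt, using whatever `INCLUDE`/`LIB` values that prompt actually sets:

```shell
# Pretend we're inside the 64-bit Developer Prompt; these placeholder values
# stand in for whatever INCLUDE/LIB the real prompt sets.
INCLUDE="C:/fake/msvc/include"
LIB="C:/fake/msvc/lib/x64"

# Snapshot the two variables into a sourceable file, once per prompt/arch:
printf "export INCLUDE='%s' LIB='%s'\n" "$INCLUDE" "$LIB" > msvc-env-x64.sh

# Later, from any plain shell, restore the 64-bit env before building host tools:
unset INCLUDE LIB
. ./msvc-env-x64.sh
echo "$LIB"
```

A matching `msvc-env-x86.sh` captured from the 32-bit prompt would then let both compilers coexist in one shell session.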
> So, if I just open a 32-bit developer command prompt I get a bunch of undefined symbols building the host things, unless of course I build those as 32-bit as well.
Right, so previously - within the same command prompt - you'd build for both 32- and 64-bit without those env vars set, and rely on Clang figuring out the right paths for both arch variants based on what you pass as `--target`?
> Right, so previously - within the same command prompt, you build for both 32 and 64 bit, and don't have those env vars set, and then rely on Clang figuring out the right paths for both arch variants, based on what you pass as `--target`?
Yeah, exactly. As is fairly standard in Unix-like environments, the ideal situation would be to have a command for a host compiler, and a command for a target compiler, and have those just work.
The Microsoft Way is of course to make things require additional steps. :)
@mikesmiffy128 You could try running Process Monitor from Microsoft's Sysinternals tools to find out what's going on. You have to set up some filters to get rid of the noise; then you can see exactly where the time is spent. It has helped me track down these kinds of performance problems on Windows in the past.
> That works fine in general, but it does require you to run in an environment with `INCLUDE` and `LIB` already set up. The codepath for detecting a modern MSVC installation uses some COM APIs, which don't work when built in mingw mode right now. (It's fixable but requires a bunch of patching all around.) https://github.com/llvm/llvm-project/blob/release/14.x/clang/lib/Driver/ToolChains/MSVC.cpp#L43-L45
Just need to define `uuid` to make the COM API work: https://github.com/SquallATF/llvm-project/commit/f6046a56ce4ec318410de8201b3008b1514dd441
> Those warnings stem from the fact that I've configured Clang with `-DCLANG_DEFAULT_UNWINDLIB=libunwind` and I think the logic for passing unwind lib flags to the linker isn't tested for MSVC targets, because one usually don't use a separate lib for it in those configs. Adding `--unwindlib=none` fixes this issue cleanly.
You can override the `GetUnwindLibType` method to return `ToolChain::UNW_None`: https://github.com/SquallATF/llvm-project/commit/771fc8eec8058130f5add1dcd9417a0f40376284
Finally, can we make mingw use `libc++`, `compiler-rt` and `libunwind` by default through code instead of cmake configuration, so as to prevent conflicts when using the MSVC and MinGW drivers at the same time? At present, it seems that few people use the combination of Clang and libstdc++ on the mingw platform? https://github.com/SquallATF/llvm-project/commit/f2f5479847a658152e5f328e5380aa211d386dce
> The codepath for detecting a modern MSVC installation uses some COM APIs, which don't work when built in mingw mode right now.

> Just need to define uuid to make com api work SquallATF/llvm-project@f6046a5
Yes, I know; I have a commit like that in my local tree.
However, this didn't work as such directly out of the box; I had to fix mingw-w64 to make the Clang/LLVM COM code work there too: https://github.com/mingw-w64/mingw-w64/commit/f923f041c39abecc0690f5fe68f43b5a8a7c33cc and https://github.com/mingw-w64/mingw-w64/commit/b501632c9a5ee277502fa9e2ce78287fad3c5289
So I chose not to upstream that change to use the MSVC setupapi in mingw builds yet, since once that's done, LLVM can no longer be built with any of the existing stable releases of mingw-w64, but requires the latest version from git.
> Can overload the `GetUnwindLibType` method to return `ToolChain::UNW_None` SquallATF/llvm-project@771fc8e
Sure, that looks reasonable. Don't you need to override the rtlib part too, though?
I made a patch like this - https://github.com/mstorsjo/llvm-project/commit/clang-rtlib-platform - which makes `-rtlib=platform` work as intended to restore the default here. But I guess your patch would make more sense.
> Finally, can we make mingw use `libc++` `compiler-rt` and `unwind` by default through coding instead of cmake configuration, so as to prevent conflicts when using MSVC and Mingw drivers at the same time? SquallATF/llvm-project@f2f5479
I wouldn't go change that default just yet; Clang is used in both environments, and that makes it much harder to use it with an existing GCC setup.
If using config files instead of cmake defaults, like suggested in #253, this aspect would work better. (I have an old branch that implements what you suggest in #253 but I haven't tried rebasing/refreshing it lately.)