introduced a cache for `followAllReferences()` calls
This essentially eliminates any meaningful performance impact of followAllReferences().
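For illustration, here is a minimal self-contained sketch of the idea; all names (Token, ReferenceToken, followAllReferencesImpl, followAllReferencesCached) are placeholders rather than the actual Cppcheck types or the code in this PR. The point is simply that the expensive reference walk is computed once per token and reused on later isAliasOf() lookups.

```cpp
// Hypothetical sketch only - type names and signatures are assumptions,
// not the implementation introduced by this PR.
#include <map>
#include <vector>

struct Token {};
struct ReferenceToken { const Token* token = nullptr; };

// Stand-in for the expensive recursive reference walk.
static std::vector<ReferenceToken> followAllReferencesImpl(const Token* tok)
{
    return { ReferenceToken{tok} };
}

// Compute the result once per token; serve repeated lookups from the cache.
static const std::vector<ReferenceToken>& followAllReferencesCached(
    const Token* tok,
    std::map<const Token*, std::vector<ReferenceToken>>& cache)
{
    auto it = cache.find(tok);
    if (it == cache.end())
        it = cache.emplace(tok, followAllReferencesImpl(tok)).first;
    return it->second;
}

int main()
{
    std::map<const Token*, std::vector<ReferenceToken>> cache;
    Token a;
    followAllReferencesCached(&a, cache); // computed and stored
    followAllReferencesCached(&a, cache); // served from the cache
    return 0;
}
```

A per-token cache like this is only safe as long as the cached data cannot go stale, which is essentially the concern discussed further down about the stage at which the results should be generated.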
-D__GNUC__ --check-level=exhaustive ../lib/utils.cpp
Clang 19 652,987,030 -> ~624,476,510~ 618,089,977
followAllReferences() calls from isAliasOf() - 350,100 -> 1,581
The example from https://trac.cppcheck.net/ticket/10765#comment:4:
Clang 19 3,056,382,003 -> ~2,838,708,731~ 2,815,165,117
followAllReferences() calls from isAliasOf() - 2,592,565 -> 641
Something really weird is going on here in the UBSAN job:
Check time: cli/threadexecutor.cpp: 0.53017s
Check time: cli/processexecutor.cpp: 1.41327s
Check time: lib/addoninfo.cpp: 0.172107s
Check time: lib/analyzerinfo.cpp: 0.636273s
The timing information for cli/cmdlineparser.cpp is missing ...
Before
Check time: cli/cmdlineparser.cpp: 1091.73s
[...]
Check time: lib/checkio.cpp: 219.069s
[...]
Check time: lib/symboldatabase.cpp: 191.785s
[...]
Check time: lib/tokenize.cpp: 290.026s
After
Check time: cli/cmdlineparser.cpp: 760.299s
[...]
Check time: lib/checkio.cpp: 168.103s
[...]
Check time: lib/symboldatabase.cpp: 145.913s
[...]
Check time: lib/tokenize.cpp: 236.561s
Could we merge this without the debug integration for now (I will file a ticket about doing that)? I would like to clean up and test the existing debug output first (see #7258 - as usual stalled) before I add new data. It would also greatly speed up the CI and give a baseline to compare against in case running it as a pass has a significant performance impact.
This should be done in the symboldatabase either before or during exprids.
Thanks for the feedback. Some of the shortcomings were intentional since that seemed to make things simpler and I wanted to get the performance improvement in. But I remembered the changes as being simpler than they are, so I need a bit of time to get back into it.
@pfultz2 How should the information be presented in the debug output? I also assume it should be a separate section.
This should be done in the symboldatabase either before or during exprids.
In the limited cases I tested, that produced considerably more calls to followAllReferences() but did not affect the overall performance much. This still needs to be tested with the selfcheck.
Given the improvement this change provides and how it affects the CI (we are actually getting even slower, and not having this applied makes it harder to pinpoint the other hot spots), I would really prefer if we could just merge the very first version and do the adjustments as follow-ups.
Having this done in incremental commits would also make it possible to bisect differences in behavior/performance based on the stage at which this cache is generated. Especially after I tried to add a different cache in #7573, which relies on the results not being cached beyond a certain point (which might also cause issues here if this is stored too early - but that might not be reflected in our tests).
I dropped the changes to generate these beforehand as they were too unspecific. The performance gains are just too big to delay this any longer. I will file a ticket and publish a PR with the WIP code later.
And I would really prefer to land this in separate commits anyway, as it will make it easier to distinguish possible issues caused by adding the cache from those caused by precomputing it.
I am also not confident the data would actually be correct when precomputed, after working on #7573 (I am trying to improve the guardrails so we can detect that).
I also confirmed that adding the cache shows no differences with #7800.
The performance gains are just too big to delay this any longer.
It looks really nice!
Did you measure on Windows also? Do you have access to a Windows computer? I would expect that you get a performance gain on Windows as well, just checking.
As far as I know we have had a focus on Linux and I fear it might have hurt the Windows experience. I am talking with a customer; they say it takes 40 minutes for them to scan their project on Linux and 17-18 hours on Windows, using similar hardware. Some more info:
- it is ValueFlow that takes most of the time.
- single-threaded analysis is not that bad, as I understand it. It's the multithreaded analysis that is much slower. However, it is ValueFlow that is very slow, and as far as I know it does not rely on disk I/O nor mutexes (except Settings::terminated()).
So it might be an idea to start measuring performance with both -j1 and -jX.
I can also reproduce that Cppcheck is much slower on Windows than on Linux.
As far as I know we have had a focus on Linux and I fear it might have hurt the Windows experience.
Official Windows release binaries are still built without Boost - see https://trac.cppcheck.net/ticket/13823. All pieces are in place and the only thing left is a step which downloads it and only extracts it partially (I will have a look later). Are the Linux binaries built with Boost?
- single-threaded analysis is not that bad, as I understand it. It's the multithreaded analysis that is much slower. However, it is ValueFlow that is very slow, and as far as I know it does not rely on disk I/O nor mutexes (except Settings::terminated()).
That should only be the case if you have a lot of very small files which generate a lot of output (i.e. debug warnings enabled) when using the process executor (caused by the IPC). Multi-threaded being slower than single-threaded with a long-running analysis seems impossible to me. The worst case should be the longest-running file in a multi-threaded analysis.
And since Windows has no process executor that does not even apply.
From my own experience I would assume that there is an issue in the configuration leading to Windows having to process more code from includes than Linux. Enabling information should highlight any missing includes.
It could also be the case that all the system includes are provided and that the Windows versions of them (or the Windows SDK includes which are not used on Linux) are much heavier. Adapting the simplecpp selfcheck for Windows might be a first step to get a baseline difference of the cost of the system includes.
Are the Linux binaries built with Boost?
They used Cppcheck Premium binaries and these do not use Boost.
From my own experience I would assume that there is an issue in the configuration leading to Windows having to process more code from includes than Linux.
That is very reasonable but I do not think that is the case. We also see a significant multithreading slowdown when checking the Cppcheck source code with a command like this:
cd path/to/cppcheck
cppcheck -D__CPPCHECK__ -D__GNUC__ --showtime=file-total -jX lib
cppcheck -D__CPPCHECK__ -D__GNUC__ --showtime=file-total -jX lib
For information: when I run this command on my machine I get these timings:
Check time for lib/tokenize.cpp:

| OS | -j1 | -j16 | Multithreading overhead |
|---|---|---|---|
| Linux | 5.02091s | 6.39582s | +27.4% |
| Windows | 45.634s | 79.749s | +74.8% |
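For reference, the overhead percentages appear to be computed as (t_j16 - t_j1) / t_j1, e.g. (6.39582 - 5.02091) / 5.02091 ≈ 27.4% for Linux and (79.749 - 45.634) / 45.634 ≈ 74.8% for Windows.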
So analysis in Windows is more than 10x slower than analysis in Linux on my machine if -j16 is used.
The open source Cppcheck Windows release binary is a bit faster, but not by much.
As far as I can see, switching from MSVC to Clang makes it even slower. :-(
Please also run Linux with --executor=thread (quite a bit of data has been added to the IPC recently) and Windows with Boost.
I did a short test on my system with a smaller file and did not see any difference between using the thread executor or not.
The process executor code also has some other overhead unrelated to the IPC which will go away when I am done with reworking the executors. That is getting awfully close but needs the remaining suppression issues resolved first, since those require changes to how we set up certain things for the analysis. If no new blockers show up there I hope it will finally be done by the end of the year (the main blocker is the time it takes to get stuff reviewed, though).
That --executor=thread output is really interesting.
I wrote this script:
/opt/cppcheckpremium/cppcheck --executor=thread -D__CPPCHECK__ -D__GNUC__ --showtime=file-total -j1 lib > t1
/opt/cppcheckpremium/cppcheck --executor=thread -D__CPPCHECK__ -D__GNUC__ --showtime=file-total -j16 lib > t16
/opt/cppcheckpremium/cppcheck -D__CPPCHECK__ -D__GNUC__ --showtime=file-total -j1 lib > p1
/opt/cppcheckpremium/cppcheck -D__CPPCHECK__ -D__GNUC__ --showtime=file-total -j16 lib > p16
grep "Check time: lib/tokenize.cpp:" t1 t16 p1 p16
And got this output:
t1:Check time: lib/tokenize.cpp: 5.17966s
t16:Check time: lib/tokenize.cpp: 49.8277s
p1:Check time: lib/tokenize.cpp: 4.84888s
p16:Check time: lib/tokenize.cpp: 7.88717s
Does this indicate that if we implement a process executor on Windows we might get a 10x performance boost?
Does this indicate that if we implement a process executor on Windows we might get a 10x performance boost?
No - it indicates there is a concurrency issue when using std::thread. I have only ever profiled for instructions, which does not account for any wait times. I will take a look later.
It should be the other way around (albeit not as drastic). That is also apparent from the unit tests where tests using the process executor are much slower than tests using the thread one.
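As an illustration of why instruction-count profiling can hide this: wall-clock time includes waiting (locks, scheduling) while CPU time does not, so comparing the two around the hot section exposes contention. A minimal, self-contained sketch of that comparison (generic code, not Cppcheck's):

```cpp
// Compare CPU time vs. wall-clock time around a section of work.
// Wall time much larger than CPU time means the program spent time waiting
// (lock contention, scheduling) rather than executing instructions.
#include <chrono>
#include <cstdio>
#include <ctime>

int main()
{
    const auto wallStart = std::chrono::steady_clock::now();
    const std::clock_t cpuStart = std::clock();

    volatile unsigned long long sum = 0;
    for (unsigned long long i = 0; i < 100000000ULL; ++i)
        sum += i; // placeholder for the actual per-file analysis work

    const double cpuSeconds =
        static_cast<double>(std::clock() - cpuStart) / CLOCKS_PER_SEC;
    const double wallSeconds = std::chrono::duration<double>(
        std::chrono::steady_clock::now() - wallStart).count();
    std::printf("cpu: %.3fs  wall: %.3fs\n", cpuSeconds, wallSeconds);
    return 0;
}
```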
But none of this has to do with the changes in this PR which will speed up things no matter the platform or threading.
Do you actually have 16 threads available on your system? If you specify more threads than available, the analysis will obviously be much slower since you are overloading the system. As a rule of thumb you should only utilize about 80% of the available threads if you are actively using the system.
Do you actually have 16 threads available on your system? If you specify more threads than available, the analysis will obviously be much slower since you are overloading the system. As a rule of thumb you should only utilize about 80% of the available threads if you are actively using the system.
Please disregard. In that case it should also happen when using the processes.
I am not able to reproduce this locally.
I also thought it might have been caused by your CPU potentially having P- and E-cores, but it not occurring with processes rules this out as well.
I don't know what happened but right now the times look better. Here are the times measured with `time`:
t1:real 1m5,429s
t8:real 0m15,339s
t16:real 0m13,304s
p1:real 1m7,956s
p8:real 0m16,380s
p16:real 0m13,441s
So the p16 does not stand out this time.
So the p16 does not stand out this time.
So maybe it is E-cores? You should possibly look at the runtimes of all files.
During another test I saw a difference between using -j1 and -j2. The CTU analysis is run - even if there is only a single file. I wonder if that is either unnecessary (with -j2) or might lead to false negatives (with -j1).