Try building with `-O3` by default ?
While testing local PGO builds, I noticed [1] that the optimisation flags enabled by -fprofile-use have a large overlap with those enabled by -O3:
enabled only by -O3
-fsplit-paths
-ftree-partial-pre
enabled either by -O3 or -fprofile-use
-fgcse-after-reload
-fipa-cp-clone
-floop-interchange
-floop-unroll-and-jam
-fpeel-loops
-fpredictive-commoning
-fsplit-loops
-ftree-loop-distribution
-ftree-loop-distribute-patterns
-funswitch-loops
-fvect-cost-model=dynamic
-fversion-loops-for-strides
enabled only by -fprofile-use
-fbranch-probabilities
-fprofile-reorder-functions
-fprofile-values
-ftracer
-funroll-loops
-fvpt
While working to make PGO easier to use, would it make sense to evaluate (again?) the impact of -O3 on a full CMSSW build ?
This might be even more interesting in conjunction with -march=x86-64-v3 or -march=x86-64-v4.
[1] https://gcc.gnu.org/onlinedocs/gcc-12.3.0/gcc/Optimize-Options.html
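For reference, a minimal way to reproduce this comparison locally is to ask GCC to dump the optimisers it enables for each option set and diff the results (a sketch; the exact lists depend on the GCC version and target, and the file names are just placeholders):
# dump the optimisation flags enabled by each option set
gcc -O2 -Q --help=optimizers > flags-O2.txt
gcc -O3 -Q --help=optimizers > flags-O3.txt
gcc -O2 -fprofile-use -Q --help=optimizers > flags-O2-pgo.txt
# flags enabled only by -O3, and flags added by -fprofile-use on top of -O2
diff flags-O2.txt flags-O3.txt
diff flags-O2.txt flags-O2-pgo.txt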
assign core
New categories assigned: core
@Dr15Jones,@makortel,@smuzaffar you have been requested to review this Pull request/Issue and eventually sign? Thanks
A new Issue was created by @fwyzard.
@smuzaffar, @makortel, @Dr15Jones, @sextonkennedy, @antoniovilela, @rappoccio can you please review it and eventually sign/assign? Thanks.
-O3 seems to have a non-negligible impact on the HLT performance.
The reference, using CMSSW_14_0_6_MULTIARCHS in x86-64-v3 mode, is
Running 4 times over 10300 events with 8 jobs, each with 32 threads, 24 streams and 1 GPUs
691.1 ± 0.2 ev/s (10000 events, 98.6% overlap)
693.9 ± 0.2 ev/s (10000 events, 98.4% overlap)
695.3 ± 0.2 ev/s (10000 events, 99.4% overlap)
693.4 ± 0.2 ev/s (10000 events, 99.1% overlap)
--------------------
693.5 ± 1.7 ev/s
The same test with the release rebuilt with -O3 gives a ~2% speed-up:
Running 4 times over 10300 events with 8 jobs, each with 32 threads, 24 streams and 1 GPUs
710.6 ± 0.2 ev/s (10000 events, 99.3% overlap)
709.1 ± 0.2 ev/s (10000 events, 98.9% overlap)
711.9 ± 0.2 ev/s (10000 events, 99.1% overlap)
710.3 ± 0.2 ev/s (10000 events, 98.9% overlap)
--------------------
710.5 ± 1.2 ev/s
@missirol FYI
I wonder what level of physics validation will be needed. I guess the tests in https://github.com/cms-sw/cmsdist/pull/9183 will at least tell us something.
In principle we should expect the same results with -O2 and -O3 ?
https://github.com/cms-sw/cmsdist/pull/9183#issuecomment-2105147883 indeed suggests there would be no changes in results (or that any potential change would be rarer than what is probed by the PR tests)
In principle we should expect the same results with -O2 and -O3 ?
One could hope. At -O3 the compiler will be more aggressive about taking advantage of undefined behavior, so different results are likely bugs.
And indeed there are differences :-/
Running the HLT over 58k events, I find 51 events with at least one difference:
Events Accepted Gained Lost Other Trigger
58275 1 - - ~10 HLT_DoubleEle8_CaloIdM_TrackIdM_Mass8_DZ_PFHT350_v28
58275 2 - - ~10 HLT_DoubleEle8_CaloIdM_TrackIdM_Mass8_PFHT350_v28
58275 144 - -2 ~1 HLT_Ele30_WPTight_Gsf_v9
58275 137 - -2 ~1 HLT_Ele32_WPTight_Gsf_v23
58275 113 - -2 ~1 HLT_Ele35_WPTight_Gsf_v17
58275 86 - -1 ~1 HLT_Ele38_WPTight_Gsf_v17
58275 74 - - ~1 HLT_Ele40_WPTight_Gsf_v17
58275 139 - -2 ~1 HLT_Ele32_WPTight_Gsf_L1DoubleEG_v17
58275 2 - - ~1 HLT_Mu8_Ele8_CaloIdM_TrackIdM_Mass8_PFHT350_DZ_v27
58275 5 - - ~1 HLT_Mu8_Ele8_CaloIdM_TrackIdM_Mass8_PFHT350_v27
58275 13 - - ~3 HLT_Ele23_Ele12_CaloIdL_TrackIdL_IsoVL_DZ_v27
58275 23 +1 -1 ~1 HLT_Ele23_Ele12_CaloIdL_TrackIdL_IsoVL_v27
58275 49 - -1 ~1 HLT_Ele30_eta2p1_WPTight_Gsf_CentralPFJet35_EleCleaned_v21
58275 27 - - ~2 HLT_Ele28_eta2p1_WPTight_Gsf_HT150_v21
58275 7 - - ~16 HLT_Ele28_HighEta_SC20_Mass55_v21
58275 6 - - ~1 HLT_DiMu4_Ele9_CaloIdL_TrackIdL_DZ_Mass3p8_v25
58275 11 - - ~2 HLT_Ele14_eta2p5_IsoVVVL_Gsf_PFHT200_PNetBTag0p53_v1
58275 17 - - ~3 HLT_Ele50_CaloIdVT_GsfTrkIdT_PFJet165_v26
58275 9 - - ~1 HLT_Ele24_eta2p1_WPTight_Gsf_LooseDeepTauPFTauHPS30_eta2p1_CrossL1_v9
58275 5 - - ~1 HLT_Ele24_eta2p1_WPTight_Gsf_PNetTauhPFJet30_Tight_eta2p3_CrossL1_v2
58275 7 - - ~1 HLT_Ele24_eta2p1_WPTight_Gsf_PNetTauhPFJet30_Medium_eta2p3_CrossL1_v2
58275 8 - - ~1 HLT_Ele24_eta2p1_WPTight_Gsf_PNetTauhPFJet30_Loose_eta2p3_CrossL1_v2
58275 27 - - ~1 HLT_QuadPFJet103_88_75_15_PNetBTag_0p4_VBF2_v2
58275 23 - - ~1 HLT_QuadPFJet103_88_75_15_PNet2BTag_0p4_0p12_VBF1_v2
58275 25 - - ~1 HLT_QuadPFJet105_88_76_15_PNetBTag_0p4_VBF2_v2
58275 22 - - ~1 HLT_QuadPFJet105_88_76_15_PNet2BTag_0p4_0p12_VBF1_v2
58275 16 - - ~1 HLT_QuadPFJet111_90_80_15_PNetBTag_0p4_VBF2_v2
58275 20 - - ~1 HLT_QuadPFJet111_90_80_15_PNet2BTag_0p4_0p12_VBF1_v2
58275 718 - - ~1 HLT_QuadPFJet100_88_70_30_v6
58275 602 - - ~1 HLT_QuadPFJet105_88_75_30_v5
58275 485 - - ~1 HLT_QuadPFJet111_90_80_30_v5
58275 17 - - ~1 HLT_QuadPFJet100_88_70_30_PNet1CvsAll0p5_VBF3Tight_v6
58275 14 - - ~1 HLT_QuadPFJet105_88_75_30_PNet1CvsAll0p5_VBF3Tight_v6
58275 4 - - ~1 HLT_QuadPFJet111_90_80_30_PNet1CvsAll0p6_VBF3Tight_v6
58275 2 - - ~3 HLT_Ele50_CaloIdVT_GsfTrkIdT_AK8PFJet220_SoftDropMass40_v8
58275 0 - - ~3 HLT_Ele50_CaloIdVT_GsfTrkIdT_AK8PFJet220_SoftDropMass40_PNetBB0p06_v5
58275 2 - - ~3 HLT_Ele50_CaloIdVT_GsfTrkIdT_AK8PFJet230_SoftDropMass40_v8
58275 0 - - ~3 HLT_Ele50_CaloIdVT_GsfTrkIdT_AK8PFJet230_SoftDropMass40_PNetBB0p06_v5
58275 0 - - ~3 HLT_Ele50_CaloIdVT_GsfTrkIdT_AK8PFJet230_SoftDropMass40_PNetBB0p10_v5
58275 36 +1 - ~2 HLT_DoubleEle6p5_eta1p22_mMax6_v8
58275 28 - - ~2 HLT_DoubleEle8_eta1p22_mMax6_v8
58275 14 - - ~1 HLT_DoubleEle10_eta1p22_mMax6_v8
58275 227 +5 -4 ~1 HLT_SingleEle8_v7
58275 196 - - ~1 HLT_VBF_DiPFJet75_45_Mjj800_DiPFJet60_v2
58275 177 - - ~1 HLT_VBF_DiPFJet75_45_Mjj850_DiPFJet60_v2
The results are reproducible:
- 2 runs with the baseline release (-O2) show no differences between them
- 2 runs with the rebuilt release (-O3) show no differences between them
- -O3 vs -O2 gives the differences above
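For reference, a per-path comparison like the table above can be produced with the hltDiff utility shipped with CMSSW. A sketch, assuming the -o/-n options for the old and new files, with placeholder file names:
# compare the trigger results stored in the outputs of the -O2 and -O3 jobs
hltDiff -o output_O2.root -n output_O3.root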
@cms-sw/reconstruction-l2 do you have any suggestions on how we should proceed to identify the differences in the reconstruction due to the more aggressive compiler optimisations ?
-O3 seems to have a non-negligible impact also on the performance of the offline reconstruction.
I've used the build from https://github.com/cms-sw/cmsdist/pull/9183 to compare the performance of step3 of the workflow 12834.0_TTbar_14TeV+2024, running over 100 TTbar events with pileup from /RelValTTbar_14TeV/CMSSW_14_1_0_pre2-PU_140X_mcRun3_2024_realistic_v7_STD_2024_PU-v1/GEN-SIM-DIGI-RAW.
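For reference, the relval workflow itself can be run locally with runTheMatrix (a sketch; I am only assuming the usual -l option to select the workflow number):
# run the 12834.0 workflow locally; the generated step3 configuration can then be reused for the timing comparison
runTheMatrix.py -l 12834.0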
With CMSSW_14_1_X_2024-05-12-0000 I get
Running 3 times over 130 events with 1 jobs, each with 8 threads, 8 streams and 0 GPUs
0.440 ± 0.007 ev/s (100 events)
0.447 ± 0.005 ev/s (100 events)
0.448 ± 0.005 ev/s (100 events)
--------------------
0.445 ± 0.004 ev/s
With CMSSW_14_1_X_2024-05-12-0000 plus https://github.com/cms-sw/cmsdist/pull/9183 I get
Running 3 times over 130 events with 1 jobs, each with 8 threads, 8 streams and 0 GPUs
0.449 ± 0.007 ev/s (100 events)
0.455 ± 0.009 ev/s (100 events)
0.453 ± 0.008 ev/s (100 events)
--------------------
0.452 ± 0.003 ev/s
for a 1.6% speed-up.
@cms-sw/reconstruction-l2 FYI
Repeating the measurement with more jobs and higher statistics, the results look more stable, with a 1.4% speed-up.
CMSSW_14_1_X_2024-05-12-0000 (-O2)
Running 4 times over 930 events with 16 jobs, each with 8 threads, 8 streams and 0 GPUs
6.203 ± 0.002 ev/s (900 events, 99.0% overlap)
6.202 ± 0.003 ev/s (900 events, 99.2% overlap)
6.213 ± 0.002 ev/s (900 events, 99.3% overlap)
6.193 ± 0.002 ev/s (900 events, 99.3% overlap)
--------------------
6.203 ± 0.008 ev/s
cms-sw/cmsdist#9183 (-O3)
Running 4 times over 930 events with 16 jobs, each with 8 threads, 8 streams and 0 GPUs
6.291 ± 0.002 ev/s (900 events, 99.4% overlap)
6.297 ± 0.002 ev/s (900 events, 99.1% overlap)
6.290 ± 0.003 ev/s (900 events, 98.8% overlap)
6.288 ± 0.002 ev/s (900 events, 99.1% overlap)
--------------------
6.291 ± 0.004 ev/s
I believe that the best approach would be to produce the usual Reco Timing pie chart, in order to identify which RECO modules are responsible for the gain. I have to figure out how to plug https://github.com/cms-sw/cmsdist/pull/9183 in a standalone way; would you have a recipe?
From https://cmssdt.cern.ch/SDT/jenkins-artifacts/pull-request-integration/PR-be9632/39349/summary.html it looks like, in principle, there are no salient physics RECO changes arising from the change.
Thanks
I have to figure out how to plug cms-sw/cmsdist#9183 in a standalone way; would you have a recipe?
If you can generate the pie chart plots from a local release, you can run
/cvmfs/cms-ci.cern.ch/week1/cms-sw/cmsdist/9183/39349/install.sh
to set up a local build of CMSSW_14_1_X_2024-05-12-0000 rebuilt with -O3.
I've got some results based on lxplus8, but I don't trust them since they fluctuate somewhat, I guess due to the lack of a dedicated machine. Since el8 is needed, could you advise on where to perform this test? For RECO profiling we used to launch tests on vocms011, but it is cc7, and the rebuild with -O3 requires el8. Thank you
You can use a container (e.g. by running cmssw-el8 first, see https://cms-sw.github.io/singularity.html for more information) to run el8 binaries on slc7 nodes, or you can use one of the cmsdev nodes (see e.g. https://github.com/cms-sw/cmssw/issues/44374).
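For example, something along these lines should work on a CC7 host (a sketch; the exact container options and install procedure may need adjusting):
# start an EL8 container (cvmfs should be available inside by default)
cmssw-el8
# inside the container, run the install script mentioned above to set up the local -O3 build
bash /cvmfs/cms-ci.cern.ch/week1/cms-sw/cmsdist/9183/39349/install.sh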
I finally have some sensible results on a dedicated machine (cmsdev40), but they are based on 100-event jobs, so the uncertainty is still large. I have added a Run4 PU profile to the workflow that Andrea used (wf 12834.0), trying to enlarge the gain over the MkFit and Pixel/Track seeding modules. That's why an event takes ~15 s to be reconstructed (single thread and single stream, since FastTimer does not allow running multistream [1]).
Base CMSSW_14_1_X_2024-05-12-0000: https://www.hep.uniovi.es/jfernan/circles/web/piechart.php?local=false&dataset=base_11834.21_withPU&resource=time_real&colours=default&groups=packages&show_labels=true&threshold=0
Base+https://github.com/cms-sw/cmsdist/pull/9183 + Rebuild: https://www.hep.uniovi.es/jfernan/circles/web/piechart.php?local=false&dataset=pr9183cmsdist_11834.21_withPU_v2&resource=time_real&colours=default&groups=packages&show_labels=true&threshold=0
The gain seems general, at the level of ~1%, especially in RecoTracker, RecoParticleFlow and RecoVertex.
I am now running on more events, trying to reduce the uncertainty.
[1] The framework is configured to use at least two streams, but the following modules require synchronizing on LuminosityBlock boundaries: FastTimerServiceClient fastTimerServiceClient
(single thread and single stream, since FastTimer does not allow running multistream [1])
[1] The framework is configured to use at least two streams, but the following modules require synchronizing on LuminosityBlock boundaries: FastTimerServiceClient fastTimerServiceClient
The real cause is that FastTimerServiceClient does not support concurrent LuminosityBlocks. You can work around the exception by setting the number of concurrent lumis explicitly, along the lines of
process.options.numberOfConcurrentLuminosityBlocks = 1
after which any number of streams should work, at the cost of the LuminosityBlock transitions becoming more costly (which is probably not that important for this test).
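In a cmsDriver.py-driven job the same setting can be passed through --customise_commands, e.g. (a sketch, to be merged with whatever customisation string is already in use):
# limit the job to one LuminosityBlock in flight, so that any number of streams can be used
cmsDriver.py step3 ... --customise_commands 'process.options.numberOfConcurrentLuminosityBlocks = 1'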
The workaround should actually have been part of the exception message, see https://github.com/cms-sw/cmssw/blob/e95546f0a9c574532fe038a6483fb99e0a98a7b7/FWCore/Framework/src/EventProcessor.cc#L2477-L2479. Is there some way we could improve the exception message to make it clearer?
[1] The framework is configured to use at least two streams, but the following modules require synchronizing on LuminosityBlock boundaries: FastTimerServiceClient fastTimerServiceClient
The FastTimerServiceClient is supposed to run during the DQM harvesting, not during the reconstruction step.
I have removed the DQM and Validation parts of the step3 chunk that I am testing, but it seems that the setting from Matti is still needed.
This is my cmsDriver config:
cmsDriver.py step3 -s RAW2DIGI,L1Reco,RECO,RECOSIM --conditions auto:phase1_2022_realistic --datatier AODSIM -n 100 --eventcontent AODSIM --geometry DB:Extended --era Run3 --pileup Run3_Flat55To75_PoissonOOTPU --pileup_input das:/RelValMinBias_14TeV/CMSSW_13_1_0_pre1-130X_mcRun3_2022_realistic_withNewBSFromEOY2022Data_v2_RV186-v1/GEN-SIM --filein file:step2.root --fileout file:step3.root --customise HLTrigger/Timer/FastTimer.customise_timer_service_singlejob --customise_commands 'process.FastTimerService.writeJSONSummary=True;process.FastTimerService.jsonFileName="step3_circles.json"' &> step3_TTbar_14TeV+2021PU_ProdLike.log
This is my cmsDriver config:
cmsDriver.py step3 -s RAW2DIGI,L1Reco,RECO,RECOSIM --conditions auto:phase1_2022_realistic --datatier AODSIM -n 100 --eventcontent AODSIM --geometry DB:Extended --era Run3 --pileup Run3_Flat55To75_PoissonOOTPU --pileup_input das:/RelValMinBias_14TeV/CMSSW_13_1_0_pre1-130X_mcRun3_2022_realistic_withNewBSFromEOY2022Data_v2_RV186-v1/GEN-SIM --filein file:step2.root --fileout file:step3.root --customise HLTrigger/Timer/FastTimer.customise_timer_service_singlejob --customise_commands 'process.FastTimerService.writeJSONSummary=True;process.FastTimerService.jsonFileName="step3_circles.json"' &> step3_TTbar_14TeV+2021PU_ProdLike.log
Do you actually care about the timing DQM plots ?
If you don't (for example, if you are interested only in the text log and/or in the JSON output), it would be better not to enable the DQM part at all:
cmsDriver.py step3 ... --customise HLTrigger/Timer/FastTimer.addService --customise_commands 'process.FastTimerService.enableDQM=False;process.FastTimerService.writeJSONSummary=True;process.FastTimerService.jsonFileName="step3_circles.json"'
OK, thanks a lot, that explains why I could not run multithreading
Is there some way we could improve the exception message to make it clearer?
Probably not, it was my fault: I misread the workaround because there is an empty line between the problem description and the solution. I am sorry.
Results with 1k events, which again confirm that the gain is spread across all Reco packages without a clear winner:
Base: https://www.hep.uniovi.es/jfernan/circles/web/piechart.php?local=false&dataset=base_11834.21_withPU_1kevents&resource=time_real&colours=default&groups=packages&show_labels=true&threshold=0
Base+https://github.com/cms-sw/cmsdist/pull/9183 + Rebuild: https://www.hep.uniovi.es/jfernan/circles/web/piechart.php?local=false&dataset=pr9183cmsdist_11834.21_withPU_1kevents&resource=time_real&colours=default&groups=packages&show_labels=true&threshold=0
@makortel @fwyzard, how about testing this in a dedicated CMSSW_14_1_O3_X IB?
IMHO we could simply merge it, and use the next pre-release validation to spot any problems.