Added SVE implementation to improve performance on the ARM architecture
Motivation: This pull request aims to improve the performance of XGBoost's training algorithm on the ARM architecture by leveraging SVE (Scalable Vector Extension) intrinsics.
Brief description:
- Including SVE intrinsics improves performance by 55% compared to the default ARM code path.
- The modified function iterates over a row of data and updates a histogram based on the given indices and offsets (a simplified sketch follows this list).
- The accuracy has been verified after the modifications.
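For reviewers less familiar with SVE, here is a minimal sketch of the kind of kernel described above. This is not the PR's exact code: the function and variable names are illustrative, and it assumes bin indices within a vector are unique (which holds when per-feature offsets map each element of a row to a disjoint bin range); otherwise the scatter would drop updates. Compile with something like `-march=armv8-a+sve`.

```cpp
#include <arm_sve.h>
#include <cstdint>

// Sketch: hist[index[i] + offsets[i]] += grad[i] for one row of n elements.
void UpdateRowHistSve(double* hist, const double* grad,
                      const uint32_t* index, const uint32_t* offsets,
                      int64_t n) {
  for (int64_t i = 0; i < n; i += svcntd()) {        // svcntd(): 64-bit lanes per vector
    svbool_t pg = svwhilelt_b64(i, n);               // mask off lanes past n
    svuint64_t idx = svld1uw_u64(pg, index + i);     // widening load: u32 -> u64 lanes
    svuint64_t off = svld1uw_u64(pg, offsets + i);
    svuint64_t bin = svadd_u64_m(pg, idx, off);      // bin = index + offset
    svfloat64_t g  = svld1_f64(pg, grad + i);
    svfloat64_t h  = svld1_gather_u64index_f64(pg, hist, bin);  // gather hist[bin]
    h = svadd_f64_m(pg, h, g);                       // accumulate gradients
    svst1_scatter_u64index_f64(pg, hist, bin, h);    // scatter back
  }
}
```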
Thank you for the PR! I'm not an expert in SIMD. Is it guaranteed to have aligned pointers and padded memory allocation for the intrinsics?
Hi @trivialfis, the code has been thoroughly validated to ensure that alignment and padding issues are addressed. The data types have not been altered from the scalar code; instead, the original scalar operations have been translated into SIMD using equivalent SVE intrinsics, and the code compiles without errors. All potential accuracy issues have been resolved and verified on widely used datasets such as Iris, Airlines delay, and breast cancer detection.
Thank you for the detailed info. Could you please help explain why it works without the use of specialized allocators like https://en.cppreference.com/w/c/memory/aligned_alloc ? It's important for us to know the logic for future maintenance.
Specialized allocators like aligned_alloc() aren't needed with SVE intrinsics because:
- ARM's SVE SIMD architecture handles data processing in parallel and inherently accounts for data alignment. For example, on a system with a 256-bit vector length, we load 8 float elements (8 × 32 bits) into an SVE register through VLA (vector-length-agnostic) instructions.
- Most of the instructions, including the widening and narrowing instructions, help take care of data alignment (see the sketch below).
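As an illustration, here is a hedged sketch of a widening load (names are illustrative, not from the PR): the narrow source array needs only element alignment, and the predicate guards the tail, so no padded allocation is required.

```cpp
#include <arm_sve.h>
#include <cstdint>

// Sketch: zero-extend 8-bit values to 32-bit lanes in a single instruction.
void WidenU8ToU32(const uint8_t* src, uint32_t* dst, int64_t n) {
  for (int64_t i = 0; i < n; i += svcntw()) {   // svcntw(): 32-bit lanes per vector
    svbool_t pg = svwhilelt_b32(i, n);          // disable lanes past n
    svuint32_t v = svld1ub_u32(pg, src + i);    // widening, element-aligned load
    svst1_u32(pg, dst + i, v);                  // element-aligned store
  }
}
```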
Hi @trivialfis, additionally, SVE provides predicate registers that enable key features such as:
- Per-lane predication, which allows SIMD instructions to be executed conditionally on specific lanes of a SIMD register.
- Predicate-driven loop control and management, which helps handle data that does not align perfectly with the vector length.
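To make both points concrete, a small sketch (illustrative names, not the PR's code): the `svwhilelt` predicate drives loop control for a tail that doesn't fill a full vector, and a comparison predicate conditionally updates only selected lanes.

```cpp
#include <arm_sve.h>
#include <cstdint>

// Sketch: a[i] += b[i] only where b[i] > 0, with no scalar branches.
void AddPositive(float* a, const float* b, int64_t n) {
  for (int64_t i = 0; i < n; i += svcntw()) {
    svbool_t pg = svwhilelt_b32(i, n);           // loop control: tail-safe mask
    svfloat32_t va = svld1_f32(pg, a + i);
    svfloat32_t vb = svld1_f32(pg, b + i);
    svbool_t pos = svcmpgt_n_f32(pg, vb, 0.0f);  // per-lane condition
    va = svadd_f32_m(pos, va, vb);               // inactive lanes keep va unchanged
    svst1_f32(pg, a + i, va);
  }
}
```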
Hi @trivialfis,
As @divya2108 mentioned, SVE has predication support.
These lines create masks which limit the load/stores from going out of bounds: https://github.com/dmlc/xgboost/blob/5194c1726bfa7f5d1b10856d87889f4fc1871638/src/common/hist_util.cc#L265-L266
SVE is also happy to do element-aligned loads and stores rather than requiring full-vector alignment.
Thank you for the explanation! I will take a deeper look.
Started looking into this PR today. Thank you for working on the ARM intrinsics, but could you please add detailed code comments and extract the code into an independent section (like a function that can be inlined)? Most people here (me included) have a background closer to data science than low-level programming.
Hi @trivialfis, Thank you for suggesting the appropriate changes. I have made the modifications as recommended. Could you please review the updated changes?
The CMake logic looks right. It only compiles SVE code when the compiler supports it, and at runtime it triggers the SVE code only when the hardware supports SVE (I see there's a runtime HW check for the SVE ISA). Changes LGTM.
Can you point me to where the runtime check happens? As far as I can tell, this only works if the build environment supports compiling and running SVE.
The new path is conditionally compiled with `#ifdef XGBOOST_SVE_SUPPORT_DETECTED` with no fallback at runtime here:
https://github.com/dmlc/xgboost/pull/10680/files#diff-def34075edb2b3bdb6dc7b5ebcffd518793520fd4fffd70870b12f076a3cb481R305-R308
This makes me think this is only for users compiling from sources on a specific piece of hardware. If we wanted this to work in the generically distributed wheel, we'd have to do the SVE runtime check instead of the `#ifdef`.
Correct me if I'm wrong 😸
I agree with you. I found the HW detection logic here: https://github.com/dmlc/xgboost/pull/10680/files#diff-5650b69c609ef22dea88915eb256a6838341248d3ddfd17430388f7f7e58c4feR24
But this is just for compile time. I think similar logic needs to be used at runtime; a runtime check is required.
Since there's already working SVE HW detection logic, it should be easy to reintroduce it in the source code file.
CC @divya2108
Is SVE guaranteed to be available for ARM implementation?
No, SVE is not guaranteed to be available on all ARM implementations. SVE is an optional extension to the ARMv8-A architecture: newer processors like Graviton3, Graviton4, and Grace implement it, but it is not mandatory for all ARM CPUs. The code in hist_util.cc checks for SVE support at runtime to ensure that the target hardware supports it, and runs the default code otherwise.
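For reference, a common way to probe for SVE at runtime on Linux/AArch64 looks like the sketch below (the PR's actual check may differ in naming and placement):

```cpp
#include <sys/auxv.h>   // getauxval
#include <asm/hwcap.h>  // HWCAP_SVE (AArch64)

// Returns true if the running CPU advertises SVE support.
bool CheckSveHwSupport() {
  return (getauxval(AT_HWCAP) & HWCAP_SVE) != 0;
}
```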
Yes, I agree. Thank you for bringing this to our notice. I have added an SVE hardware check at runtime (see the sketch below). Now the code is compiled generically and falls back to the default implementation if SVE hardware support is not detected.
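In other words, the pattern is roughly the sketch below: the SVE kernel is compiled only when the toolchain supports it, and selected only when the hardware reports SVE (function names here are hypothetical, not the PR's exact code):

```cpp
#include <cstdint>

// Hypothetical kernels standing in for the real ones.
void BuildHistScalar(double* hist, const double* grad, int64_t n);
#if defined(XGBOOST_SVE_SUPPORT_DETECTED)
void BuildHistSve(double* hist, const double* grad, int64_t n);
bool CheckSveHwSupport();  // e.g. the getauxval probe sketched above
#endif

void BuildHist(double* hist, const double* grad, int64_t n) {
#if defined(XGBOOST_SVE_SUPPORT_DETECTED)
  if (CheckSveHwSupport()) {       // runtime hardware check
    BuildHistSve(hist, grad, n);
    return;
  }
#endif
  BuildHistScalar(hist, grad, n);  // default path on non-SVE hardware/builds
}
```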
I have verified this by building the code on different architectures. Here is a summary for more clarity:
@trivialfis Thanks for the initial review and your comments. Could you please share any additional feedback that might need further clarification or evaluation from our side, or any improvements to incorporate into the proposed implementation? Thanks. cc: @divya2108
Sorry for the slow reply, got stuck with some other work lately. One question: is it possible to reduce the call frequency of `check_sve_hw_support` to maybe once per training session?
Yes, it's possible to reduce the frequency of calls to `check_sve_hw_support` by implementing a caching mechanism that checks the SVE hardware support status only once, at the beginning of a training session. I have stored the result, and it is reused throughout the session.
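A minimal sketch of that caching, assuming a probe function like the one sketched earlier (C++11 guarantees the function-local static is initialized exactly once, even with concurrent callers):

```cpp
bool CheckSveHwSupport();  // the actual probe, run only once

// Cached wrapper: the hardware is probed on the first call and the
// result is reused for the rest of the session.
bool SveHwSupportCached() {
  static const bool kHasSve = CheckSveHwSupport();
  return kHasSve;
}
```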
Hi @trivialfis, just wanted to follow up on the code review. Let me know if you need any additional details or clarifications.
Hi @trivialfis, we have pushed all the necessary changes. Kindly review and let us know if any additional details or modifications are required. Thanks in advance for your time reviewing this.
Apologies for the slow response, will look into this. Thank you very much for your patience!
@hcho3 I see you have assigned yourself to the PR. Thank you for volunteering! Feel free to review the code.
I recently got access to a Grace machine and might be able to do some tests there.
Sorry, it must have been a mistake. I will try to look at the PR over the next few days, however.
We should ensure that the CI pipeline tests XGBoost with the SVE intrinsics.
Is this enabled on the CI?
Thanks @trivialfis for your review and contributions to this implementation! Please let us know if you would like us to contribute any additional fixes based on the review comments. We would like to confirm with you before proceeding, to avoid any duplicate work. Thanks again for your time reviewing this PR. We appreciate it! cc: @divya2108
@hcho3 Do you think it's possible to have this in the pip wheel?
Could you please share the CPU you were using for the benchmarks? I ran a benchmark on a Grace machine (I work for NVIDIA) with synthetic data, and the performance is actually lower. I have verified that the row-wise kernel is being used.
My synthetic data:
- n_samples: 67108864
- n_features: 256
Training parameters:
- 64 iterations
- 6 max depth
Compilers:
- g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
* SVE disabled
[63] Train-rmse:18.88612
Qdm train (sec) ended in: 123.9944839477539 seconds.
Trained for 64 iterations.
{'load-batches': {'load (sec)': 7.225183725357056}, 'load-all': {'concat (sec)': 1.3589859008789062e-05}, 'Qdm': {'Train-DMatrix (sec)': 29.529566526412964, 'train (sec)': 123.9944839477539}}
* SVE enabled
[63] Train-rmse:18.88612
Qdm train (sec) ended in: 154.86435317993164 seconds.
Trained for 64 iterations.
{'load-batches': {'load (sec)': 7.193156003952026}, 'load-all': {'concat (sec)': 1.430511474609375e-05}, 'Qdm': {'Train-DMatrix (sec)': 29.482257604599, 'train (sec)': 154.86435317993164}}
It's okay to be slower on certain platforms; we can look for a way to disable it. But I would like to get some understanding of how the performance behaves on your platform as well.
Yes, it should be possible to include this in the pip wheel.
These are the machine and dataset details that I used:
- AWS Graviton3, ARM-based CPU
- Dataset details: Kaggle Higgs boson dataset (250,000 samples, 32 features)
- Training parameters: 120 iterations, 6 max_depth
- Compiler: g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Hi @trivialfis, just wanted to follow up on the PR review. Let me know if there’s anything I can do to help or clarify. Thanks!
Apologies for the slow response here. I need to test it on AWS and find a way to disable it for Grace.