
Added SVE implementation to improve the performance on ARM architecture

divya2108 opened this pull request 1 year ago • 46 comments

Motivation: This pull request aims to improve the performance of the XGBoost training algorithm on ARM architectures by leveraging SVE intrinsics.

Brief description:

  1. Including SVE intrinsics improves performance by 55% compared to the default ARM build.
  2. The modified function iterates over a row of data and updates a histogram based on the given indices and offsets (a scalar sketch of this loop is shown below).
  3. The accuracy has been verified after the modifications.
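
For readers less familiar with this kernel, here is a hedged scalar sketch of the kind of loop being vectorized; all identifiers are illustrative stand-ins, not XGBoost's exact code:

```cpp
#include <cstddef>
#include <cstdint>

struct GradientPair { double grad, hess; };

// Illustrative scalar histogram update for one row: each element's bin
// is its stored gradient index plus a per-column offset, and the row's
// gradient pair is accumulated into that bin.
void UpdateHistRow(const std::uint16_t *gradient_index,
                   const std::uint32_t *offsets, std::size_t row_size,
                   GradientPair row_gpair, GradientPair *hist) {
  for (std::size_t j = 0; j < row_size; ++j) {
    const std::uint32_t bin = gradient_index[j] + offsets[j];
    hist[bin].grad += row_gpair.grad;
    hist[bin].hess += row_gpair.hess;
  }
}
```

This per-element gather/accumulate pattern is what the SVE path vectorizes.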

divya2108 avatar Aug 06 '24 09:08 divya2108

Thank you for the PR! I'm not an expert in SIMD; is it guaranteed that the pointers are aligned and the memory allocation is padded for the intrinsics?

trivialfis avatar Aug 06 '24 15:08 trivialfis

Hi @trivialfis, the code has been thoroughly validated to ensure alignment and padding issues are addressed. The datatypes have not been altered from the scalar code; instead, the original scalar operations have been translated into equivalent SVE intrinsics, and the code compiles without errors. All potential accuracy issues have been resolved, and the results have been verified on widely used datasets such as Iris, Airlines Delay, and breast cancer detection.

divya2108 avatar Aug 07 '24 11:08 divya2108

Thank you for the detailed info. Could you please help explain why it works without the use of specialized allocators like https://en.cppreference.com/w/c/memory/aligned_alloc ? It's important for us to know the logic for future maintenance.

trivialfis avatar Aug 07 '24 15:08 trivialfis

Specialized allocators like aligned_alloc() don't help with SVE intrinsics because:

  1. ARM's SVE architecture processes data in parallel using vector-length-agnostic (VLA) instructions, which inherently account for data alignment. For example, on a system with a 256-bit vector length, a single VLA load brings 8 float elements (8 × 32 bits) into an SVE register.
  2. Most of the instructions, including the widening and narrowing instructions, help take care of data alignment.

divya2108 avatar Aug 09 '24 07:08 divya2108

Hi @trivialfis, additionally, SVE provides predicate registers that enable key features such as:

  • Per-lane predication, which allows SIMD instructions to execute conditionally on specific lanes of a SIMD register.
  • Predicate-driven loop control and management, which handles data that does not align perfectly with the vector length.

A minimal sketch of both features follows.
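
As an illustration of both features (a standalone sketch, not the PR's histogram kernel; it assumes compilation with SVE enabled, e.g. -march=armv8-a+sve):

```cpp
#include <arm_sve.h>
#include <cstdint>

// Sum an arbitrary-length float array with a vector-length-agnostic
// (VLA), predicated SVE loop.
float SveSum(const float *data, std::int64_t n) {
  svfloat32_t acc = svdup_n_f32(0.0f);
  for (std::int64_t i = 0; i < n; i += svcntw()) {
    // svwhilelt_b32 masks off lanes past `n`, so the final iteration
    // cannot load out of bounds even when n is not a multiple of the
    // hardware vector length; no padding or aligned_alloc is needed.
    const svbool_t pg = svwhilelt_b32(i, n);
    acc = svadd_f32_m(pg, acc, svld1_f32(pg, data + i));
  }
  return svaddv_f32(svptrue_b32(), acc);  // horizontal reduction
}
```

The same predication mechanism is what keeps the histogram kernel's loads and stores in bounds.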

divya2108 avatar Aug 12 '24 15:08 divya2108

> Thank you for the detailed info. Could you please help explain why it works without the use of specialized allocators like https://en.cppreference.com/w/c/memory/aligned_alloc ? It's important for us to know the logic for future maintenance.

Hi @trivialfis,

As @divya2108 mentioned, SVE has predication support.

These lines create masks which limit the load/stores from going out of bounds: https://github.com/dmlc/xgboost/blob/5194c1726bfa7f5d1b10856d87889f4fc1871638/src/common/hist_util.cc#L265-L266

SVE is also happy to do element-aligned loads and stores rather than full vectors.

Mousius avatar Aug 13 '24 17:08 Mousius

Thank you for the explanation! I will take a deeper look.

trivialfis avatar Aug 16 '24 19:08 trivialfis

> Started looking into this PR today. Thank you for working on the ARM intrinsics, but could you please add detailed code comments and extract the code into an independent section (like a function that can be inlined)? Most people here (me included) have a background closer to data science than to low-level programming.

Hi @trivialfis, Thank you for suggesting the appropriate changes. I have made the modifications as recommended. Could you please review the updated changes?

divya2108 avatar Aug 26 '24 08:08 divya2108

The CMake logic looks right. It only compiles SVE code when the compiler supports it, and at runtime it triggers the SVE code only when the hardware supports SVE (I see there's a runtime HW check for the SVE ISA). Changes LGTM.

maajidkhann avatar Sep 03 '24 08:09 maajidkhann

> The CMake logic looks right. It only compiles SVE code when the compiler supports it, and at runtime it triggers the SVE code only when the hardware supports SVE (I see there's a runtime HW check for the SVE ISA). Changes LGTM.

Can you point me to where the runtime check happens? As far as I can tell, this only works if the build environment supports compiling and running SVE.

The new path is conditionally compiled with #ifdef XGBOOST_SVE_SUPPORT_DETECTED with no fallback at runtime here: https://github.com/dmlc/xgboost/pull/10680/files#diff-def34075edb2b3bdb6dc7b5ebcffd518793520fd4fffd70870b12f076a3cb481R305-R308

This makes me think this is only for users compiling from sources on a specific piece of hardware. If we wanted this to work in the generically distributed wheel, we'd have to do the SVE runtime check instead of the #ifdef.

Correct me if I'm wrong 😸
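
To make the distinction concrete, here is a hedged sketch of the dispatch pattern being discussed; the helper names are hypothetical, and only the XGBOOST_SVE_SUPPORT_DETECTED macro comes from the PR:

```cpp
#include <cstddef>

// Hypothetical kernels standing in for the PR's scalar and SVE paths.
void BuildHistScalar(const float *data, std::size_t n);
void BuildHistSve(const float *data, std::size_t n);
bool SveHwSupported();  // runtime hardware probe (sketched further below)

void BuildHistDispatch(const float *data, std::size_t n) {
#if defined(XGBOOST_SVE_SUPPORT_DETECTED)
  // The SVE kernel is present in the binary only if the toolchain could
  // compile it, but whether it runs is decided at runtime, so a
  // generically built wheel still works on non-SVE hardware.
  if (SveHwSupported()) {
    BuildHistSve(data, n);
    return;
  }
#endif
  BuildHistScalar(data, n);  // portable fallback
}
```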

Mousius avatar Sep 03 '24 12:09 Mousius

> If we wanted this to work in the generically distributed wheel, we'd have to do the SVE runtime check instead of the #ifdef.

I agree with you. I found the HW detection logic here: https://github.com/dmlc/xgboost/pull/10680/files#diff-5650b69c609ef22dea88915eb256a6838341248d3ddfd17430388f7f7e58c4feR24

But this is just for compile time. I think similar logic needs to be used during runtime, i.e., a runtime check is required.

Since there's already working SVE HW detection logic, it should be easy to reintroduce it in the source file.

CC @divya2108

maajidkhann avatar Sep 03 '24 14:09 maajidkhann

Is SVE guaranteed to be available on all ARM implementations?

trivialfis avatar Sep 07 '24 16:09 trivialfis

> Is SVE guaranteed to be available on all ARM implementations?

No, SVE is not guaranteed to be available on all ARM implementations. SVE is an optional extension of the Arm architecture: newer processors such as Graviton3, Graviton4, and Grace implement it, but it is not mandatory for all ARM CPUs. The code in hist_util.cc checks for SVE support at runtime to ensure that the target hardware supports it, and runs the default code otherwise.
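
On Linux, such a runtime probe typically reads the auxiliary vector; a minimal sketch (the PR's actual helper may be implemented differently):

```cpp
#include <sys/auxv.h>  // getauxval, AT_HWCAP (Linux-only)

#ifndef HWCAP_SVE
#define HWCAP_SVE (1UL << 22)  // arm64 hwcap bit advertising SVE
#endif

// Returns true if the kernel reports SVE support on this CPU.
bool SveHwSupported() {
  return (getauxval(AT_HWCAP) & HWCAP_SVE) != 0;
}
```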

divya2108 avatar Sep 09 '24 07:09 divya2108

> Since there's already working SVE HW detection logic, it should be easy to reintroduce it in the source file.
>
> CC @divya2108

Yes, I agree. Thank you for bringing this to our notice. I have added an SVE hardware check at runtime. The code is now compiled generically and falls back to the default path if SVE hardware support is not detected.

I have verified this by building the code on different architectures. Here is a summary for more clarity: [image: per-architecture build and fallback summary]

divya2108 avatar Sep 09 '24 08:09 divya2108

@trivialfis Thanks for the initial review and your comments. Please let us know of any additional feedback that needs clarification or evaluation from our side, or any improvements to incorporate into the proposed implementation. Thanks. cc: @divya2108

rageshhajela16 avatar Sep 20 '24 20:09 rageshhajela16

Sorry for the slow reply; I got stuck with some other work lately. One question: is it possible to reduce the call frequency of check_sve_hw_support to maybe once per training session?

trivialfis avatar Sep 23 '24 19:09 trivialfis

> Sorry for the slow reply; I got stuck with some other work lately. One question: is it possible to reduce the call frequency of check_sve_hw_support to maybe once per training session?

Yes, it's possible to reduce the frequency of calls to check_sve_hw_support by implementing a caching mechanism that checks SVE hardware support only once, at the beginning of a training session. I have stored the result, and it is now reused throughout the session.
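
In C++, one common way to cache this, reusing the SveHwSupported probe sketched earlier (again a hedged sketch, not necessarily the PR's exact code):

```cpp
bool SveHwSupported();  // the runtime probe from the earlier sketch

// A function-local static in C++11 is initialized exactly once, and
// thread-safely, so the hardware probe runs a single time per process
// instead of once per histogram-kernel call.
bool SveHwSupportCached() {
  static const bool has_sve = SveHwSupported();
  return has_sve;
}
```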

divya2108 avatar Oct 03 '24 08:10 divya2108

Hi @trivialfis, just wanted to follow up on the code review. Let me know if you need any additional details or clarifications.

divya2108 avatar Oct 09 '24 06:10 divya2108

> Hi @trivialfis, just wanted to follow up on the code review. Let me know if you need any additional details or clarifications.

Hi @trivialfis, we have pushed all the necessary changes. Kindly review and let us know if any additional details or modifications are required. Thanks in advance for your time reviewing this.

rageshhajela16 avatar Oct 17 '24 09:10 rageshhajela16

Apologies for the slow response, will look into this. Thank you very much for your patience!

trivialfis avatar Oct 17 '24 10:10 trivialfis

@hcho3 I see you have assigned yourself to the PR. Thank you for volunteering! Feel free to review the code.

I recently got access to a Grace machine and might be able to do some tests there.

trivialfis avatar Oct 18 '24 16:10 trivialfis

Sorry, it must have been a mistake. I will try to look at the PR in the next few days, however.

hcho3 avatar Oct 18 '24 20:10 hcho3

We should ensure that the CI pipeline tests XGBoost with the SVE intrinsics.

Is it enabled on the CI?

trivialfis avatar Oct 22 '24 06:10 trivialfis

Thanks @trivialfis for your review of and contributions to this implementation! Please let us know if you would like us to contribute any additional fixes based on the review comments; we would like to confirm with you before proceeding, to avoid any duplicate work. Thanks again for your time reviewing this PR. We appreciate it! cc: @divya2108

rageshhajela16 avatar Nov 02 '24 13:11 rageshhajela16

@hcho3 Do you think it's possible to have this in the pip wheel?

trivialfis avatar Nov 07 '24 09:11 trivialfis

Could you please share the CPU you were using for the benchmarks? I ran a benchmark on a Grace machine (I work for NVIDIA) with synthetic data, and the performance is actually lower. I have verified that the row-wise kernel is being used.

My synthetic data:

  • n_samples: 67108864
  • n_features: 256

Training parameters:

  • 64 iterations
  • 6 max depth

Compilers:

  • g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
  • SVE disabled

[63]    Train-rmse:18.88612
Qdm train (sec) ended in:  123.9944839477539 seconds.
Trained for 64 iterations.
{'load-batches': {'load (sec)': 7.225183725357056}, 'load-all': {'concat (sec)': 1.3589859008789062e-05}, 'Qdm': {'Train-DMatrix (sec)': 29.529566526412964, 'train (sec)': 123.9944839477539}}

  • SVE enabled

[63]    Train-rmse:18.88612
Qdm train (sec) ended in:  154.86435317993164 seconds.
Trained for 64 iterations.
{'load-batches': {'load (sec)': 7.193156003952026}, 'load-all': {'concat (sec)': 1.430511474609375e-05}, 'Qdm': {'Train-DMatrix (sec)': 29.482257604599, 'train (sec)': 154.86435317993164}}

It's okay to be slower on certain platforms; we can look for a way to disable it. But I would like to get some understanding of how the performance works on your platform as well.

trivialfis avatar Nov 07 '24 11:11 trivialfis

> Do you think it's possible to have this in the pip wheel?

Yes, it should be possible.

hcho3 avatar Nov 08 '24 01:11 hcho3

> Could you please share the CPU you were using for the benchmarks?

These are the machine and dataset details that I used:

  • Machine: AWS Graviton3, ARM-based CPU
  • Dataset: Kaggle Higgs boson dataset (250000 samples, 32 features)
  • Training parameters: 120 iterations, 6 max_depth
  • Compiler: g++ (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0

divya2108 avatar Nov 08 '24 08:11 divya2108

Hi @trivialfis, just wanted to follow up on the PR review. Let me know if there’s anything I can do to help or clarify. Thanks!

divya2108 avatar Nov 25 '24 06:11 divya2108

Apologies for the slow response here. I need to test it on AWS, and find a way to disable it for Grace.

trivialfis avatar Nov 26 '24 08:11 trivialfis