
[CI, enhancement] add pytorch+gpu testing ci

Open · icfaust opened this pull request 7 months ago · 8 comments

Description

This PR introduces a public GPU CI job to sklearnex. It is not fully featured, but it provides the first public GPU testing. Due to issues with n_jobs support (which are being addressed in #2364), run times are extremely long but viable. The GPU is currently used only in the sklearn conformance steps, not in sklearnex/onedal testing, because this CI tests without dpctl installed for GPU offloading. In the future it will extract queues from data in combination with PyTorch, which has had Intel GPU capabilities since PyTorch 2.4 (https://docs.pytorch.org/docs/stable/notes/get_start_xpu.html). This will allow GPU testing in the other steps.
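For reference, a minimal sketch of what native Intel GPU usage looks like on the PyTorch side (assuming PyTorch >= 2.4 with Intel GPU drivers available):

```python
import torch

# PyTorch >= 2.4 ships a native "xpu" backend for Intel GPUs, so no dpctl
# is needed to place tensors on the device.
if torch.xpu.is_available():
    x = torch.ones(3, 3, device="xpu")
    print(x.device)  # e.g. xpu:0
else:
    x = torch.ones(3, 3)  # falls back to CPU on machines without an Intel GPU
```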

This CI is important for at least three reasons. sklearn tests array_api using the CuPy, PyTorch, and array_api_strict frameworks; PyTorch is the only GPU data framework without __sycl_usm_array_interface__ that is expected to work for both sklearn and sklearnex. Therefore: 1) it provides an array_api-only GPU testing framework to validate with sklearn conformance; 2) it is likely the first entry point for users who wish to use Intel GPU data natively (due to the size of the user base); 3) it validates that sklearnex can function properly without dpctl installed for GPU use, removing limitations on Python versions and dependency-stability issues. Note that PyTorch DOES NOT FOLLOW THE ARRAY_API STANDARD; sklearn uses array_api_compat to shoe-horn in PyTorch support. PyTorch has quirks that should be tested by sklearnex. This impacts how we design our estimators, as checking for __array_namespace__ is insufficient if we wish to support PyTorch.
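To illustrate that last point, a minimal sketch (assuming torch and array_api_compat are installed) of why __array_namespace__ checks miss PyTorch:

```python
import torch
from array_api_compat import array_namespace

x = torch.ones((2, 2))

# Duck-typed array API detection does not recognize torch tensors:
print(hasattr(x, "__array_namespace__"))  # False

# array_api_compat special-cases torch and returns a compatibility wrapper,
# which is how sklearn shoe-horns torch into its array_api code paths:
xp = array_namespace(x)
print(xp.__name__)  # array_api_compat.torch
```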

Unlike other public runners, it takes the strategy of splitting the build and test steps into separate jobs. The test step runs on a CPU-only runner and on a GPU runner at the same time. For simplicity it does not use a virtual environment such as conda or venv; however, it can reuse all of the previously written infrastructure.

It uses Python 3.12 and sklearn 1.4 for simplicity (i.e., to duplicate other GPU testing systems). This will be updated in a follow-up PR as the job sees further use (likely requiring different deselections).

When successful, a large increase in code coverage should be observed in Codecov, as coverage data is also made available.

This will be very important for validating the array_api changes coming to the codebase soon, which would otherwise be obscured by dpctl.

This required the following changes:

  • A new job, 'Identify oneDAL nightly', is created to remove code duplication in ci.yml; it identifies the oneDAL build to download for all of the GitHub Actions CI runners.
  • Changes to run_sklearn_tests.sh were required to get the GPU deselections to work publicly.
  • Renamed 'oneDALNightly/pip' to 'oneDALNightly/venv' to signify that a virtual environment is used instead of the package manager.
  • Patching of assert_all_finite would fail in combination with array_api dispatching; changes are made in daal4py to use DAAL only when the input is NumPy or a DataFrame. Because PyTorch uses the size attribute differently (it is a method returning the shape, not an element count), changes were needed for it as well.
  • Checking and moving data from GPU to CPU was incorrectly written for array_api, as we did not have a GPU data framework to test against. We need to verify the device via the __dlpack_device__ attribute instead, and then use asarray if __array__ is available, or from_dlpack if the __dlpack__ attribute is available. This required exposing some DLPack enums for verification (see the sketch after this list).
  • This PR includes changes from #2489, which were needed to limit the running time of CI; it will focus on PyTorch and NumPy for CPU and GPU.
  • Some torch tests are deselected, in line with the original array_api rollout (#2079).
  • test_learning_curve_some_failing_fits_warning[42] is deselected because of an unknown issue with _intercept_ and SVC on GPU (must be investigated).
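The sketch below (hypothetical code, not the actual sklearnex implementation) illustrates the device check and host transfer described in the __dlpack_device__ bullet; the .to("cpu") call is a torch-specific assumption:

```python
import numpy as np

kDLCPU = 1  # DLPack device-type enum for host memory (fixed by the DLPack spec)

def to_host(x):
    """Hypothetical helper: bring array-API data to host numpy."""
    device_type, _device_id = x.__dlpack_device__()
    if device_type != kDLCPU:
        # GPU-resident data must be copied to host by the producing
        # framework first; torch spells this x.to("cpu").
        x = x.to("cpu")
    if hasattr(x, "__array__"):
        return np.asarray(x)    # framework can hand numpy the data directly
    return np.from_dlpack(x)    # otherwise import via the DLPack protocol
```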

This will require the following PRs afterwards (by theme):

  • #2364: fix issues with thread affinity / Kubernetes pod operation for n_jobs.
  • Introduce PyTorch to onedal/tests/utils/_dataframes_support.py and onedal/tests/utils/_device_selection.py to enable public GPU testing in sklearnex.
  • Rewrite from_data in onedal/utils/_sycl_queue_manager.py to extract queues from __dlpack__ data (a special PyTorch interface is already in place in pybind11); see the first sketch after this list.
  • Introduce a centralized lazy-loading approach for the torch, dpnp, and dpctl.tensor frameworks due to their load times (likely following the strategy laid out in array_api_compat); see the second sketch after this list.
  • Update the sklearn version so it no longer replicates other CI systems.
  • Fix the issue with SVC and the _intercept_ attribute (the test_learning_curve_some_failing_fits_warning[42] sklearn conformance test).

No performance benchmarks necessary


PR should start as a draft, then move to the ready-for-review state after CI has passed and all applicable checkboxes are checked. This approach ensures that reviewers don't spend extra time asking for regular requirements.

You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way. For example, a PR with a docs update doesn't require performance checkboxes, while a PR with any change to actual code should have them and justify how the change is expected to affect performance (or the justification should be self-evident).

Checklist to comply with before moving PR from draft:

PR completeness and readability

  • [x] I have reviewed my changes thoroughly before submitting this pull request.
  • [x] I have commented my code, particularly in hard-to-understand areas.
  • [x] I have updated the documentation to reflect the changes or created a separate PR with the update and provided its number in the description, if necessary.
  • [x] Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • [x] I have added the respective label(s) to the PR if I have permission to do so.
  • [x] I have resolved any merge conflicts that might occur with the base branch.

Testing

  • [x] I have run it locally and tested the changes extensively.
  • [x] All CI jobs are green or I have provided justification why they aren't.
  • [x] I have extended the testing suite if new functionality was introduced in this PR.

Performance

  • [x] I have measured performance for affected algorithms using scikit-learn_bench and provided at least a summary table with the measured data, if a performance change is expected.
  • [x] I have provided justification why performance has changed or why changes are not expected.
  • [x] I have provided justification why quality metrics have changed or why changes are not expected.
  • [x] I have extended the benchmarking suite and provided a corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.

icfaust, May 26 '25 22:05

Codecov Report

Attention: Patch coverage is 41.17647% with 10 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| onedal/_device_offload.py | 33.33% | 6 Missing and 2 partials :warning: |
| onedal/datatypes/table.cpp | 0.00% | 0 Missing and 2 partials :warning: |

| Flag | Coverage Δ |
|---|---|
| azure | 79.84% <46.66%> (-0.09%) :arrow_down: |
| github | 73.60% <41.17%> (+1.98%) :arrow_up: |

Flags with carried forward coverage won't be shown.

| Files with missing lines | Coverage Δ |
|---|---|
| sklearnex/utils/validation.py | 69.33% <100.00%> (+0.84%) :arrow_up: |
| onedal/datatypes/table.cpp | 51.92% <0.00%> (-1.02%) :arrow_down: |
| onedal/_device_offload.py | 75.60% <33.33%> (-5.43%) :arrow_down: |

... and 18 files with indirect coverage changes


codecov[bot], May 26 '25 23:05

@icfaust Could the jobs be renamed to have 'torch' in the names instead of generic 'gpu'?

david-cortes-intel, Jun 09 '25 07:06

> @icfaust Could the jobs be renamed to have 'torch' in the names instead of generic 'gpu'?

I'll make sure to add the frameworks to the title. I'd also still like to keep the CPU-vs-GPU distinction because of how it impacts sklearn conformance testing; it's the only piece currently running on GPU in this PR. Let me know what you think.

icfaust, Jun 09 '25 13:06

> > @icfaust Could the jobs be renamed to have 'torch' in the names instead of generic 'gpu'?
>
> I'll make sure to add the frameworks to the title. I'd also still like to keep the CPU-vs-GPU distinction because of how it impacts sklearn conformance testing; it's the only piece currently running on GPU in this PR. Let me know what you think.

torch x [cpu, gpu] sounds good.

david-cortes-intel, Jun 09 '25 13:06

@icfaust Is https://github.com/uxlfoundation/scikit-learn-intelex/pull/2465 meant to be merged before this?

david-cortes-intel, Jun 10 '25 08:06

> @icfaust Is #2465 meant to be merged before this?

Another good question. It isn't a requirement, and the two are independent of one another. They are related in that both test sklearnex on GPU against sklearn 1.4 in CI for the first time.

icfaust, Jun 10 '25 12:06

> Something broke with the new coverage stats and I need to investigate it.

@icfaust It's a general world-wide GCP outage.

david-cortes-intel, Jun 13 '25 13:06

/intelci: run

icfaust, Jun 17 '25 20:06

/intelci: run

icfaust, Jun 18 '25 05:06

Had to switch to just using math.prod(X.shape), like sklearn does, due to private CI infrastructure issues. This removes the array_api_compat dependency.
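For context, the quirk being worked around here, as a minimal sketch (numpy's .size is an element count, while torch's .size is a method):

```python
import math
import numpy as np
import torch

a = np.ones((3, 4))
t = torch.ones((3, 4))

print(a.size)              # 12 -- numpy: total element count (an int)
print(t.size)              # <bound method ...> -- torch: a method, not an int
print(t.size())            # torch.Size([3, 4]) -- the shape, not a count
print(math.prod(a.shape))  # 12 -- framework-agnostic, what sklearn does
print(math.prod(t.shape))  # 12
```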

icfaust, Jun 18 '25 07:06

/intelci: run

icfaust, Jun 18 '25 07:06