
[enhancement] WIP new finite checking in SVM algorithms

Open icfaust opened this issue 1 year ago • 2 comments

Description

Add a comprehensive description of proposed changes

List associated issue number(s), if any: #6 (for example)

Documentation PR (if needed): #1340 (for example)

Benchmarks PR (if needed): https://github.com/IntelPython/scikit-learn_bench/pull/155 (for example)


PR should start as a draft, then move to ready for review state after CI is passed and all applicable checkboxes are closed. This approach ensures that reviewers don't spend extra time asking for regular requirements.

You can remove a checkbox as not applicable only if it doesn't relate to this PR in any way. For example, a docs-only PR doesn't require the performance checkboxes, while a PR with any change to actual code should keep them and justify how the change is expected to affect performance (unless the justification is self-evident).

Checklist to comply with before moving PR from draft:

PR completeness and readability

  • [ ] I have reviewed my changes thoroughly before submitting this pull request.
  • [ ] I have commented my code, particularly in hard-to-understand areas.
  • [ ] I have updated the documentation to reflect the changes or created a separate PR with update and provided its number in the description, if necessary.
  • [ ] Git commit message contains an appropriate signed-off-by string (see CONTRIBUTING.md for details).
  • [ ] I have added respective label(s) to the PR if I have permission for that.
  • [ ] I have resolved any merge conflicts that might occur with the base branch.

Testing

  • [ ] I have run it locally and tested the changes extensively.
  • [ ] All CI jobs are green or I have provided justification why they aren't.
  • [ ] I have extended the testing suite if new functionality was introduced in this PR.

Performance

  • [ ] I have measured performance for affected algorithms using scikit-learn_bench and provided at least a summary table with the measured data, if a performance change is expected.
  • [ ] I have provided justification for why performance has changed or why changes are not expected.
  • [ ] I have provided justification for why quality metrics have changed or why changes are not expected.
  • [ ] I have extended the benchmarking suite and provided a corresponding scikit-learn_bench PR if new measurable functionality was introduced in this PR.

icfaust avatar Dec 04 '24 09:12 icfaust

/intelci: run

icfaust avatar Dec 04 '24 23:12 icfaust

/intelci: run

icfaust avatar Dec 05 '24 12:12 icfaust

@icfaust Please remember to add the classes that this PR covers to this list in the docs: https://github.com/uxlfoundation/scikit-learn-intelex/blob/91885a302e7516f762424f8af2cac27a50c25849/doc/sources/array_api.rst?plain=1#L84

david-cortes-intel avatar Oct 17 '25 14:10 david-cortes-intel

Codecov Report

:x: Patch coverage is 85.53114% with 79 lines in your changes missing coverage. Please review.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| sklearnex/svm/_base.py | 88.41% | 26 Missing and 12 partials :warning: |
| sklearnex/svm/_classes.py | 84.34% | 15 Missing and 3 partials :warning: |
| onedal/svm/tests/test_csr_svm.py | 46.15% | 11 Missing and 3 partials :warning: |
| onedal/svm/svm.py | 88.23% | 7 Missing and 1 partial :warning: |
| sklearnex/svm/__init__.py | 66.66% | 0 Missing and 1 partial :warning: |
| Flag | Coverage Δ |
|---|---|
| azure | 80.55% <82.96%> (-0.65%) :arrow_down: |
| github | ? |

Flags with carried forward coverage won't be shown. Click here to find out more.

| Files with missing lines | Coverage Δ |
|---|---|
| onedal/svm/__init__.py | 100.00% <100.00%> (ø) |
| sklearnex/_utils.py | 84.81% <100.00%> (-0.74%) :arrow_down: |
| sklearnex/utils/class_weight.py | 79.48% <100.00%> (+5.12%) :arrow_up: |
| sklearnex/svm/__init__.py | 50.00% <66.66%> (-16.67%) :arrow_down: |
| onedal/svm/svm.py | 88.39% <88.23%> (-2.36%) :arrow_down: |
| onedal/svm/tests/test_csr_svm.py | 52.72% <46.15%> (ø) |
| sklearnex/svm/_classes.py | 84.34% <84.34%> (ø) |
| sklearnex/svm/_base.py | 88.41% <88.41%> (ø) |

... and 6 files with indirect coverage changes


codecov[bot] avatar Oct 20 '25 12:10 codecov[bot]

/intelci: run

icfaust avatar Oct 20 '25 13:10 icfaust

/intelci: run

icfaust avatar Oct 20 '25 20:10 icfaust

/intelci: run

icfaust avatar Oct 20 '25 22:10 icfaust

/intelci: run

icfaust avatar Oct 21 '25 09:10 icfaust

/intelci: run

icfaust avatar Oct 21 '25 14:10 icfaust

Looks like great work here, and a comprehensive description of changes! It is quite a large PR that might benefit from a dedicated review session.

Also, it looks like we are hitting a limit on the docbuild job related to medium.com.

ethanglaser avatar Oct 21 '25 15:10 ethanglaser

@icfaust I'm not sure if this is specific to SVMs or whether this PR was meant to fix it, but allow_fallback_to_host still doesn't work with the dispatching logic changes in this PR:

```python
import os
os.environ["SKLEARNEX_VERBOSE"] = "INFO"
import numpy as np
from sklearnex import config_context
from sklearnex.svm import SVC

rng = np.random.default_rng(seed=123)
X = rng.standard_normal(size=(100, 10)).astype(np.float32)
y = rng.integers(3, size=100).astype(np.float32)
model = SVC(random_state=123)
with config_context(target_offload="gpu", allow_fallback_to_host=True):
    model.fit(X, y)
```

```
RuntimeError: SVM with multiclass support is not implemented for GPU
```

Logs show:

```
INFO:sklearnex: sklearn.svm.SVC.fit: running accelerated version on CPU
```

david-cortes-intel avatar Oct 22 '25 09:10 david-cortes-intel

> @icfaust I'm not sure if this is specific to SVMs or whether this PR was meant to fix it, but allow_fallback_to_host still doesn't work with the dispatching logic changes in this PR: (repro and error quoted above)

The error is unrelated to this PR. It looks like there is an issue with the fallback queue logic in `onedal.utils._sycl_queue_manager.manage_global_queue` when it is entered in a nested fashion (which is the case when offloading for `assert_all_finite` in `onedal.utils.validation`). The fallback will be set in `sklearnex._device_offload`, but the use of `manage_global_queue(None, X)` in `assert_all_finite` resets it back to the `target_offload` value upon completion in the finally step. Someone else has to fix this and add tests. I do not have time.

icfaust avatar Oct 22 '25 12:10 icfaust

/intelci: run

ethanglaser avatar Oct 22 '25 14:10 ethanglaser

/intelci: run

icfaust avatar Oct 23 '25 08:10 icfaust

/intelci: run

icfaust avatar Oct 25 '25 01:10 icfaust

/intelci: run

icfaust avatar Nov 12 '25 20:11 icfaust

/intelci: run

yuejiaointel avatar Nov 18 '25 08:11 yuejiaointel

The CI issues:

E AssertionError: SVC failed when fitted on one label after sample_weight trimming. Error message is not explicit, it should have 'class'.

Looks like it could be solved by adding an extra check for single-class data in the patching conditions.
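A guard along those lines could look like the sketch below. This is hypothetical: the function name `onedal_supports_fit` and its shape are illustrative, and the real patching conditions live elsewhere in sklearnex and differ in detail. The idea is simply to detect single-class targets and report the case as unsupported, so dispatching falls back to stock scikit-learn and its explicit error message (mentioning 'class') is raised.

```python
# Hypothetical sketch of the suggested single-class check in the
# patching conditions; names here are illustrative, not the real API.
import numpy as np

def onedal_supports_fit(y):
    """Return (supported, reason); single-class targets are reported as
    unsupported so stock scikit-learn raises its explicit error."""
    classes = np.unique(y)
    if classes.shape[0] < 2:
        return False, "single-class data is not supported by oneDAL SVM"
    return True, ""

supported, reason = onedal_supports_fit(np.zeros(10))
# supported is False here, so dispatching would fall back to stock
# scikit-learn, whose own error message mentions the class count.
```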

For the other issue:

WARNING: The candidate selected for download or install is a yanked version

It should be solvable by merging the latest master.

david-cortes-intel avatar Nov 18 '25 16:11 david-cortes-intel

Looks like these changes will be required for sklearn 1.8, as otherwise conformance tests throw errors about the 'xp' argument in some methods.

david-cortes-intel avatar Nov 24 '25 16:11 david-cortes-intel

/intelci: run

Vika-F avatar Nov 26 '25 12:11 Vika-F

There will be some changes required for sklearn 1.8 that generate merge conflicts with this PR: https://github.com/uxlfoundation/scikit-learn-intelex/pull/2801

Perhaps they could be all incorporated here instead if it makes the merging easier.

david-cortes-intel avatar Nov 27 '25 08:11 david-cortes-intel

> There will be some changes required for sklearn 1.8 that generate merge conflicts with this PR: #2801
>
> Perhaps they could be all incorporated here instead if it makes the merging easier.

@david-cortes-intel OK, I will do that. Anyway, I will be fixing the pre-commit issues here as well.

Vika-F avatar Nov 27 '25 08:11 Vika-F

/intelci: run

Vika-F avatar Nov 27 '25 12:11 Vika-F

/intelci: run

Vika-F avatar Nov 27 '25 14:11 Vika-F

/intelci: run

david-cortes-intel avatar Nov 28 '25 08:11 david-cortes-intel

/intelci: run

david-cortes-intel avatar Nov 28 '25 09:11 david-cortes-intel

/intelci: run

Vika-F avatar Dec 02 '25 13:12 Vika-F

/intelci: run

Vika-F avatar Dec 02 '25 16:12 Vika-F

/intelci: run

Vika-F avatar Dec 03 '25 10:12 Vika-F

/intelci: run

Vika-F avatar Dec 08 '25 12:12 Vika-F