
Possibly memory issues with SVC?

Open Stack-it-up opened this issue 2 years ago • 18 comments

Description I'm trying to use Intelex to accelerate training of an SVC. My dataset is pretty tame (18 MB; in fact, I am attaching it, since it is a publicly available dataset - Universal Dependencies ISDT). I wasn't expecting my 16 GB of RAM (and 16 GB of swap) to be filled by this task, so I wonder if this could be a bug. However, I am a student, so it may be an error on my part (if so, I'm sorry).

To Reproduce Steps to reproduce the behavior:

  1. Download attached files in the same folder
  2. Change extension of train_parser from txt to py
  3. Install NLTK
  4. Run the python script
  5. See error

Expected behavior A new file should be created with the training output. Instead, an Out Of Memory error is raised.

Note on NLTK implementation The code for the function train is pretty straightforward, see source code here: https://www.nltk.org/_modules/nltk/parse/transitionparser.html#TransitionParser.train

Environment:

  • OS: Ubuntu 20.04
  • Intelex 2021.5
  • Python 3.9.11
  • scikit-learn 1.0.2
  • NLTK 3.7
  • conda 4.13.0
  • CPU: i5-10500

Attachments train_parser.txt it_isdt-ud-train.txt

EDIT: the svmlight file generated by NLTK is actually 62 MB and the memory used during sequential training (plain sklearn) is around 1GB

Stack-it-up avatar Jun 01 '22 21:06 Stack-it-up

How many threads are you using? SVM uses all available threads, so running with N threads consumes roughly N times more RAM: https://intel.github.io/scikit-learn-intelex/memory-requirements.html
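A back-of-envelope check of that rule of thumb, using the ~1 GB single-thread footprint reported in the edit above (an estimate, not a measured constant):

```python
# Rough estimate: per-thread footprint times thread count.
# per_thread_gb is taken from the ~1 GB sequential (plain sklearn)
# memory use reported above - an assumption, measure your own run.
import os

per_thread_gb = 1.0
n_threads = os.cpu_count() or 1  # e.g. 12 on an i5-10500
print(f"~{n_threads * per_thread_gb:.0f} GB expected peak with {n_threads} threads")
```

On a 12-thread CPU this lands around 12 GB, which is consistent with 16 GB of RAM plus swap being exhausted.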

FischyM avatar Jun 02 '22 19:06 FischyM

Thank you for your reply. My processor has 12 virtual cores, so I shouldn't be able to run more than 12 threads at once, is that right? I'm not sure whether there is a way to set the maximum number of threads from Intelex.

Stack-it-up avatar Jun 02 '22 19:06 Stack-it-up

I'm running into the same problem right now actually. I'm not sure which environmental variable controls the number of threads, so I came to this GitHub to find out! I'll let you know if I find what we are looking for.

I do have these 5 to test with to see if they control the number of threads that spawn from a single process, but I won't be able to test them until later tonight maybe.

export OMP_NUM_THREADS=1
export BLAS_NUM_THREADS=1
export MKL_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1

EDIT: It doesn't appear that any of those change the thread usage of sklearnex. I also tried variations such as SKLEARNEX_THREADS=1 and SKLEARNEX_NUM_THREADS=1, but they did not change the thread behavior either. Hopefully someone more knowledgeable will be able to answer this.

FischyM avatar Jun 02 '22 20:06 FischyM

I also face the same problem and am waiting for some help. It always happens when I use SVC.

joblib.externals.loky.process_executor.TerminatedWorkerError: A worker process managed by the executor was unexpectedly terminated. This could be caused by a segmentation fault while calling the function or by an excessive memory usage causing the Operating System to kill the worker.

EDIT: I solved the problem by removing patch_sklearn() and importing SVC manually:

from daal4py import daalinit
daalinit(1)

from daal4py.sklearn.svm import SVC
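A minimal end-to-end sketch of this workaround (assuming daal4py is installed; a synthetic dataset stands in for the real NLTK-generated one):

```python
# Workaround sketch: do NOT call patch_sklearn(); instead pin oneDAL
# to a single thread and use the daal4py SVC wrapper directly.
from daal4py import daalinit
daalinit(1)  # limit oneDAL to one thread

from daal4py.sklearn.svm import SVC
from sklearn.datasets import make_classification

# Synthetic stand-in for the real training data
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y)
print(clf.score(X, y))
```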

plenoi avatar Jun 18 '22 00:06 plenoi

Any update here? I would like to be able to set the number of threads, as some jobs misbehave on shared resources

lilybellesweet avatar Aug 09 '22 13:08 lilybellesweet

The number of threads per SVM training/inference run can be effectively limited with daalinit:

import daal4py as d4p
d4p.daalinit(1)

I checked that it works for SVM with Python's multiprocessing.

Alexsandruss avatar Aug 11 '22 13:08 Alexsandruss

However, limiting the number of threads will not solve the memory issues of SVM completely, because it has a memory leak, which is under investigation.

Alexsandruss avatar Aug 11 '22 14:08 Alexsandruss

I tried using daalinit (for RandomForestRegressor) and it did not work; the number of threads created was not affected.

lilybellesweet avatar Aug 12 '22 11:08 lilybellesweet

I ran RandomForestRegressor and it used the number of threads set by daalinit. Did you check in verbose mode that RandomForestRegressor was patched? What OS, Python, scikit-learn, and scikit-learn-intelex versions are you using?

Alexsandruss avatar Aug 13 '22 15:08 Alexsandruss

It's running the sklearnex version, I checked.

OS: CentOS 7.9
Python: 3.8.12
scikit-learn: 1.1.1
scikit-learn-intelex: 2021.6.0

I set d4py.daalinit(2), then call patch_sklearn(), but always get a number of threads per process equal to the number of CPUs available.

lilybellesweet avatar Aug 16 '22 10:08 lilybellesweet

I used the same configuration and the following script while trying to reproduce:

import logging
logging.getLogger().setLevel(logging.INFO)

from sklearnex import patch_sklearn
patch_sklearn()

from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression
import daal4py as d4p

from multiprocessing import Pool
from sys import argv


def train_rfr(data):
    x, y = data
    rfr = RandomForestRegressor()
    rfr.fit(x, y)
    print('Score:', rfr.score(x, y))


if __name__ == '__main__':
    n_threads = int(argv[1])
    n_forests = int(argv[2])

    dataset = [make_regression(n_samples=20000, n_features=128) for i in range(n_forests)]

    d4p.daalinit(n_threads)
    with Pool(n_forests) as p:
        p.map(train_rfr, dataset)

n_threads x n_forests total threads were used every time, across all the parameter values I tried.

Alexsandruss avatar Aug 16 '22 21:08 Alexsandruss

Thank you for this effort! I am not sure why it behaves like this for me, but despite using very similar code, each process still spawns as many threads as the total number of cores available, no matter how I set daalinit(). I am working on a SLURM system - could this be causing the issue?

lilybellesweet avatar Aug 22 '22 09:08 lilybellesweet

It doesn't appear to be a SLURM issue for me: even on the same system, with and without SLURM, I get an odd issue where SVC returns np.nan for different testing scores in sklearn's GridSearchCV. I wonder if it is CPU-specific, because I don't see it on an Intel CPU (Xeon E5-2630 v3) but I do on an AMD one (Milan 7763). It appears that @Stack-it-up is using an Intel CPU, but it's in the Core series. What CPU are you using @lilybellesweet for your SLURM system?
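As a debugging aside for the np.nan scores: GridSearchCV records nan when a fit raises inside a worker, and passing error_score="raise" re-raises the underlying exception instead, which may reveal the real failure. A sketch using plain sklearn and synthetic data (whether the nans here actually come from worker-side exceptions is an assumption):

```python
# error_score="raise" surfaces the exception behind nan scores
# instead of silently recording them in cv_results_.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=120, n_features=8, random_state=0)
search = GridSearchCV(SVC(), {"C": [0.1, 1.0]}, cv=3, error_score="raise")
search.fit(X, y)  # any worker-side exception is re-raised here
print(search.best_params_)
```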

FischyM avatar Nov 09 '22 04:11 FischyM

However, limiting the number of threads will not solve the memory issues of SVM completely, because it has a memory leak, which is under investigation.

@Alexsandruss Is there any update on the memory leak for SVM? I found one post of yours here where you say the issue is on Python side. Does that mean it cannot be fixed?

lange-martin avatar Nov 22 '22 08:11 lange-martin

However, limiting the number of threads will not solve the memory issues of SVM completely, because it has a memory leak, which is under investigation.

@Alexsandruss Is there any update on the memory leak for SVM? I found one post of yours here where you say the issue is on Python side. Does that mean it cannot be fixed?

A fix for the memory leak has not been found yet; as a temporary alternative, you can try the SVC from the daal4py.sklearn.svm namespace. It is a wrapper for the legacy DAAL interface, and the memory leak is not expected there; however, its API may be outdated compared to the latest scikit-learn versions of SVM.

Alexsandruss avatar Nov 23 '22 00:11 Alexsandruss

Any update on this?

Stack-it-up avatar May 01 '23 14:05 Stack-it-up

Any update on this?

Currently - no update.

Alexsandruss avatar May 02 '23 07:05 Alexsandruss