Threading backend using `omp_set_nested` which is deprecated in OpenMP 5?
Reporting a bug
- [x] I am using the latest released version of Numba (most recent is visible in the [change log](https://github.com/numba/numba/blob/master/CHANGE_LOG)).
- [x] I have included below a minimal working reproducer (if you are unsure how to write one see http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports).

This:

```python
from numba import njit, prange
import numpy as np

@njit(parallel=True)
def foo(n):
    acc = 0
    for i in prange(n):
        acc += i
    return acc

print(foo(10))
```

run like:

```shell
NUMBA_THREADING_LAYER=omp python example.py
```

does this:

```
OMP: Info #273: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.
```
I think this is because OpenMP 5.0 deprecated `omp_set_nested`, and the Anaconda MKL 2020 packages depend on Intel OpenMP, which is 5.0 compliant.
Am I correct to assume that this is only a warning? I am seeing this quite often but it looks like there is nothing that the end user can do to resolve this. Is that right? If so, is there a recommended way to suppress this warning?
I've tried the typical approach (which ignores other warnings as well - 😢):

```python
import warnings
warnings.filterwarnings("ignore")
```

but it still persists. Any suggestions would be greatly appreciated!
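For anyone wondering why the filter has no effect: `warnings.filterwarnings` only intercepts warnings raised through Python's `warnings` machinery, whereas the OpenMP runtime prints its message from native code (presumably straight to the process's stderr). A minimal stdlib-only sketch of the distinction, with the native message stood in for by a plain `print`:

```python
import sys
import warnings

# Python-level warnings go through the `warnings` machinery, so filters work:
warnings.filterwarnings("ignore")
warnings.warn("a Python-level warning")  # suppressed, nothing is shown

# The OpenMP runtime, by contrast, emits its message from native code and
# never touches the `warnings` module, so no Python filter can see it.
# (Illustrative stand-in for the native-code message:)
print("OMP: Info #273: omp_set_nested routine deprecated, ...", file=sys.stderr)
```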
@seanlaw You're right, it is a warning only. I looked in the past at fixing this by only using the non-deprecated API on newer OpenMPs and the old one on older OpenMP versions, but I found there are some implementations that aren't compliant with the spec in this area (I can't recall the details and might have discussed it outside of GitHub issues, unfortunately), so it's very hard to detect the right thing to do whilst supporting various different implementations on different platforms.
The way to suppress it (if it is possible) will depend on your OpenMP implementation (e.g. setting the environment variable `KMP_WARNINGS=0` or somehow calling `kmp_set_warnings_off()` for Intel OpenMP). Does this help in your circumstances?
@gmarkall So, this is coming up frequently with many of our STUMPY users who are new to Python and who may not realize that it is nothing to be too concerned with and nothing for them to do. This is why I'm trying to look for a way to hide this specific warning from all users who install STUMPY but without hiding other important warnings. Does that make sense?
Ah, I found the PR I tried: https://github.com/numba/numba/pull/7511
This does make sense. I think one workaround could be not to use the OpenMP backend - you could set the backend using the options documented at https://numba.readthedocs.io/en/latest/reference/envvars.html#threading-control.
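As a sketch of that workaround (the chosen value here is just an example), a library could pin the threading layer for its users by setting the documented environment variable early, since Numba reads `NUMBA_THREADING_LAYER` when the threading layer is initialised:

```python
import os

# Must run before `import numba` anywhere in the process.
# Documented values include "tbb", "omp", and "workqueue";
# setdefault() lets a user's own setting win if already present.
os.environ.setdefault("NUMBA_THREADING_LAYER", "tbb")

print(os.environ["NUMBA_THREADING_LAYER"])
```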
Thank you for your input. In addition to the OpenMP warning, users are also getting this TBB warning, so, in order to avoid both warnings, does that mean that I'm basically left with setting `NUMBA_THREADING_LAYER=workqueue`? That seems limiting/undesirable, perhaps? Is a workqueue backend always available? This is well beyond the limits of my understanding 😄
I certainly want all users to be able to leverage the fastest backend available to them without needing to ask/force them to install additional software. This is why I'd rather catch these two specific warnings than turn off the backends for all users, but I understand/acknowledge that this stuff is HARD!
Workqueue is always available as it is part of Numba that only relies on the underlying OS's threading layer (POSIX threads or Windows threads). However, it's not threadsafe (you can't use multiple threads of your own safely with it) so it might be limiting for your use case. I don't know how well it performs compared to the other backends, but I'd be surprised if it was as good in all cases as OpenMP or TBB.
If you can figure out a way to reliably replace this call:
https://github.com/numba/numba/blob/0994f97c33a19d7684471cc98b8c36573762204b/numba/np/ufunc/omppool.cpp#L227
with a call to `omp_set_max_active_levels` when it's available, then that would solve this warning. However, it seems that there's no good way to determine the OpenMP version of the implementation at runtime, so it's not straightforward to implement. I think the thing holding us back from just unilaterally replacing the call is support for older OS X versions that include an old version of OpenMP.
@stuartarchibald Maybe we could make it a requirement to have OpenMP 3.0 or later for the OMP backend, and use `omp_set_max_active_levels`? This is a spec from May 2008, so it doesn't seem like an egregious requirement.
Ohh, does this mean that this warning is coming from the CPP layer and therefore we can't catch/filter it in the Python layer? At the end of the day, I'm not trying to stop the warning from happening. Instead, I just want to filter it so that the end user doesn't see it.
That's correct, it's the underlying OpenMP implementation, whatever that is (it could be written in Fortran or anything else really).
> @stuartarchibald Maybe we could make it a requirement to have OpenMP 3.0 or later for the OMP backend, and use `omp_set_max_active_levels`? This is a spec from May 2008, so it doesn't seem like an egregious requirement.
I think this is probably ok:
- TBB is now widely available as an alternative if nested parallelism is required.
- OpenMP 3.0 is from 2008; obviously there is lag between specification and implementation, but I'd hope that it's now widely available. The thread masking work https://github.com/numba/numba/pull/4615 introduced this change. IIRC I wrote the patch that introduced it but don't recall why I used `omp_set_nested` ahead of `omp_set_max_active_levels`; I suspect it was older OpenMP versions on e.g. OSX or 32-bit ARM, or being generally conservative about API versions.
@gmarkall I suggest we go with patching up the OMP backend to use the OpenMP 3 or later API and seeing what (if anything) breaks on the farm. The use of true nested behaviour is quite rare, so it may well be that the impact of this change is relatively small anyway. Further, the threading layers "safe load" and will fall back to other implementations/suggest how to create a fallback if there's a problem, which should make it reasonably easy for users with OpenMP 2.5 or an incomplete 3.x+ implementation to work out an alternative. I do recall one issue was that the binding between the OpenMP spec version and the conda package name for the library wasn't particularly clear (I'd really like to be able to constrain with something like `openmp_version >= 3` in a `meta.yaml`), but this is a separate problem.
> @gmarkall I suggest we go with patching up the OMP backend to use the OpenMP 3 or later API and seeing what (if anything) breaks on the farm.
OK, here's an attempt: https://github.com/numba/numba/pull/7705 :-)
> Ohh, does this mean that this warning is coming from the CPP layer and therefore we can't catch/filter it in the Python layer? At the end of the day, I'm not trying to stop the warning from happening. Instead, I just want to filter it so that the end user doesn't see it.
In case it matters, I found that I was able to suppress the warnings with:

```python
import os
os.environ['KMP_WARNINGS'] = 'off'

import numpy as np
from numba import prange, njit

@njit(parallel=True)
def go_fast(a):  # Function is compiled and runs in machine code
    trace = 0.0
    for i in prange(a.shape[0]):
        trace += np.tanh(a[i, i])
    return a + trace

x = np.arange(100).reshape(10, 10)
print(go_fast(x))
```

Note that you must set `os.environ['KMP_WARNINGS'] = 'off'` BEFORE importing numpy. Otherwise, you will still see:

```
OMP: Info #273: omp_set_nested routine deprecated, please use omp_set_max_active_levels instead.
```
@seanlaw So the warning is gone by using what you suggested, but this still crashes the Python/Jupyter kernel on an M1 Mac mini running the latest macOS 12.5.