detrex
DINO inference on a CPU only machine fails
Hi,
I always end up with this common error: "Cannot import 'detrex._C', therefore 'MultiScaleDeformableAttention' is not available."
An inference script with train.device and model.device set to "cpu" works, but it still requires a CUDA-enabled machine.
Is there a way to bypass this? Is there a way to deploy/run DINO on a CPU only machine/docker?
Many thanks!
Same problem when running on MacBook that has no GPU.
ImportError: Cannot import 'detrex._C', therefore 'MultiScaleDeformableAttention' is not available. detrex is not compiled successfully, please build following the instructions!
Please check whether you set up the CUDA runtime during installation; the MultiScaleDeformableAttention operator needs to be built under the CUDA runtime. You can check this issue for the solution: https://github.com/IDEA-Research/detrex/issues/109
The CPU inference problem will be checked later~
Thanks for your reply, and yes, I fully understand the CUDA runtime issue. I have trained a DINO model on a custom dataset and am now trying to run it on a CPU-only machine. I am stuck on this error: since there is no GPU available, I cannot install the CUDA Toolkit. Is there a way around this? I guess this is not possible currently?
OK, I fully understand your problem now; we will look into it later.
Thank you for releasing such a nice project.
I have the same problem listed here, and I understand that the only way around it at this point is to have the CUDA runtime installed while building the project. This, however, makes the project really hard to deploy where only CPU instances are available.
To get around this, I tried building a Docker container with the CUDA runtime available on a GPU-enabled machine and then deploying it. However, this leads to a massive Docker image due to the CUDA binaries and is hard to deploy in our setting.
I see that you have listed a Python package and Docker as high priorities, but I would like to suggest making this bug a higher priority, so that the Python package/Docker image can also be used seamlessly in CPU-only environments rather than being needlessly dependent on the CUDA binaries.
Thanks!
I am also running into the issue pointed out by @var316 and agree with @nolancardozo13 that it would really help to have this bug fixed before the Python package and Docker releases. Thanks!
Just delete (comment out) the following lines, and set train.device = "cpu" and model.device = train.device. It works for me.
# try:
#     from detrex import _C
# except ImportError:
#     # TODO: register ops natively so there is no need to import _C.
#     _msg = "detrex is not compiled successfully, please build following the instructions!"
#     _args = ("detrex._C", _msg)
#     MultiScaleDeformableAttention = create_dummy_class(  # noqa
#         "MultiScaleDeformableAttention", *_args
#     )
Hi @powermano ,
Thanks! I just verified that this indeed works. Do you believe that this has any implications on the model output during inference?
I trained DINO with my own dataset and the results are correct. But I did not compare the CPU and GPU results. I will test it.
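For reference, deformable attention has a well-known pure-PyTorch fallback path based on grid_sample (as in the original Deformable-DETR code), which is what makes CPU inference possible once the compiled kernel is skipped. The sketch below is illustrative and not copied verbatim from detrex; shapes follow the Deformable-DETR convention.

```python
import torch
import torch.nn.functional as F

def multi_scale_deformable_attn_pytorch(value, spatial_shapes,
                                        sampling_locations, attention_weights):
    # value:              (bs, sum(H*W), n_heads, head_dim)
    # sampling_locations: (bs, n_queries, n_heads, n_levels, n_points, 2) in [0, 1]
    # attention_weights:  (bs, n_queries, n_heads, n_levels, n_points)
    bs, _, n_heads, head_dim = value.shape
    _, n_queries, _, n_levels, n_points, _ = sampling_locations.shape
    value_list = value.split([h * w for h, w in spatial_shapes], dim=1)
    grids = 2 * sampling_locations - 1  # grid_sample expects coords in [-1, 1]
    sampled = []
    for lvl, (h, w) in enumerate(spatial_shapes):
        # (bs, h*w, n_heads, head_dim) -> (bs*n_heads, head_dim, h, w)
        v = value_list[lvl].flatten(2).transpose(1, 2).reshape(
            bs * n_heads, head_dim, h, w)
        # (bs, n_queries, n_heads, n_points, 2) -> (bs*n_heads, n_queries, n_points, 2)
        g = grids[:, :, :, lvl].transpose(1, 2).flatten(0, 1)
        # bilinear sampling of each head's feature map at the predicted points
        sampled.append(F.grid_sample(v, g, mode="bilinear",
                                     padding_mode="zeros", align_corners=False))
    # (bs*n_heads, 1, n_queries, n_levels*n_points)
    attn = attention_weights.transpose(1, 2).reshape(
        bs * n_heads, 1, n_queries, n_levels * n_points)
    # weighted sum over all levels and points
    out = (torch.stack(sampled, dim=-2).flatten(-2) * attn).sum(-1)
    return out.view(bs, n_heads * head_dim, n_queries).transpose(1, 2).contiguous()
```

On a GPU machine, comparing this fallback against the compiled detrex._C kernel (up to floating-point tolerance) is one way to run the CPU-vs-GPU consistency check mentioned above.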
Hey @powermano, do you mind sharing where you commented it out? Which file and lines?
Comment out that block in this file: https://github.com/IDEA-Research/detrex/blob/main/detrex/layers/multi_scale_deform_attn.py
Then, taking DN-DETR as an example, set train.device = "cpu" in https://github.com/IDEA-Research/detrex/blob/main/projects/dn_detr/configs/dn_detr_r50_50ep.py
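The config change in the second step can be sketched as follows. In detrex the configs are detectron2 LazyConfig Python files where train and model are config nodes; here they are stubbed with SimpleNamespace purely to illustrate the override.

```python
from types import SimpleNamespace

# Stand-ins for the LazyConfig nodes in dn_detr_r50_50ep.py (illustrative only)
train = SimpleNamespace(device="cuda")        # default in the released config
model = SimpleNamespace(device=train.device)

# CPU-only override:
train.device = "cpu"
model.device = train.device
```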
I've also tested all the eva-02-vitdet-dino models with this fix and they don't seem to be affected. @rentainhe, should this be merged to main? We can keep the detrex._C import for the GPU users.
try:
    from detrex import _C
except ImportError:
    pass
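The guard above can also be written as a small helper that makes the CPU fallback explicit; a generic sketch, where the helper name load_optional_extension is ours, not part of detrex:

```python
import importlib

def load_optional_extension(name):
    """Return the compiled extension module, or None when it is unavailable
    (e.g. a CPU-only build where detrex._C was never compiled)."""
    try:
        return importlib.import_module(name)
    except ImportError:
        return None

# Probe for the compiled CUDA ops once at import time
_C = load_optional_extension("detrex._C")
USE_CUDA_KERNEL = _C is not None  # choose the compiled or the PyTorch code path
```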