[BUG] Circular import error with PyTorch nightly
Describe the bug
Circular import error with the PyTorch nightly build. If I uninstall DeepSpeed, it works fine.
Traceback (most recent call last):
File "/test/oss.py", line 322, in <module>
mp.spawn(
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 283, in spawn
return start_processes(fn, args, nprocs, join, daemon, start_method="spawn")
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 239, in start_processes
while not context.join():
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 190, in join
raise ProcessRaisedException(msg, error_index, failed_process.pid)
torch.multiprocessing.spawn.ProcessRaisedException:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/multiprocessing/spawn.py", line 77, in _wrap
fn(i, *args)
File "/test/oss.py", line 160, in train
model = DDP(model, device_ids=device_ids, find_unused_parameters=False) # type: ignore
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/nn/parallel/distributed.py", line 900, in __init__
optimize_ddp = torch._dynamo.config._get_optimize_ddp_mode()
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/__init__.py", line 2561, in __getattr__
return importlib.import_module(f".{name}", __name__)
File "/opt/conda/envs/ptca/lib/python3.10/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/_dynamo/__init__.py", line 3, in <module>
from . import convert_frame, eval_frame, resume_execution
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/_dynamo/convert_frame.py", line 50, in <module>
from . import config, exc, trace_rules
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/_dynamo/trace_rules.py", line 45, in <module>
from .variables import (
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/_dynamo/variables/__init__.py", line 96, in <module>
from .torch import TorchCtxManagerClassVariable, TorchInGraphFunctionVariable
File "/opt/conda/envs/ptca/lib/python3.10/site-packages/torch/_dynamo/variables/torch.py", line 137, in <module>
torch._dynamo.external_utils.is_compiling: True,
AttributeError: partially initialized module 'torch._dynamo' has no attribute 'external_utils' (most likely due to a circular import)
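For context, that error message is the standard Python symptom of a module being accessed while its own import is still running. Below is a minimal, self-contained sketch of the pattern (nothing torch-specific, purely illustrative):

```python
# Self-contained illustration of the failure mode (not torch's actual code):
# two modules that import each other, so the first is still "partially
# initialized" when the second reaches into it.
import importlib
import sys
import tempfile
from pathlib import Path

pkg_dir = Path(tempfile.mkdtemp())
(pkg_dir / "mod_a.py").write_text("import mod_b\nVALUE = 1\n")
(pkg_dir / "mod_b.py").write_text(
    "import mod_a\n"
    "print(mod_a.VALUE)  # mod_a has not finished executing yet\n"
)
sys.path.insert(0, str(pkg_dir))

try:
    importlib.import_module("mod_a")
except AttributeError as exc:
    # AttributeError: partially initialized module 'mod_a' has no attribute
    # 'VALUE' (most likely due to a circular import)
    print(exc)
```

In the traceback above, torch._dynamo plays the role of mod_a: variables/torch.py reaches back into the package before its __init__ has finished executing.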
To Reproduce
Steps to reproduce the behavior:
- Install PyTorch nightly
- Install DeepSpeed
- Clone the fairscale repo: https://github.com/facebookresearch/fairscale
- cd benchmarks
- Run python oss.py (a minimal import-order probe is sketched right after these steps)
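As a quicker probe than the full benchmark, the snippet below (a sketch only; it may fail only inside the spawned DDP workers, and I have not verified it in isolation) imports deepspeed first and then performs the same lazy attribute access that DistributedDataParallel.__init__ triggers in the traceback above:

```python
# Minimal import-order probe (sketch, not a verified reproducer): import
# deepspeed first, then perform the attribute access DDP's __init__ does.
import deepspeed  # noqa: F401
import torch

# torch resolves torch._dynamo lazily via __getattr__ on first access; a
# poisoned import graph surfaces here as the AttributeError shown above.
print(torch._dynamo.config._get_optimize_ddp_mode())
```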
Expected behavior
The script should run without any errors.
ds_report output
DeepSpeed C++/CUDA extension op report
NOTE: Ops not installed will be just-in-time (JIT) compiled at runtime if needed. Op compatibility means that your system meet the required dependencies to JIT install the op.
JIT compiled ops requires ninja
ninja .................. [OKAY]
op name ................ installed .. compatible
async_io ............... [NO] ....... [OKAY]
fused_adam ............. [NO] ....... [OKAY]
cpu_adam ............... [NO] ....... [OKAY]
cpu_adagrad ............ [NO] ....... [OKAY]
cpu_lion ............... [NO] ....... [OKAY]
[WARNING] Please specify the CUTLASS repo directory as environment variable $CUTLASS_PATH
evoformer_attn ......... [NO] ....... [NO]
[WARNING] NVIDIA Inference is only supported on Ampere and newer architectures
[WARNING] FP Quantizer is using an untested triton version (3.0.0+dedb7bdf33), only 2.3.0 and 2.3.1 are known to be compatible with these kernels
fp_quantizer ........... [NO] ....... [NO]
fused_lamb ............. [NO] ....... [OKAY]
fused_lion ............. [NO] ....... [OKAY]
inference_core_ops ..... [NO] ....... [OKAY]
cutlass_ops ............ [NO] ....... [OKAY]
transformer_inference .. [NO] ....... [OKAY]
quantizer .............. [NO] ....... [OKAY]
ragged_device_ops ...... [NO] ....... [OKAY]
ragged_ops ............. [NO] ....... [OKAY]
random_ltd ............. [NO] ....... [OKAY]
[WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.5
[WARNING] using untested triton version (3.0.0+dedb7bdf33), only 1.0.0 is known to be compatible
sparse_attn ............ [NO] ....... [NO]
spatial_inference ...... [NO] ....... [OKAY]
transformer ............ [NO] ....... [OKAY]
stochastic_transformer . [NO] ....... [OKAY]
DeepSpeed general environment info:
torch install path ............... ['/opt/conda/envs/ptca/lib/python3.10/site-packages/torch']
torch version .................... 2.5.0.dev20240815+cu118
deepspeed install path ........... ['/opt/conda/envs/ptca/lib/python3.10/site-packages/deepspeed']
deepspeed info ................... 0.14.5, unknown, unknown
torch cuda version ............... 11.8
torch hip version ................ None
nvcc version ..................... 11.8
deepspeed wheel compiled w. ...... torch 2.5, cuda 11.8
shared memory (/dev/shm) size .... 330.54 GB
System info (please complete the following information):
- OS: Ubuntu 20.04
- GPU count and types: 1 node with 8x V100
- Interconnects (if applicable): No
- Python version: 3.10.14
Launcher context
Are you launching your experiment with the deepspeed launcher, MPI, or something else?
No, the script is launched directly with python (it spawns its own workers via torch.multiprocessing).
Hi @ajindal1 - I am trying to repro this but hitting an issue.
First, I wanted to confirm that in a venv, following these steps, you hit no issues:
- Install PyTorch nightly
- Clone fairscale repo and cd into it
- pip install -r requirements.txt
- pip install -r requirements-benchmark.txt
- cd benchmarks
- python oss.py
Doing this, I'm still hitting a ModuleNotFoundError: No module named 'benchmarks' issue that I'll investigate, but I wanted to know if there was a reason you thought to uninstall DeepSpeed?
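In case it helps while I dig in, here is a hedged workaround sketch for that ModuleNotFoundError; it assumes oss.py resolves its helpers through the top-level benchmarks package (which I have not confirmed), and the run_oss.py name is just for illustration:

```python
# run_oss.py -- hypothetical helper placed in the fairscale repo root.
# Assumes oss.py imports siblings via the top-level "benchmarks" package,
# which breaks when the script is launched from inside benchmarks/ itself.
import runpy
import sys
from pathlib import Path

repo_root = Path(__file__).resolve().parent
sys.path.insert(0, str(repo_root))  # make "benchmarks" importable as a package

# Roughly equivalent to "python -m benchmarks.oss" from the repo root.
runpy.run_module("benchmarks.oss", run_name="__main__")
```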
Also sharing my pip list here:
Package Version
------------------------ ------------------------
attrs 24.2.0
black 22.3.0
certifi 2024.7.4
cffi 1.17.0
cfgv 3.4.0
charset-normalizer 3.3.2
click 8.1.7
coverage 7.6.1
distlib 0.3.8
docutils 0.17
fairscale 0.4.13
filelock 3.13.1
flake8 4.0.1
flake8-annotations 2.7.0
fsspec 2024.6.1
identify 2.6.0
idna 3.7
iniconfig 2.0.0
isort 5.10.1
Jinja2 3.1.4
joblib 1.4.2
MarkupSafe 2.1.5
mccabe 0.6.1
mpmath 1.3.0
mypy 0.910
mypy_extensions 0.4.4
networkx 3.3
nodeenv 1.9.1
numpy 1.22.0
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
packaging 24.1
parameterized 0.9.0
pathspec 0.12.1
pgzip 0.3.1
Pillow 9.3.0
pip 24.0
platformdirs 4.2.2
pluggy 1.5.0
pre-commit 3.8.0
py 1.11.0
pycodestyle 2.8.0
pycparser 2.22
pyflakes 2.4.0
pygit2 1.11.1
pynvml 8.0.4
pytest 7.0.0
pytest-cov 3.0.0
pytest-timeout 2.1.0
pytorch-triton 3.0.0+dedb7bdf33
PyYAML 6.0.2
remote-pdb 2.1.0
requests 2.32.3
scikit-learn 1.1.3
scipy 1.11.4
sentencepiece 0.2.0
setuptools 65.5.0
six 1.16.0
sympy 1.13.1
threadpoolctl 3.5.0
timm 0.3.4
toml 0.10.2
tomli 2.0.1
torch 2.5.0.dev20240822+cu124
torchaudio 2.4.0.dev20240822+cu124
torchtext 0.6.0
torchvision 0.20.0.dev20240822+cu124
tqdm 4.66.5
typing_extensions 4.12.2
urllib3 2.2.2
virtualenv 20.26.3
@loadams thanks for looking into it. The error only occurs when both deepspeed and onnxruntime-training are installed. The reason I suspected that removing deepspeed could help is past experience, for example this; it could be an issue with PyTorch as well, as we found out in that issue.
To reproduce the error:
- Use the docker image nvcr.io/nvidia/pytorch:24.06-py3
- Uninstall the current PyTorch and install the nightly PyTorch
- Install deepspeed and onnxruntime-training
- Install fairscale
- Clone the fairscale repo: https://github.com/facebookresearch/fairscale
- cd fairscale
- python benchmarks/oss.py (a version snapshot sketch follows these steps)
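For completeness, a small snapshot sketch (assuming the packages above are installed in the container) to record the exact versions before running the benchmark:

```python
# Environment snapshot (sketch): capture the exact versions in the failing
# container before running the benchmark, so reports stay comparable.
from importlib import metadata

for pkg in ("torch", "deepspeed", "onnxruntime-training", "fairscale"):
    try:
        print(f"{pkg}=={metadata.version(pkg)}")
    except metadata.PackageNotFoundError:
        print(f"{pkg}: not installed")
```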
Just so you know, I got the same issue completely unrelated to DeepSpeed (I didn't know the package until I came here).
I think the culprit might be pytest-timeout, since my AttributeError: partially initialized module 'torch._dynamo' has no attribute 'external_utils' (most likely due to a circular import) only arose after I started using it in my tests...!
Also, I have absolutely no idea how to fix it :)
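That said, one diagnostic I might try (purely an assumption on my part, not a known fix) is to import torch._dynamo eagerly in conftest.py, before pytest-timeout or any test module touches torch:

```python
# conftest.py -- diagnostic only, not a fix: import torch._dynamo eagerly,
# before pytest plugins (e.g. pytest-timeout) or test modules touch torch.
try:
    import torch  # noqa: F401
    import torch._dynamo  # noqa: F401
except (ImportError, AttributeError) as exc:
    # Failing here, at collection time, would suggest the circular import is
    # driven by import order alone rather than by anything a plugin does.
    print(f"eager torch._dynamo import failed: {exc}")
```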
Interesting, especially that you have this issue even if you don't have deepspeed installed.
Same error in https://github.com/MaartenGr/BERTopic/issues/2255, and there's also several issues in PyTorch for this:
- https://github.com/pytorch/pytorch/issues/92758
- https://github.com/pytorch/pytorch/issues/111896
I would say this is not a DeepSpeed issue. @ajindal1, is the issue still happening for you?
@jamesbraza I think it is just a circular import error caused by the combination of packages you have installed. You can try removing packages that are not necessary for you at the moment. I tried reproducing the issue with the packages below and it worked fine for me:
torch 2.8.0.dev20250325+cu118
torchvision 0.22.0.dev20250325+cu118
deepspeed 0.15.1
onnxruntime-training 1.19.2
I can also recommend using our docker images to avoid any setup issues:
mcr.microsoft.com/aifx/acpt/stable-ubuntu2204-cu118-py310-torch260:latest
mcr.microsoft.com/aifx/acpt/stable-ubuntu2204-cu124-py310-torch260:latest
mcr.microsoft.com/aifx/acpt/stable-ubuntu2204-cu126-py310-torch260:latest
I'll follow @ajindal1's suggestion, thanks. Do you think it's okay to close this issue then? It doesn't seem to actually be a DeepSpeed issue.
Yes, closing the issue.