[Bug] RuntimeError: "index_reduce_func_cuda_exclude_input_init" not implemented for 'Long'
Prerequisite
- [X] I have searched Issues and Discussions but cannot get the expected help.
- [X] I have read the FAQ documentation but cannot get the expected help.
- [X] The bug has not been fixed in the latest version (master) or latest version (1.x).
Task
I'm using the official example scripts/configs for the officially supported tasks/models/datasets.
Branch
1.x branch https://github.com/open-mmlab/mmrotate/tree/1.x
Environment
sys.platform: linux
Python: 3.8.18 (default, Sep 11 2023, 13:40:15) [GCC 11.2.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0,1,2,3,4,5,6,7: Tesla V100-SXM2-32GB
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 10.1, V10.1.10
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.12.1
PyTorch compiling details: PyTorch built with:
- GCC 9.3
- C++ Version: 201402
- Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- LAPACK is enabled (usually provided by MKL)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 11.3
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
- CuDNN 8.3.2 (built against CUDA 11.5)
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.3.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,
TorchVision: 0.13.1
OpenCV: 4.8.0
MMEngine: 0.8.4
MMRotate: 1.0.0rc1+
Reproduces the problem - code sample
No custom code; the official h2rbox_v2 config is used as-is (see the Task section above).
Reproduces the problem - command or script
python train.py
When I train h2rbox_v2, every rank fails with RuntimeError: "index_reduce_func_cuda_exclude_input_init" not implemented for 'Long', and the launcher then reports ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 78423) of binary: /home/fuchenlin/anaconda3/envs/mmr1x/bin/python.
How can I fix it?
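The traceback below points at an `index_reduce` call on a Long (int64) tensor; the PyTorch 1.12 CUDA build has no 'Long' specialization for that kernel, which is exactly what the error message says. A minimal sketch of one possible workaround is to round-trip through float32 before the reduction and cast back (the helper name `index_reduce_long_safe` and the toy labels are my own, not from mmrotate; upgrading to a newer PyTorch may also help):

```python
import torch

def index_reduce_long_safe(values, index, num_out, reduce="amin"):
    """index_reduce over a Long tensor via a float32 round-trip.

    PyTorch 1.12's CUDA kernel for Tensor.index_reduce_ is not
    implemented for 'Long'. Reducing in float32 and casting back
    sidesteps the missing kernel, assuming the values are small
    enough to be exact in float32 (|v| < 2**24).
    """
    out = values.new_zeros(num_out, dtype=torch.float)
    # include_self=False: the zero-initialized slots do not take part
    # in the reduction; each output slot is purely the reduction of
    # the source values routed to it by `index`.
    out.index_reduce_(0, index, values.float(), reduce, include_self=False)
    return out.to(values.dtype)

# Toy example mirroring the failing pattern (values/indices made up):
labels = torch.tensor([3, 7, 3, 5])   # Long tensor being compacted
index = torch.tensor([0, 1, 0, 1])    # destination slot of each label
compacted = index_reduce_long_safe(labels, index, 2, "amin")
# slot 0 <- amin(3, 3) = 3; slot 1 <- amin(7, 5) = 5
```

Applied to `h2rbox_v2_head.py`, the same idea would mean casting the Long targets to float before the `index_reduce` call and casting the result back; the correct `reduce` mode should follow whatever the original code passes.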
Reproduces the problem - error message
09/20 15:59:13 - mmengine - INFO - Checkpoints will be saved to /data/fuchenlin/mmrotate-dev-1.x/work_dirs/h2rbox_v2-le90_r50_fpn-1x_dota.
/data/fuchenlin/mmrotate-dev-1.x/mmrotate/structures/bbox/rotated_boxes.py:192: UserWarning: The clip function does nothing in RotatedBoxes.
warnings.warn('The clip function does nothing in RotatedBoxes.')
/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/torch/functional.py:478: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at /opt/conda/conda-bld/pytorch_1659484810403/work/aten/src/ATen/native/TensorShape.cpp:2894.)
return VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
/data/fuchenlin/mmrotate-dev-1.x/mmrotate/models/dense_heads/h2rbox_v2_head.py:320: UserWarning: index_reduce() is in beta and the API may change at any time. (Triggered internally at /opt/conda/conda-bld/pytorch_1659484810403/work/aten/src/ATen/native/cuda/Indexing.cu:880.)
compacted_bid_targets = torch.empty_like(bid).index_reduce(
Traceback (most recent call last):
  File "./tools/train.py", line 125, in <module>
    main()
  File "./tools/train.py", line 121, in main
    runner.train()
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/mmengine/runner/runner.py", line 1745, in train
    model = self.train_loop.run()  # type: ignore
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/mmengine/runner/loops.py", line 96, in run
    self.run_epoch()
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/mmengine/runner/loops.py", line 112, in run_epoch
    self.run_iter(idx, data_batch)
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/mmengine/runner/loops.py", line 128, in run_iter
    outputs = self.runner.model.train_step(
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/mmengine/model/wrappers/distributed.py", line 121, in train_step
    losses = self._run_forward(data, mode='loss')
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/mmengine/model/wrappers/distributed.py", line 161, in _run_forward
    results = self(**data, mode=mode)
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 1008, in forward
    output = self._run_ddp_forward(*inputs, **kwargs)
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 969, in _run_ddp_forward
    return module_to_run(*inputs[0], **kwargs[0])
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/mmdet/models/detectors/base.py", line 92, in forward
    return self.loss(inputs, data_samples)
  File "/data/fuchenlin/mmrotate-dev-1.x/mmrotate/models/detectors/h2rbox_v2.py", line 183, in loss
    losses = self.bbox_head.loss(feat, batch_data_samples_all)
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/mmdet/models/dense_heads/base_dense_head.py", line 123, in loss
    losses = self.loss_by_feat(*loss_inputs)
  File "/data/fuchenlin/mmrotate-dev-1.x/mmrotate/models/dense_heads/h2rbox_v2_head.py", line 346, in loss_by_feat
    compacted_labels = torch.empty(
RuntimeError: "index_reduce_func_cuda_exclude_input_init" not implemented for 'Long'
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 12470) of binary: /home/fuchenlin/anaconda3/envs/mmr1x/bin/python
Traceback (most recent call last):
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in <module>
    main()
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
    launch(args)
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
    run(args)
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/torch/distributed/run.py", line 752, in run
    elastic_launch(
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
  File "/home/fuchenlin/anaconda3/envs/mmr1x/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
./tools/train.py FAILED
Failures:
  [1]:
    time       : 2023-09-20_15:59:25
    host       : dgx-77
    rank       : 1 (local_rank: 1)
    exitcode   : 1 (pid: 12471)
    error_file : <N/A>
    traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
Root Cause (first observed failure):
  [0]:
    time       : 2023-09-20_15:59:25
    host       : dgx-77
    rank       : 0 (local_rank: 0)
    exitcode   : 1 (pid: 12470)
    error_file : <N/A>
    traceback  : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
Additional information
No response