
Some tensors are not on the same device as other tensors.

thaiph99 opened this issue 2 years ago

Thanks for your error report and we appreciate it a lot.

Checklist

  1. I have searched related issues but cannot get the expected help.
  2. The bug has not been fixed in the latest version (dev-1.x).

Describe the bug: Some tensors are not on the same device as other tensors; training the ReID model fails with a device mismatch between cuda:0 and cpu.

Reproduction

  1. What command or script did you run?
CUDA_VISIBLE_DEVICES=0 python tools/train.py configs/reid/reid_r50_8xb32-6e_mot17train80_test-mot17val20.py
  2. Did you make any modifications on the code or config? Did you understand what you have modified?
  • I did not modify the code or the config.
  3. What dataset did you use and what task did you run?
  • I used the MOT17 dataset for the ReID training task.

Environment

  1. Please run python mmtrack/utils/collect_env.py to collect necessary environment information and paste it here.
sys.platform: linux
Python: 3.9.17 (main, Jul  5 2023, 20:41:20) [GCC 11.2.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce GTX 1080 Ti
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.2, V12.2.128
GCC: gcc (Ubuntu 9.5.0-1ubuntu1~22.04) 9.5.0
PyTorch: 1.11.0
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.5.2 (Git Hash a9302535553c73243c632ad3c4c80beec3d19a1e)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.11.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

TorchVision: 0.12.0
OpenCV: 4.8.0
MMEngine: 0.8.4
MMCV: 2.0.0rc4
MMCV Compiler: GCC 9.3
MMCV CUDA Compiler: 11.3
MMTracking: 1.0.0rc1+256cf73

  2. You may add additional information that may be helpful for locating the problem, such as
    • How you installed PyTorch [e.g., pip, conda, source]
    • Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)

Error traceback

08/26 10:46:02 - mmengine - INFO - Checkpoints will be saved to /home/thaipham/horus/mmtracking/work_dirs/reid_r50_8xb32-6e_mot17train80_test-mot17val20.
Traceback (most recent call last):
  File "/home/thaipham/horus/mmtracking/tools/train.py", line 119, in <module>
    main()
  File "/home/thaipham/horus/mmtracking/tools/train.py", line 115, in main
    runner.train()
  File "/home/thaipham/anaconda3/envs/mmtracking1x/lib/python3.9/site-packages/mmengine/runner/runner.py", line 1745, in train
    model = self.train_loop.run()  # type: ignore
  File "/home/thaipham/anaconda3/envs/mmtracking1x/lib/python3.9/site-packages/mmengine/runner/loops.py", line 96, in run
    self.run_epoch()
  File "/home/thaipham/anaconda3/envs/mmtracking1x/lib/python3.9/site-packages/mmengine/runner/loops.py", line 112, in run_epoch
    self.run_iter(idx, data_batch)
  File "/home/thaipham/anaconda3/envs/mmtracking1x/lib/python3.9/site-packages/mmengine/runner/loops.py", line 128, in run_iter
    outputs = self.runner.model.train_step(
  File "/home/thaipham/anaconda3/envs/mmtracking1x/lib/python3.9/site-packages/mmengine/model/base_model/base_model.py", line 114, in train_step
    losses = self._run_forward(data, mode='loss')  # type: ignore
  File "/home/thaipham/anaconda3/envs/mmtracking1x/lib/python3.9/site-packages/mmengine/model/base_model/base_model.py", line 340, in _run_forward
    results = self(**data, mode=mode)
  File "/home/thaipham/anaconda3/envs/mmtracking1x/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/thaipham/horus/mmtracking/mmtrack/models/reid/base_reid.py", line 52, in forward
    return super().forward(inputs, data_samples, mode)
  File "/home/thaipham/anaconda3/envs/mmtracking1x/lib/python3.9/site-packages/mmcls/models/classifiers/image.py", line 114, in forward
    return self.loss(inputs, data_samples)
  File "/home/thaipham/anaconda3/envs/mmtracking1x/lib/python3.9/site-packages/mmcls/models/classifiers/image.py", line 224, in loss
    return self.head.loss(feats, data_samples)
  File "/home/thaipham/horus/mmtracking/mmtrack/models/reid/linear_reid_head.py", line 127, in loss
    losses = self.loss_by_feat(feats, data_samples)
  File "/home/thaipham/horus/mmtracking/mmtrack/models/reid/linear_reid_head.py", line 147, in loss_by_feat
    losses['ce_loss'] = self.loss_cls(cls_score, gt_label)
  File "/home/thaipham/anaconda3/envs/mmtracking1x/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/thaipham/anaconda3/envs/mmtracking1x/lib/python3.9/site-packages/mmcls/models/losses/cross_entropy_loss.py", line 201, in forward
    loss_cls = self.loss_weight * self.cls_criterion(
  File "/home/thaipham/anaconda3/envs/mmtracking1x/lib/python3.9/site-packages/mmcls/models/losses/cross_entropy_loss.py", line 32, in cross_entropy
    loss = F.cross_entropy(pred, label, weight=class_weight, reduction='none')
  File "/home/thaipham/anaconda3/envs/mmtracking1x/lib/python3.9/site-packages/torch/nn/functional.py", line 2996, in cross_entropy
    return torch._C._nn.cross_entropy_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index, label_smoothing)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument target in method wrapper_nll_loss_forward)

Bug fix: I tried to fix it in #900.
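
For context, the RuntimeError means the `target` argument passed to `F.cross_entropy` (the ground-truth labels) is still on the CPU while the class scores are on cuda:0. Below is a minimal sketch of the usual workaround of moving the labels onto the predictions' device; the helper name and tensor shapes are illustrative, and this is not necessarily what #900 does.

```python
import torch
import torch.nn.functional as F


def cross_entropy_device_safe(cls_score: torch.Tensor,
                              gt_label: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: move the labels onto the predictions' device
    before computing the cross-entropy loss."""
    # Aligning devices avoids "Expected all tensors to be on the same device"
    # when cls_score lives on cuda:0 and gt_label is still a CPU tensor.
    return F.cross_entropy(cls_score, gt_label.to(cls_score.device))


if __name__ == '__main__':
    device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
    cls_score = torch.randn(32, 380, device=device)  # (batch, num_identities), illustrative sizes
    gt_label = torch.randint(0, 380, (32,))          # stays on CPU, as in the traceback
    print(cross_entropy_device_safe(cls_score, gt_label))
```
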

thaiph99 · Aug 26 '23 04:08