mmsegmentation
Albumentations augmentation doesn't work for the LEVIR-CD dataset
Thanks for your error report and we appreciate it a lot.
Checklist
- I have searched related issues but cannot get the expected help.
- The bug has not been fixed in the latest version.
Describe the bug
The Albu augmentations throw an error for LEVIRCDDataset, a change detection dataset that carries two images under two keys: img and img2.
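For reference, the failing pipeline has roughly the shape sketched below. The specific Albu transforms are placeholders rather than the exact contents of configs/swin/Levir_CD.py, and the loader/concat transform names follow the LEVIR-CD support in MMSegmentation 1.x, so they may not match the actual config exactly:

```python
# Hypothetical sketch of the pipeline shape that triggers the error; the Albu
# transforms listed here are placeholders, not the exact config.
train_pipeline = [
    # loads both images of a change-detection pair into 'img' and 'img2'
    dict(type='LoadMultipleRSImageFromFile'),
    dict(type='LoadAnnotations'),
    dict(
        type='Albu',
        transforms=[
            dict(type='RandomBrightnessContrast', p=0.5),
            dict(type='HorizontalFlip', p=0.5),
        ]),
    dict(type='ConcatCDInput'),
    dict(type='PackSegInputs'),
]
```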
Reproduction
- What command or script did you run?
  python tools/train.py configs/swin/Levir_CD.py
- Did you make any modifications to the code or config? Did you understand what you have modified?
- What dataset did you use? LEVIR-CD
Environment
- Please run python mmseg/utils/collect_env.py to collect necessary environment information and paste it here.

sys.platform: linux
Python: 3.8.18 (default, Sep 11 2023, 13:20:55) [GCC 11.2.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0: Quadro K2200
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.8, V11.8.89
GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
PyTorch: 1.9.0
PyTorch compiling details: PyTorch built with:
- GCC 7.3
- C++ Version: 201402
- Intel(R) oneAPI Math Kernel Library Version 2023.1-Product Build 20230303 for Intel(R) 64 architecture applications
- Intel(R) MKL-DNN v2.1.2 (Git Hash 98be7e8afa711dc9b66c8ff3504129cb82013cdb)
- OpenMP 201511 (a.k.a. OpenMP 4.5)
- NNPACK is enabled
- CPU capability usage: AVX2
- CUDA Runtime 10.2
- NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_37,code=compute_37
- CuDNN 7.6.5
- Magma 2.5.2
- Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=10.2, CUDNN_VERSION=7.6.5, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.9.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,
OpenCV: 4.10.0
MMEngine: 0.5.0
MMSegmentation: 1.2.2+c685fe6
- You may add additional information that may be helpful for locating the problem, such as
- How you installed PyTorch [e.g., pip, conda, source]
- Other environment variables that may be related (such as $PATH, $LD_LIBRARY_PATH, $PYTHONPATH, etc.)
Error traceback
If applicable, paste the error traceback here.
Traceback (most recent call last):
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/mmengine/runner/loops.py", line 158, in __next__
data = next(self._iterator)
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 521, in __next__
data = self._next_data()
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1203, in _next_data
return self._process_data(data)
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1229, in _process_data
data.reraise()
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/torch/_utils.py", line 425, in reraise
raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 408, in __getitem__
data = self.prepare_data(idx)
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 789, in prepare_data
return self.pipeline(data_info)
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/mmengine/dataset/base_dataset.py", line 58, in __call__
data = t(data)
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/mmcv/transforms/base.py", line 12, in __call__
return self.transform(results)
File "/home/mutr_gu/Documents/mmsegmentation/mmseg/datasets/transforms/transforms.py", line 2422, in transform
results = self.aug(**results)
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/albumentations/core/composition.py", line 299, in __call__
self.preprocess(data)
File "/home/mutr_gu/anaconda3/envs/mmselfsup_23feb/lib/python3.8/site-packages/albumentations/core/composition.py", line 326, in preprocess
raise ValueError(msg)
ValueError: Key img_path is not in available keys.
python-BaseException
Process finished with exit code 1
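For what it's worth, the ValueError can be reproduced outside of MMSegmentation with a few lines, since albumentations' Compose only accepts the keyword arguments it was configured for ('image', 'mask', declared additional targets, ...). This is just a sketch assuming the albumentations behaviour shown in the traceback:

```python
# Minimal sketch of the underlying failure (assuming the albumentations
# behaviour shown in the traceback): Compose rejects any keyword it was not
# configured for, and mmseg's Albu transform forwards the whole results dict,
# which contains keys such as 'img_path' and 'img2'.
import albumentations as A
import numpy as np

aug = A.Compose([A.HorizontalFlip(p=1.0)])

results = {
    'image': np.zeros((256, 256, 3), dtype=np.uint8),
    'img_path': 'train/A/0001.png',  # extra key, as in the mmseg results dict
}
aug(**results)  # ValueError: Key img_path is not in available keys.
```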
Bug fix
If you have already identified the reason, you can provide the information here. If you are willing to create a PR to fix it, please also leave a comment here and that would be much appreciated!
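Judging from the traceback, the Albu transform in mmseg/datasets/transforms/transforms.py forwards the entire results dict to the albumentations Compose (results = self.aug(**results)), so keys the composition does not know about, such as img_path and img2, make its preprocessing step raise. I have not prepared a PR, but one possible (untested) workaround is a small custom transform that declares the second image as an additional target and forwards only the keys albumentations understands. The transform name AlbuCD and the hard-coded augmentations below are placeholders, not an existing MMSegmentation API:

```python
# Untested workaround sketch: wrap an albumentations Compose directly, declare
# the second change-detection image as an additional target so both images get
# identical random parameters, and forward only the keys that albumentations
# understands. 'AlbuCD' and the hard-coded augmentations are placeholders.
import albumentations as A
from mmcv.transforms import BaseTransform
from mmseg.registry import TRANSFORMS


@TRANSFORMS.register_module()
class AlbuCD(BaseTransform):
    """Apply one albumentations pipeline to both 'img' and 'img2'."""

    def __init__(self):
        self.aug = A.Compose(
            [
                A.RandomBrightnessContrast(p=0.5),
                A.HorizontalFlip(p=0.5),
            ],
            additional_targets={'image2': 'image'})

    def transform(self, results: dict) -> dict:
        data = {'image': results['img'], 'image2': results['img2']}
        if 'gt_seg_map' in results:
            data['mask'] = results['gt_seg_map']
        out = self.aug(**data)  # only known keys are passed, so no ValueError
        results['img'] = out['image']
        results['img2'] = out['image2']
        if 'mask' in out:
            results['gt_seg_map'] = out['mask']
        return results
```

In the pipeline this would replace the dict(type='Albu', ...) entry with dict(type='AlbuCD').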