
[BUG] ABCNet numpy.linalg.LinAlgError: SVD did not converge

Open pd162 opened this issue 2 years ago • 0 comments

Prerequisite

Task

I have modified the scripts/configs, or I'm working on my own tasks/models/datasets.

Branch

main branch https://github.com/open-mmlab/mmocr

Environment

sys.platform: linux
Python: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0]
CUDA available: True
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce GTX 1080 Ti
GPU 1,2: NVIDIA GeForce RTX 2080 Ti
GPU 3,4,5: NVIDIA GeForce RTX 3090
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.6, V11.6.55
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.12.0+cu113
PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.3.2  (built against CUDA 11.5)
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.3.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,

TorchVision: 0.13.0+cu113
OpenCV: 4.8.0
MMEngine: 0.8.4
MMOCR: 1.0.1+9551af6

Reproduces the problem - command or script

python tools/train.py configs/texte2e/abcnet/abcnet_resnet50_fpn_500e_pretrain.py --amp

Reproduces the problem - error message

Traceback (most recent call last):
  File "/data1/ljh/code/open-mmlab-new/mmocr/tools/train.py", line 114, in <module>
    main()
  File "/data1/ljh/code/open-mmlab-new/mmocr/tools/train.py", line 110, in main
    runner.train()
  File "/data1/ljh/anaconda3/envs/mmocr-new/lib/python3.10/site-packages/mmengine/runner/runner.py", line 1745, in train
    model = self.train_loop.run()  # type: ignore
  File "/data1/ljh/anaconda3/envs/mmocr-new/lib/python3.10/site-packages/mmengine/runner/loops.py", line 96, in run
    self.run_epoch()
  File "/data1/ljh/anaconda3/envs/mmocr-new/lib/python3.10/site-packages/mmengine/runner/loops.py", line 112, in run_epoch
    self.run_iter(idx, data_batch)
  File "/data1/ljh/anaconda3/envs/mmocr-new/lib/python3.10/site-packages/mmengine/runner/loops.py", line 128, in run_iter
    outputs = self.runner.model.train_step(
  File "/data1/ljh/anaconda3/envs/mmocr-new/lib/python3.10/site-packages/mmengine/model/base_model/base_model.py", line 114, in train_step
    losses = self._run_forward(data, mode='loss')  # type: ignore
  File "/data1/ljh/anaconda3/envs/mmocr-new/lib/python3.10/site-packages/mmengine/model/base_model/base_model.py", line 340, in _run_forward
    results = self(**data, mode=mode)
  File "/data1/ljh/anaconda3/envs/mmocr-new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data1/ljh/code/open-mmlab-new/mmocr/mmocr/models/textdet/detectors/base.py", line 72, in forward
    return self.loss(inputs, data_samples)
  File "/data1/ljh/code/open-mmlab-new/mmocr/mmocr/models/texte2e/spotters/two_stage_text_spotting.py", line 75, in loss
    det_loss, data_samples = self.det_head.loss_and_predict(
  File "/data1/ljh/code/open-mmlab-new/mmocr/mmocr/models/textdet/heads/base.py", line 109, in loss_and_predict
    losses = self.module_loss(outs, data_samples)
  File "/data1/ljh/anaconda3/envs/mmocr-new/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/data1/ljh/code/open-mmlab-new/mmocr/mmocr/models/texte2e/losses/abcnet_det_module_loss.py", line 87, in forward
    labels, bbox_targets, bezier_targets = self.get_targets(
  File "/data1/ljh/code/open-mmlab-new/mmocr/mmocr/models/texte2e/losses/abcnet_det_module_loss.py", line 202, in get_targets
    labels_list, bbox_targets_list, bezier_targets_list = multi_apply(
  File "/data1/ljh/anaconda3/envs/mmocr-new/lib/python3.10/site-packages/mmdet/models/utils/misc.py", line 219, in multi_apply
    return tuple(map(list, zip(*map_results)))
  File "/data1/ljh/code/open-mmlab-new/mmocr/mmocr/models/texte2e/losses/abcnet_det_module_loss.py", line 251, in _get_targets_single
    beziers = gt_bboxes.new([poly2bezier(poly) for poly in polygons])
  File "/data1/ljh/code/open-mmlab-new/mmocr/mmocr/models/texte2e/losses/abcnet_det_module_loss.py", line 251, in <listcomp>
    beziers = gt_bboxes.new([poly2bezier(poly) for poly in polygons])
  File "/data1/ljh/code/open-mmlab-new/mmocr/mmocr/models/texte2e/utils/bezier_utils.py", line 49, in poly2bezier
    up_bezier = curve2bezier(up_curve)
  File "/data1/ljh/code/open-mmlab-new/mmocr/mmocr/models/texte2e/utils/bezier_utils.py", line 33, in curve2bezier
    pseudo_inv = np.linalg.pinv(bezier_coefficients(3, 4, cum_norm_dis))
  File "/data1/ljh/anaconda3/envs/mmocr-new/lib/python3.10/site-packages/numpy/linalg/linalg.py", line 2022, in pinv
    u, s, vt = svd(a, full_matrices=False, hermitian=hermitian)
  File "/data1/ljh/anaconda3/envs/mmocr-new/lib/python3.10/site-packages/numpy/linalg/linalg.py", line 1681, in svd
    u, s, vh = gufunc(a, signature=signature, extobj=extobj)
  File "/data1/ljh/anaconda3/envs/mmocr-new/lib/python3.10/site-packages/numpy/linalg/linalg.py", line 121, in _raise_linalgerror_svd_nonconvergence
    raise LinAlgError("SVD did not converge")
numpy.linalg.LinAlgError: SVD did not converge

Additional information

When I reduced the training image size (via RandomChoiceResize) in the config file, this error occurred. I would like to know why the error is related to image size and how to fix it.
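A possible explanation (an assumption, not confirmed from the code): shrinking the images can collapse neighboring polygon vertices onto (nearly) identical pixels, so the cumulative arc-length normalization inside `curve2bezier` divides by zero and the coefficient matrix passed to `np.linalg.pinv` contains NaN/Inf, which LAPACK's SVD rejects with exactly this error on most builds. A minimal sketch of a guard (the `safe_pinv` helper is hypothetical, not part of MMOCR) that lets the caller skip such degenerate polygons instead of crashing:

```python
import numpy as np

def safe_pinv(mat):
    """Return the pseudo-inverse, or None if the matrix is degenerate.

    A matrix containing NaN/Inf (e.g. from a zero arc length after an
    aggressive resize) makes the underlying SVD fail, so we check first
    and let the caller drop that polygon instead of raising.
    """
    mat = np.asarray(mat, dtype=np.float64)
    if not np.all(np.isfinite(mat)):
        return None
    return np.linalg.pinv(mat)

# A well-conditioned matrix works as usual...
print(safe_pinv(np.eye(3)))
# ...while a NaN-filled one (the degenerate-polygon case) is rejected cleanly.
print(safe_pinv(np.full((4, 4), np.nan)))
```

An alternative (or complementary) fix would be to filter out ground-truth polygons whose points coincide after resizing, before they ever reach `poly2bezier`.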

pd162 · Aug 21 '23 06:08