
Training DBNet on ICDAR2015, but the hmean-iou is always 0

wangxianrui opened this issue 2 years ago • 7 comments

The situations in the reimplementation issues

Reimplement a model in the model zoo using the provided configs

Describe the issue
I trained DBNet on ICDAR2015 with the provided config, but the hmean-iou is always 0. The config is dbnet_resnet18 at mmocr/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py, and the dataset is ICDAR2015, downloaded following https://mmocr.readthedocs.io/en/dev-1.x/user_guides/data_prepare/det.html

Reproduction

  1. What command or script did you run?
python tools/train.py configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py --work-dir dbnet
python tools/test.py configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py dbnet/latest.pth --eval hmean-iou
  2. What config did you run?
https://github.com/open-mmlab/mmocr/blob/main/configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py
  3. Did you make any modifications on the code or config? Did you understand what you have modified?
I only modified the path of the ResNet pretrained model so that the same parameters can be loaded offline (a sketch of the edited section follows this list):
init_cfg=dict(type='Pretrained', checkpoint="../resnet18-f37072fd.pth"),
  4. What dataset did you use?
ICDAR2015, without modification
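For context, here is a rough sketch of the backbone section after the change mentioned in item 3; only init_cfg is edited, and the surrounding fields are quoted from memory, so they may not exactly match the base config:

backbone=dict(
    type='mmdet.ResNet',
    depth=18,
    num_stages=4,
    out_indices=(0, 1, 2, 3),
    norm_cfg=dict(type='BN', requires_grad=True),
    # the only change: load the torchvision ResNet-18 weights from a local file
    # instead of the default 'torchvision://resnet18'
    init_cfg=dict(type='Pretrained', checkpoint='../resnet18-f37072fd.pth'),
),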

Environment

  1. Please run python mmocr/utils/collect_env.py to collect necessary environment information and paste it here.
/data1/mmproject/mmcv-1.7.1/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
  warnings.warn(
sys.platform: linux
Python: 3.8.5 (default, Sep  4 2020, 07:30:14) [GCC 7.3.0]
CUDA available: True
GPU 0: NVIDIA Tesla P40
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.3, V11.3.58
GCC: gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
PyTorch: 1.10.0+cu113
PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 

TorchVision: 0.11.1+cu113
OpenCV: 4.7.0
MMCV: 1.7.1
MMCV Compiler: GCC 9.3
MMCV CUDA Compiler: 11.3
MMOCR: 0.6.3+c4259cd
  2. You may add additional information that may be helpful for locating the problem, such as:
  • I installed mmcv, mmdet, and mmocr offline:
cd mmcv-1.7.1
MMCV_WITH_OPS=1 MAX_JOBS=8 pip install -e .
cd ..

cd mmdetection-2.26.0
MAX_JOBS=8 pip install -e .
cd ..

cd mmocr-0.6.3
MAX_JOBS=8 pip install -e .
cd ..
  • When I executed the training script, I got this error:
data1/mmproject/mmcv-1.7.1/mmcv/__init__.py:20: UserWarning: On January 1, 2023, MMCV will release v2.0.0, in which it will remove components related to the training process and add a data transformation module. In addition, it will rename the package names mmcv to mmcv-lite and mmcv-full to mmcv. See https://github.com/open-mmlab/mmcv/blob/master/docs/en/compatibility.md for more details.
  warnings.warn(
Traceback (most recent call last):
  File "tools/train.py", line 18, in <module>
    from mmocr.apis import init_random_seed, train_detector
  File "/data1/mmproject/mmocr-0.6.3/mmocr/apis/__init__.py", line 2, in <module>
    from .inference import init_detector, model_inference
  File "/data1/mmproject/mmocr-0.6.3/mmocr/apis/inference.py", line 14, in <module>
    from mmocr.models import build_detector
  File "/data1/mmproject/mmocr-0.6.3/mmocr/models/__init__.py", line 2, in <module>
    from . import common, kie, textdet, textrecog
  File "/data1/mmproject/mmocr-0.6.3/mmocr/models/textdet/__init__.py", line 2, in <module>
    from . import dense_heads, detectors, losses, necks, postprocess
  File "/data1/mmproject/mmocr-0.6.3/mmocr/models/textdet/dense_heads/__init__.py", line 3, in <module>
    from .drrg_head import DRRGHead
  File "/data1/mmproject/mmocr-0.6.3/mmocr/models/textdet/dense_heads/drrg_head.py", line 11, in <module>
    from mmocr.models.textdet.modules import GCN, LocalGraphs, ProposalLocalGraphs
  File "/data1/mmproject/mmocr-0.6.3/mmocr/models/textdet/modules/__init__.py", line 4, in <module>
    from .proposal_local_graph import ProposalLocalGraphs
  File "/data1/mmproject/mmocr-0.6.3/mmocr/models/textdet/modules/proposal_local_graph.py", line 8, in <module>
    from mmocr.models.textdet.postprocess.utils import fill_hole
  File "/data1/mmproject/mmocr-0.6.3/mmocr/models/textdet/postprocess/__init__.py", line 8, in <module>
    from .textsnake_postprocessor import TextSnakePostprocessor
  File "/data1/mmproject/mmocr-0.6.3/mmocr/models/textdet/postprocess/textsnake_postprocessor.py", line 6, in <module>
    from skimage.morphology import skeletonize
  File "/usr/local/miniconda3/lib/python3.8/site-packages/skimage/__init__.py", line 125, in <module>
    from .util.dtype import (img_as_float32,
  File "/usr/local/miniconda3/lib/python3.8/site-packages/skimage/util/__init__.py", line 17, in <module>
    from ._map_array import map_array
  File "/usr/local/miniconda3/lib/python3.8/site-packages/skimage/util/_map_array.py", line 2, in <module>
    from ._remap import _map_array
  File "skimage/util/_remap.pyx", line 1, in init skimage.util._remap
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 96 from C header, got 80 from PyObject
  • I solved this problem by upgrading scikit-image:
pip install scikit-image --upgrade
  • The training loss is larger than in the pretrained model's log file:
{"mode": "train", "epoch": 1200, "iter": 5, "lr": 1e-05, "memory": 6690, "data_time": 2.07332, "loss_prob": 2.81168, "loss_db": 0.99994, "loss_thr": 1.15052, "loss": 4.96214, "time": 2.85689}
{"mode": "train", "epoch": 1200, "iter": 10, "lr": 1e-05, "memory": 6690, "data_time": 0.58603, "loss_prob": 2.81168, "loss_db": 0.99995, "loss_thr": 1.13788, "loss": 4.94951, "time": 1.27871}
{"mode": "train", "epoch": 1200, "iter": 15, "lr": 1e-05, "memory": 6690, "data_time": 0.09068, "loss_prob": 2.81168, "loss_db": 0.9999, "loss_thr": 1.1373, "loss": 4.94888, "time": 0.77812}
{"mode": "train", "epoch": 1200, "iter": 20, "lr": 1e-05, "memory": 6690, "data_time": 0.36837, "loss_prob": 2.81168, "loss_db": 0.99995, "loss_thr": 1.13829, "loss": 4.94992, "time": 0.94002}
{"mode": "train", "epoch": 1200, "iter": 25, "lr": 1e-05, "memory": 6690, "data_time": 0.24403, "loss_prob": 2.81168, "loss_db": 0.99995, "loss_thr": 1.15232, "loss": 4.96394, "time": 0.82041}
{"mode": "train", "epoch": 1200, "iter": 30, "lr": 1e-05, "memory": 6690, "data_time": 0.24629, "loss_prob": 2.81168, "loss_db": 0.99995, "loss_thr": 1.14834, "loss": 4.95997, "time": 0.83338}
{"mode": "train", "epoch": 1200, "iter": 35, "lr": 1e-05, "memory": 6690, "data_time": 0.23009, "loss_prob": 2.81168, "loss_db": 0.99375, "loss_thr": 1.14394, "loss": 4.94937, "time": 0.79826}
{"mode": "train", "epoch": 1200, "iter": 40, "lr": 1e-05, "memory": 6690, "data_time": 0.26886, "loss_prob": 2.81168, "loss_db": 0.99995, "loss_thr": 1.14935, "loss": 4.96098, "time": 0.92835}
{"mode": "train", "epoch": 1200, "iter": 45, "lr": 1e-05, "memory": 6690, "data_time": 0.35844, "loss_prob": 2.81168, "loss_db": 0.98607, "loss_thr": 1.14583, "loss": 4.94359, "time": 0.95987}
{"mode": "train", "epoch": 1200, "iter": 50, "lr": 1e-05, "memory": 6690, "data_time": 0.23196, "loss_prob": 2.81168, "loss_db": 0.99237, "loss_thr": 1.13636, "loss": 4.94041, "time": 0.77791}
{"mode": "train", "epoch": 1200, "iter": 55, "lr": 1e-05, "memory": 6690, "data_time": 0.22315, "loss_prob": 2.81168, "loss_db": 0.99995, "loss_thr": 1.15512, "loss": 4.96675, "time": 0.79925}
{"mode": "train", "epoch": 1200, "iter": 60, "lr": 1e-05, "memory": 6690, "data_time": 0.33663, "loss_prob": 2.81168, "loss_db": 0.99995, "loss_thr": 1.1542, "loss": 4.96583, "time": 0.84648}
{"mode": "val", "epoch": 1200, "iter": 500, "lr": 1e-05, "0_hmean-iou:recall": 0.0, "0_hmean-iou:precision": 0.0, "0_hmean-iou:hmean": 0.0}

Results

  • I evaluated the model with python tools/test.py configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py dbnet/latest.pth --eval hmean-iou and got this result:
Evaluating ../icdar2015/instances_test.json with 500 images now

Evaluating hmean-iou...
thr 0.30, recall: 0.000, precision: 0.000, hmean: 0.000
thr 0.40, recall: 0.000, precision: 0.000, hmean: 0.000
thr 0.50, recall: 0.000, precision: 0.000, hmean: 0.000
thr 0.60, recall: 0.000, precision: 0.000, hmean: 0.000
thr 0.70, recall: 0.000, precision: 0.000, hmean: 0.000
thr 0.80, recall: 0.000, precision: 0.000, hmean: 0.000
thr 0.90, recall: 0.000, precision: 0.000, hmean: 0.000
{'0_hmean-iou:recall': 0.0, '0_hmean-iou:precision': 0.0, '0_hmean-iou:hmean': 0.0}
  • I also evaluated the model downloaded from https://download.openmmlab.com/mmocr/textdet/dbnet/dbnet_r18_fpnc_sbn_1200e_icdar2015_20210329-ba3ab597.pth with python tools/test.py configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py ../dbnet_r18_fpnc_sbn_1200e_icdar2015_20210329-ba3ab597.pth --eval hmean-iou and got this result:
Evaluating ../icdar2015/instances_test.json with 500 images now

Evaluating hmean-iou...
thr 0.30, recall: 0.764, precision: 0.749, hmean: 0.757
thr 0.40, recall: 0.764, precision: 0.763, hmean: 0.764
thr 0.50, recall: 0.763, precision: 0.793, hmean: 0.778
thr 0.60, recall: 0.756, precision: 0.833, hmean: 0.793
thr 0.70, recall: 0.731, precision: 0.871, hmean: 0.795
thr 0.80, recall: 0.577, precision: 0.926, hmean: 0.711
thr 0.90, recall: 0.073, precision: 0.950, hmean: 0.136
{'0_hmean-iou:recall': 0.7308618199325951, '0_hmean-iou:precision': 0.8714121699196326, '0_hmean-iou:hmean': 0.7949725058915947}
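Since the downloaded checkpoint evaluates fine, as an additional sanity check I can also run single-image inference on my locally trained checkpoint with the 0.x API; a minimal sketch (the test image path is just a placeholder for my local setup):

from mmocr.apis import init_detector, model_inference

cfg = 'configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py'
ckpt = 'dbnet/latest.pth'  # the locally trained checkpoint

model = init_detector(cfg, ckpt, device='cuda:0')
# placeholder path; any ICDAR2015 test image should do
result = model_inference(model, '../icdar2015/imgs/test/img_1.jpg')
# for DBNet the result is expected to contain a 'boundary_result' list;
# an empty list would mean the model detects nothing at all
print(result)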

wangxianrui avatar Jan 06 '23 08:01 wangxianrui

TBH I can't tell the reason either. Did you use 1 GPU only so that the actual batch size (samples_per_gpu * num_gpus) is 16? Different combinations of learning rate and batch size would result in different training results. Another possible reason would be the issue of PyTorch. It is known that PyTorch 1.10 has some bugs that could affect the performance of some of our models, though we are still investigating the reasons. You may try to upgrade PyTorch to 1.11 or downgrade it to 1.9 for a stable experience.
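If you do stick to a single GPU, you could also try scaling the learning rate down roughly in proportion to the effective batch size. A rough sketch of such an edit, assuming the base config defines an SGD optimizer (both the base lr of 0.007 and the divisor of 4 below are illustrative, not confirmed values):

# illustrative only: divide the base learning rate by the ratio of the reference
# effective batch size to your actual effective batch size
optimizer = dict(type='SGD', lr=0.007 / 4, momentum=0.9, weight_decay=0.0001)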

As a side note, it seems you were reading the docs for MMOCR 1.0 but still using MMOCR 0.6.3. In this particular case, the dataset preparation steps work for both versions. However, we will no longer maintain 0.x version, and it's a better choice to upgrade to MMOCR 1.0 asap for an overall better experience and support.

gaotongxiao avatar Jan 06 '23 11:01 gaotongxiao

Batch size may cause some performance degradation, but it should not be the main cause of a 0 hmean-iou. I will also try another PyTorch version. Thank you for your advice.

The docs and MMOCR versions are consistent; both are 0.6.3, and I just pasted the wrong link. I use 0.x MMOCR because I am familiar with the older MMCV and MMDetection. If upgrading PyTorch does not work, I will try upgrading to the 1.0 version.

Besides, after installing mmcv, mmdet, and mmocr, I could not run the training script until I upgraded scikit-image. Could that indicate a problem?

Thanks!

The pip list after upgrading scikit-image is:
absl-py 1.0.0 addict 2.4.0 anyio 3.6.2 argon2-cffi 21.3.0 argon2-cffi-bindings 21.2.0 arrow 1.2.3 asttokens 2.2.1 attrs 22.2.0 backcall 0.2.0 beautifulsoup4 4.11.1 bleach 5.0.1 brotlipy 0.7.0 certifi 2020.12.5 cffi 1.14.3 chardet 3.0.4 cmake 3.22.0 comm 0.1.2 conda 4.9.2 conda-package-handling 1.7.2 cryptography 3.2.1 cycler 0.11.0 Cython 0.29.25 debugpy 1.6.5 decorator 5.1.1 defusedxml 0.7.1 easydict 1.9 entrypoints 0.4 executing 1.2.0 fastjsonschema 2.16.2 fastrlock 0.8 fire 0.4.0 fonttools 4.28.3 fqdn 1.5.1 future 0.18.2 grpcio 1.42.0 idna 2.10 imageio 2.13.3 imgaug 0.4.0.1 importlib-metadata 4.8.2 importlib-resources 5.10.2 ipykernel 6.19.4 ipython 8.8.0 ipython-genutils 0.2.0 isoduration 20.11.0 jedi 0.18.2 Jinja2 3.1.2 jsonpointer 2.3 jsonschema 4.17.3 jupyter_client 7.4.8 jupyter_core 5.1.2 jupyter-events 0.5.0 jupyter_server 2.0.6 jupyter_server_terminals 0.4.3 jupyterlab-pygments 0.2.2 kiwisolver 1.3.2 lanms-neo 1.0.2 lmdb 1.4.0 Markdown 3.3.6 MarkupSafe 2.1.1 matplotlib 3.5.0 matplotlib-inline 0.1.6 mistune 2.0.4 mmcv-full 1.7.1 /data1/mmproject/mmcv-1.7.1 mmdet 2.26.0 /data1/mmproject/mmdetection-2.26.0 mmocr 0.6.3 /data1/mmproject/mmocr-0.6.3 nbclassic 0.4.8 nbclient 0.7.2 nbconvert 7.2.7 nbformat 5.7.1 nest-asyncio 1.5.6 networkx 2.6.3 ninja 1.10.2.3 notebook 6.5.2 notebook_shim 0.2.2 numpy 1.21.4 onnx 1.8.0 opencv-python 4.7.0.68 packaging 21.3 pandas 1.3.4 pandocfilters 1.5.0 parso 0.8.3 pexpect 4.8.0 pickleshare 0.7.5 Pillow 8.4.0 pip 21.3.1 pkgutil_resolve_name 1.3.10 platformdirs 2.6.2 pprint 0.1 prometheus-client 0.15.0 prompt-toolkit 3.0.36 protobuf 3.19.1 psutil 5.9.4 ptyprocess 0.7.0 pure-eval 0.2.2 pyclipper 1.3.0.post4 pycocotools 2.0.6 pycosat 0.6.3 pycparser 2.20 Pygments 2.14.0 pyOpenSSL 19.1.0 pyparsing 3.0.6 pyrsistent 0.19.3 PySocks 1.7.1 python-dateutil 2.8.2 python-json-logger 2.0.4 pytz 2021.3 PyWavelets 1.2.0 PyYAML 6.0 pyzmq 24.0.1 rapidfuzz 2.13.7 requests 2.24.0 rfc3339-validator 0.1.4 rfc3986-validator 0.1.1 ruamel_yaml 0.15.87 scikit-image 0.19.3 scipy 1.7.3 Send2Trash 1.8.0 setuptools 50.3.1.post20201107 setuptools-scm 6.3.2 shapely 2.0.0 six 1.15.0 sniffio 1.3.0 soupsieve 2.3.2.post1 stack-data 0.6.2 tensorboard 1.15.0 termcolor 1.1.0 terminado 0.17.1 terminaltables 3.1.10 tifffile 2021.11.2 tinycss2 1.2.1 tomli 1.2.2 torch 1.10.0+cu113 torchstat 0.0.7 torchsummary 1.5.1 torchvision 0.11.1+cu113 tornado 6.2 tqdm 4.51.0 traitlets 5.8.0 typing 3.7.4.3 typing_extensions 4.0.1 uri-template 1.2.0 urllib3 1.25.11 wcwidth 0.2.5 webcolors 1.12 webencodings 0.5.1 websocket-client 1.4.2 Werkzeug 2.0.2 wheel 0.35.1 yapf 0.32.0 zipp 3.6.0

wangxianrui avatar Jan 07 '23 02:01 wangxianrui

scikit-image is only responsible for visualization and hence is not the problem.

gaotongxiao avatar Jan 11 '23 03:01 gaotongxiao

You can reduce the learning rate.
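For example, if tools/train.py in MMOCR 0.x accepts --cfg-options like other OpenMMLab repos of that generation, the learning rate can be lowered without editing the config file (the value below is only illustrative):

python tools/train.py configs/textdet/dbnet/dbnet_r18_fpnc_1200e_icdar2015.py --work-dir dbnet --cfg-options optimizer.lr=0.001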

roomeo avatar Jan 12 '23 03:01 roomeo

Same for me, following this tutorial: https://mmocr.readthedocs.io/en/dev-1.x/get_started/quick_run.html

with this setup:

------------------------------------------------------------
System environment:
    sys.platform: linux
    Python: 3.9.16 (main, Dec  7 2022, 01:12:08) [GCC 11.3.0]
    CUDA available: True
    numpy_random_seed: 914112880
    GPU 0: NVIDIA GeForce RTX 3060
    CUDA_HOME: /usr/local/cuda
    NVCC: Cuda compilation tools, release 11.7, V11.7.64
    GCC: x86_64-linux-gnu-gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
    PyTorch: 1.13.1+cu117
    PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.7
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.5
  - Magma 2.6.1
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.7, CUDNN_VERSION=8.5.0, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.13.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

    TorchVision: 0.14.1+cu117
    OpenCV: 4.7.0
    MMEngine: 0.4.0

Runtime environment:
    cudnn_benchmark: True
    mp_cfg: {'mp_start_method': 'fork', 'opencv_num_threads': 0}
    dist_cfg: {'backend': 'nccl'}
    seed: None
    Distributed launcher: none
    Distributed training: False
    GPU number: 1
------------------------------------------------------------

maxi-w avatar Jan 18 '23 19:01 maxi-w

I couldn't get it to work with a custom dataset.

yCobanoglu avatar Feb 14 '23 08:02 yCobanoglu

Thank you all for the feedback. We will add a Colab notebook to ensure the results are reproducible.

gaotongxiao avatar Feb 14 '23 09:02 gaotongxiao