deep-text-recognition-benchmark

CUDA out of memory.

Open penghusile opened this issue 3 years ago • 1 comment

Hello, when reproducing the ViT-tiny model, I used four 2080 Ti GPUs with your configuration, but it still does not work: it reports CUDA out of memory. What could be the reason? My training script is as follows:

RANDOM=$$
GPU=0,1,2,3
CUDA_VISIBLE_DEVICES=${GPU} \
python3 train.py --train_data data_lmdb_release/training \
--valid_data data_lmdb_release/evaluation \
--select_data MJ-ST \
--batch_ratio 0.5-0.5 \
--Transformation None \
--FeatureExtraction None \
--SequenceModeling None \
--Prediction None \
--Transformer \
--TransformerModel vitstr_tiny_patch16_224 \
--imgH 224 \
--imgW 224 \
--manualSeed=$RANDOM \
--sensitive \
--valInterval 5000 \
--workers 6 \
--batch_size 48

penghusile avatar Nov 28 '21 06:11 penghusile
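
One thing worth checking before blaming the configuration is whether all four GPUs are actually free when training starts; if another process already holds memory on one of them, the run can hit CUDA out of memory even though the batch itself would fit on an idle card. Below is a minimal sketch (not part of this repository) that prints free versus total memory on every visible GPU; it assumes a reasonably recent PyTorch (1.10 or newer) for torch.cuda.mem_get_info.

import torch

# Print free vs. total memory for each visible GPU before launching training.
for i in range(torch.cuda.device_count()):
    free, total = torch.cuda.mem_get_info(i)  # requires PyTorch >= 1.10
    name = torch.cuda.get_device_properties(i).name
    print(f"GPU {i} ({name}): {free / 1024**3:.1f} GiB free of {total / 1024**3:.1f} GiB")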

It should run even on a single GPU. For instance, when running the same script, the memory consumption (as reported by nvidia-smi) is:

| 3 N/A N/A 125879 C python3 9087MiB |

roatienza avatar Nov 28 '21 07:11 roatienza
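
For comparison with the figure above, peak allocator usage of the training process can also be measured from PyTorch itself. The following is a minimal sketch around a single training step (the forward/backward call is a placeholder, not code from train.py); the number it reports will typically be somewhat lower than the nvidia-smi value, since nvidia-smi also counts the CUDA context and the allocator's cached blocks.

import torch

# Reset the peak-memory counter before the step being measured.
torch.cuda.reset_peak_memory_stats()

# ... run one forward/backward pass of the model here (placeholder) ...

# Report the peak memory allocated by PyTorch tensors during the step.
peak = torch.cuda.max_memory_allocated() / 1024**3
print(f"peak allocated by PyTorch: {peak:.2f} GiB")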