GPU out of memory
When running on GPU, I get this error:
Loaded detection model vikp/surya_det3 on device cuda with dtype torch.float16
Loaded recognition model vikp/surya_rec2 on device cuda with dtype torch.float16
Detecting bboxes: 0%| | 0/6 [00:04<?, ?it/s]
Traceback (most recent call last):
File "/root/miniconda3/envs/surya/bin/surya_ocr", line 8, in
But nvidia-smi shows both GPUs idle with no memory in use:
Mon Oct 14 02:30:44 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.120 Driver Version: 550.120 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3080 Off | 00000000:4B:00.0 Off | N/A |
| 42% 43C P0 87W / 320W | 1MiB / 10240MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
| 1 NVIDIA GeForce RTX 3080 Off | 00000000:B1:00.0 Off | N/A |
| 42% 36C P0 83W / 320W | 1MiB / 10240MiB | 0% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
Try reducing the *_BATCH_SIZE parameters.
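For example, a minimal sketch assuming the DETECTOR_BATCH_SIZE and RECOGNITION_BATCH_SIZE environment variables that surya reads (check the README for your installed version; the values below are hypothetical starting points to tune downward until the OOM disappears):

```shell
# Hypothetical values -- halve repeatedly until the run fits in memory.
export DETECTOR_BATCH_SIZE=6
export RECOGNITION_BATCH_SIZE=32
# Suggested by the PyTorch OOM message itself, to reduce fragmentation:
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
surya_ocr ./my_images
```

Smaller batches trade throughput for peak memory, so this usually just makes the run slower rather than failing.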
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.03 GiB. GPU 0 has a total capacity of 3.80 GiB of which 938.56 MiB is free. Process 3323 has 6.15 MiB memory in use. Including non-PyTorch memory, this process has 2.87 GiB memory in use. Of the allocated memory 2.77 GiB is allocated by PyTorch, and 6.00 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
I am getting the same issue. Can anybody help with how to resolve this?
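As a rough sanity check, the numbers in the OOM message above already tell you how much to shrink the batch. This sketch assumes the failed allocation scales roughly linearly with batch size (a common pattern for activation tensors, not verified for surya specifically):

```python
# Back-of-envelope sizing from the traceback:
# "Tried to allocate 1.03 GiB" while only "938.56 MiB is free".
GIB = 1024 ** 3
MIB = 1024 ** 2

tried = 1.03 * GIB    # size of the allocation that failed
free = 938.56 * MIB   # free memory reported on GPU 0

assert tried > free        # this is why the run crashed
assert tried / 2 < free    # a 2x smaller batch would likely fit

print(f"shrink factor needed: {tried / free:.2f}x")  # prints "shrink factor needed: 1.12x"
```

So cutting the relevant batch size in half is a reasonable first attempt; if the model weights themselves barely fit (the 3.80 GiB card in the message is much smaller than the 10 GiB cards in the nvidia-smi output, so these may be different machines), no batch size will help and a larger GPU is needed.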