Bringing-Old-Photos-Back-to-Life

CUDA out of memory. Tried to allocate 171.24 GiB

Open longit123 opened this issue 2 years ago • 2 comments

Description: The example images in the test_images folder run successfully, but an image I found on Google fails with the error below.

Environment: Python 3.9.13, CUDA 11.7, torch 1.13.1+cu117, torchaudio 0.13.1+cu117, torchvision 0.14.1+cu117


D:\github\Bringing-Old-Photos-Back-to-Life>python run.py --input_folder D:/github/Bringing-Old-Photos-Back-to-Life/test_images/test_1 --output_folder D:/github/Bringing-Old-Photos-Back-to-Life/output/ --GPU 0 --with_scratch
Running Stage 1: Overall restoration
initializing the dataloader
model weights loaded
directory of testing image: D:\github\Bringing-Old-Photos-Back-to-Life\test_images\test_1
processing 20180404104855285.jpeg
You are using NL + Res
Now you are processing 20180404104855285..png
Skip 20180404104855285..png due to an error:
CUDA out of memory. Tried to allocate 171.24 GiB (GPU 0; 8.00 GiB total capacity; 1.94 GiB already allocated; 4.00 GiB free; 2.94 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Finish Stage 1 ...


Running Stage 2: Face Detection
Finish Stage 2 ...


Running Stage 3: Face Enhancement
The main GPU is
0
dataset [FaceTestDataset] of size 0 was created
The size of the latent vector size is [8,8]
Network [SPADEGenerator] was created. Total number of parameters: 92.1 million. To see the architecture, do print(network).
hi :)
Finish Stage 3 ...


Running Stage 4: Blending
Finish Stage 4 ...


All the processing is done. Please check the results.
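
For scale: if the failing allocation is the (H·W) x (H·W) float32 attention matrix of the "NL + Res" non-local block computed on a 4x downsampled feature map (both the downsample factor and the exact layer are assumptions here, only the quadratic growth matters), a back-of-the-envelope check shows why a large web photo fails while the small bundled test images do not:

# Rough estimate only; the function and its constants are illustrative, not from the repo.
def nl_attention_bytes(img_h, img_w, downsample=4, dtype_bytes=4):
    # spatial positions in the (assumed) 4x downsampled feature map
    n = (img_h // downsample) * (img_w // downsample)
    # one full N x N float32 attention matrix
    return n * n * dtype_bytes

print(nl_attention_bytes(700, 500) / 2**30)    # ~1.8 GiB, fits an 8 GiB card
print(nl_attention_bytes(2000, 1500) / 2**30)  # ~131 GiB, hopeless on any single GPU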

longit123 avatar Feb 08 '23 08:02 longit123

Have the same problem:

Mapping: You are using multi-scale patch attention, conv combine + mask input
Now you are processing img435.png
Traceback (most recent call last):
  File "/home/jacob/Bringing-Old-Photos-Back-to-Life/Global/test.py", line 168, in
    generated = model.inference(input, mask)
  File "/home/jacob/Bringing-Old-Photos-Back-to-Life/Global/models/mapping_model.py", line 333, in inference
    label_feat = self.netG_A.forward(input_concat, flow="enc")
  File "/home/jacob/Bringing-Old-Photos-Back-to-Life/Global/models/networks.py", line 287, in forward
    return self.encoder(input)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
torch.cuda.OutOfMemoryError: HIP out of memory. Tried to allocate 7.72 GiB (GPU 0; 7.98 GiB total capacity; 1.13 GiB already allocated; 6.82 GiB free; 1.17 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_HIP_ALLOC_CONF
Finish Stage 1 ...
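
Both error messages end with the same allocator hint. A minimal sketch of applying it from the top of run.py, assuming the stage scripts run as child processes and inherit this environment; it can reduce fragmentation, but it cannot satisfy a request larger than the card, so an oversized input still has to be shrunk:

import os

# Set before the first CUDA/HIP allocation happens in any child process.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")  # NVIDIA/CUDA builds
os.environ.setdefault("PYTORCH_HIP_ALLOC_CONF", "max_split_size_mb:128")   # AMD/ROCm builds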

HeliosDK avatar Feb 13 '23 20:02 HeliosDK

Try removing --with_scratch or reducing the size of the image
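
A minimal sketch of the second suggestion, assuming Pillow is installed and the folder layout from the command above; the helper name and the 1200 px cap are illustrative, not part of the repo:

from PIL import Image
import os

def downscale_folder(src_dir, dst_dir, max_side=1200):
    # Cap the longer side of every input image before handing the folder to run.py.
    # The OOM above grows with input resolution, so shrinking the photo is usually
    # enough on an 8 GiB card.
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        if not name.lower().endswith((".png", ".jpg", ".jpeg")):
            continue
        img = Image.open(os.path.join(src_dir, name)).convert("RGB")
        scale = max_side / max(img.size)
        if scale < 1.0:  # only shrink, never enlarge
            img = img.resize((round(img.width * scale), round(img.height * scale)), Image.LANCZOS)
        img.save(os.path.join(dst_dir, name))

downscale_folder("D:/github/Bringing-Old-Photos-Back-to-Life/test_images/test_1",
                 "D:/github/Bringing-Old-Photos-Back-to-Life/test_images/test_1_small")

Then point --input_folder at the downscaled copy instead of the original folder.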

CristianUrbanoF avatar Apr 12 '23 23:04 CristianUrbanoF