
CUDA out of memory

Open wushouwu opened this issue 7 months ago • 0 comments

I put 200 images of 128*128 px in style_samples and ran python user_generate.py --pretrained_model ./checkpoint/checkpoint-iter199999.pth --style_path ./style_samples, and it worked fine.

I want to add enough samples (6,000 images) so that the generated font looks more like the sample font.

I then put 255 images of 128*128 px in style_samples and ran python user_generate.py --pretrained_model ./checkpoint/checkpoint-iter199999.pth --style_path ./style_samples, and it failed with an out-of-memory error.

With 6,000 samples, how should the configuration be adjusted so that the script runs normally and does not raise the following error?

Traceback (most recent call last):
  File "D:\Downloads\SDT\user_generate.py", line 80, in <module>
    main(opt)
  File "D:\Downloads\SDT\user_generate.py", line 52, in main
    preds = model.inference(img_list, char_img, 120)
  File "D:\Downloads\SDT\models\model.py", line 156, in inference
    style_embe = self.Feat_Encoder(style_imgs)
  File "C:\Python310\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Python310\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Python310\lib\site-packages\torch\nn\modules\container.py", line 250, in forward
    input = module(input)
  File "C:\Python310\lib\site-packages\torch\nn\modules\module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Python310\lib\site-packages\torch\nn\modules\module.py", line 1750, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Python310\lib\site-packages\torch\nn\modules\batchnorm.py", line 193, in forward
    return F.batch_norm(
  File "C:\Python310\lib\site-packages\torch\nn\functional.py", line 2822, in batch_norm
    return torch.batch_norm(
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.98 GiB. GPU 0 has a total capacity of 4.00 GiB of which 0 bytes is free. Of the allocated memory 4.76 GiB is allocated by PyTorch, and 4.29 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
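
Note: the traceback shows the failure at the point where model.inference feeds every style image to self.Feat_Encoder in a single batch, so memory use grows with the number of style samples. A minimal workaround sketch (not part of the SDT code; the encode_styles_chunked helper and the chunk size of 64 are assumptions) would be to encode the style images in smaller chunks and concatenate the embeddings:

```python
import torch

def encode_styles_chunked(feat_encoder, style_imgs, chunk_size=64):
    """Hypothetical helper: run the style encoder over the (N, C, H, W) style
    images in chunks so that peak GPU memory stays bounded no matter how many
    style samples are supplied."""
    embeddings = []
    with torch.no_grad():  # inference only, gradients are not needed
        for chunk in torch.split(style_imgs, chunk_size, dim=0):
            embeddings.append(feat_encoder(chunk))
    return torch.cat(embeddings, dim=0)

# In models/model.py, inference() could then (hypothetically) call
#   style_embe = encode_styles_chunked(self.Feat_Encoder, style_imgs)
# instead of
#   style_embe = self.Feat_Encoder(style_imgs)
```

The PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True setting suggested in the error message only mitigates fragmentation; it does not reduce the memory needed to encode thousands of 128*128 images at once on a 4 GiB GPU.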

wushouwu · Jun 15 '25 15:06