torch.cuda.OutOfMemoryError: CUDA out of memory.
Hi everyone, I ran this project on both a 4090 and an A100 (40G) and hit the same error on both machines, saying there isn't enough GPU memory:
text_config_dict is provided which will be used to initialize CLIPTextConfig. The value text_config["id2label"] will be overriden.
text_config_dict is provided which will be used to initialize CLIPTextConfig. The value text_config["bos_token_id"] will be overriden.
text_config_dict is provided which will be used to initialize CLIPTextConfig. The value text_config["eos_token_id"] will be overriden.
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:04<00:00, 4.62s/it]
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:06<00:00, 6.23s/it]
Initial seed: 1536610237
0%| | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/root/autodl-tmp/.tr/ot/run/run_ootd.py", line 71, in
What could be causing this? Also, how much GPU memory do the cards you're running this on have?
same problem
I used to run into this issue. Loading and processing PNG files seems to be expensive; try using JPG/JPEG files instead and it should be sorted.
Despite the resizing in run_ootd.py, the model and cloth images need to be 768x1024. Using arbitrary image resolutions causes CUDA out-of-memory errors.
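To illustrate the point above: a minimal sketch of a scale-then-center-crop computation that brings an arbitrary input resolution to the required 768x1024 before running the pipeline. The function name `fit_to_target` is hypothetical, not part of run_ootd.py; the 768x1024 requirement is taken from this thread.

```python
TARGET_W, TARGET_H = 768, 1024  # resolution the model expects, per this thread

def fit_to_target(w, h, tw=TARGET_W, th=TARGET_H):
    """Return (new_w, new_h, crop_box): scale the image so it covers
    tw x th while preserving aspect ratio, then center-crop to tw x th."""
    scale = max(tw / w, th / h)          # cover, not fit, the target box
    nw, nh = round(w * scale), round(h * scale)
    left = (nw - tw) // 2
    top = (nh - th) // 2
    return nw, nh, (left, top, left + tw, top + th)
```

With PIL this would be applied as `img.resize((nw, nh)).crop(box)` before passing the image to the model.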
Same problem with an A100 (40G). I tried using xformers to solve it, and it works. But my question is why the authors can run it successfully without xformers.
@XinZhang0526 Can you please share your Python version and requirements.txt? I installed xformers but encounter the same error. Thanks.
How do you add xformers in the code? Could you show the specific code?
@XinZhang0526 Could you please share your pip list? I need to see which version of xformers works.
@nitinmukesh @Borismartirosyan In fact, here is my version of xformers: xformers==0.0.22, torch==1.13.1+cu116

unet_vton.enable_xformers_memory_efficient_attention()
unet_garm.enable_xformers_memory_efficient_attention()

By the way, torch >= 2.0 is recommended.
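A minimal sketch of how the two calls above could be wrapped so the script still runs when xformers is not installed. The `unet_vton`/`unet_garm` names come from this thread; where those modules are created inside run_ootd.py is an assumption, and the `try_enable_xformers` helper is hypothetical, not part of the project.

```python
def try_enable_xformers(*modules):
    """Enable xformers memory-efficient attention on each module,
    falling back silently to default attention if xformers is
    missing or incompatible. Returns the modules that were enabled."""
    enabled = []
    for m in modules:
        try:
            m.enable_xformers_memory_efficient_attention()
            enabled.append(m)
        except Exception:
            pass  # xformers not installed; keep the default attention
    return enabled

# After loading the two UNets (e.g. in run_ootd.py, before inference):
# try_enable_xformers(unet_garm, unet_vton)
```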
Which file should these two lines of code be added to?