SeeSR
CUDA out of memory (training process)
Hello! I used the settings below and ran the training code on an NVIDIA GeForce RTX 3090 (24 GB), but I hit a CUDA out-of-memory error. Is the 3090's VRAM simply insufficient for training?
Single GPU:

```shell
CUDA_VISIBLE_DEVICES="0," accelerate launch train_seesr.py \
  --pretrained_model_name_or_path="preset/models/stable-diffusion-2-base" \
  --output_dir="./experience/seesr" \
  --root_folders 'preset/datasets/train_datasets/training_for_seesr' \
  --ram_ft_path 'preset/models/DAPE.pth' \
  --enable_xformers_memory_efficient_attention \
  --mixed_precision="fp16" \
  --resolution=512 \
  --learning_rate=5e-5 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=2 \
  --null_text_ratio=0.5 \
  --dataloader_num_workers=0 \
  --checkpointing_steps=10000
```
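For reference, two memory mitigations commonly tried with diffusers-style training scripts are shown below. `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:...` is a documented PyTorch allocator setting that can reduce fragmentation-related OOMs; `--gradient_checkpointing` is an assumption here — it is a standard flag in many diffusers training scripts, but check whether `train_seesr.py` actually accepts it before relying on it.

```shell
# Documented PyTorch env var: limit allocator block splitting to reduce
# fragmentation, which can turn borderline OOMs into successful runs.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

# --gradient_checkpointing trades extra compute for lower activation
# memory. ASSUMPTION: verify train_seesr.py's argument parser supports
# this flag; it is common in diffusers scripts but not confirmed here.
CUDA_VISIBLE_DEVICES="0," accelerate launch train_seesr.py \
  --gradient_checkpointing \
  --mixed_precision="fp16" \
  --train_batch_size=1 \
  --gradient_accumulation_steps=2
  # ...plus the remaining flags from the command above
```

If the script has no such flag, calling `unet.enable_gradient_checkpointing()` inside the training code achieves the same effect for diffusers UNet models.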