waifu2x
Use Shared GPU Memory instead of Dedicated GPU Memory
Hi! I wanted to ask if there is a way to utilize the shared GPU memory instead of the dedicated memory. Even though it is a little slower, there is a lot more of it.
(I may not understand what you are asking about.) I do not know whether there is shared memory for GPUs. For the training process, you can use NVIDIA NCCL (NVIDIA Collective Communications Library) for multi-GPU training, if you have installed NVIDIA NCCL and my modified version of nccl.torch.
If you get a GPU out-of-memory error, you can avoid it with the -crop_size option (e.g. -crop_size 128).
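For example, a smaller crop size can be passed on the command line; the exact invocation below is only an illustration following the usual waifu2x.lua usage, and the input/output paths are placeholders:

th waifu2x.lua -m noise_scale -i input.png -o output.png -crop_size 128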
It is the shared memory Windows allocates to a GPU in the event you run out of VRAM during a game. In gaming, the driver handles this by spilling VRAM contents into system RAM. CUDA supports something similar through unified (managed) memory, but it requires explicit programming to use it.
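To illustrate what that "explicit programming" would look like at the CUDA level (this is a generic sketch, not part of waifu2x or Torch, and as far as I know cutorch allocates with plain cudaMalloc rather than managed memory, so it would not benefit from this automatically): cudaMallocManaged gives a pointer usable from both the CPU and the GPU, and on Pascal and newer GPUs pages are migrated on demand, so a managed allocation can exceed the dedicated VRAM at the cost of migration overhead.

```cpp
// Minimal sketch of CUDA unified (managed) memory. The array size, kernel,
// and launch configuration are arbitrary examples for illustration only.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(float *data, float factor, size_t n) {
    size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const size_t n = 1 << 24;           // ~64 MB of floats
    float *data = nullptr;

    // One allocation visible to both host and device; pages migrate on demand,
    // which is what allows oversubscribing dedicated VRAM on supported GPUs.
    cudaMallocManaged(&data, n * sizeof(float));

    for (size_t i = 0; i < n; ++i) data[i] = 1.0f;   // touched on the CPU

    scale<<<(n + 255) / 256, 256>>>(data, 2.0f, n);  // touched on the GPU
    cudaDeviceSynchronize();

    printf("data[0] = %f\n", data[0]);               // read back on the CPU
    cudaFree(data);
    return 0;
}
```

Since Torch's CUDA tensors do not go through this path, falling back to shared/system memory would need changes in the underlying allocator, not just in waifu2x itself.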