FT-w2v2-ser
run_downstream_custom_multiple_fold.py CUDA out of memory
I got the following error when running run_downstream_custom_multiple_fold.py:

RuntimeError: CUDA out of memory. Tried to allocate 730.00 MiB (GPU 0; 23.70 GiB total capacity; 21.65 GiB already allocated; 426.81 MiB free; 21.81 GiB reserved in total by PyTorch)
I have NVIDIA GeForce RTX 3090 with 24GB.
Any insights on how to work around it?
Me too. I think we have to use multiple GPUs.
@zxpan You can reduce the batch size from 64 to 32.
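If a smaller batch hurts convergence, gradient accumulation is a common alternative: keep the effective batch at 64 but only hold a smaller micro-batch on the GPU at a time. The sketch below is illustrative, not code from the FT-w2v2-ser repo; the tiny linear model and batch sizes are placeholder assumptions standing in for the wav2vec 2.0 downstream head.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Placeholder for the actual downstream classifier head.
model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

effective_batch = 64   # the batch size the recipe was tuned for
micro_batch = 16       # what actually fits in GPU memory
accum_steps = effective_batch // micro_batch

x = torch.randn(effective_batch, 10)
y = torch.randint(0, 2, (effective_batch,))

optimizer.zero_grad()
for i in range(accum_steps):
    xb = x[i * micro_batch:(i + 1) * micro_batch]
    yb = y[i * micro_batch:(i + 1) * micro_batch]
    # Divide by accum_steps so the summed gradients equal one
    # mean-reduced loss over the full effective batch.
    loss = loss_fn(model(xb), yb) / accum_steps
    loss.backward()
optimizer.step()  # one optimizer update with effective batch size 64
```

Because CrossEntropyLoss mean-reduces each micro-batch, scaling each loss by 1/accum_steps makes the accumulated gradients identical to a single full-batch step, so training dynamics are preserved while peak activation memory drops by roughly the accumulation factor.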