fast-bert
CUDA out of memory when running the code from example
I tried to run the code from the example on the fast-bert page, but got an out-of-GPU-memory error:
Exception has occurred: RuntimeError CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 6.00 GiB total capacity; 4.35 GiB already allocated; 84.91 MiB free; 4.44 GiB reserved in total by PyTorch)
How can I make fast-bert use less GPU memory? Which parameters should I set?
Modify max_seq_length and batch_size_per_gpu. You can refer to https://github.com/google-research/bert
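
In fast-bert these are arguments to BertDataBunch. A minimal sketch, assuming the constructor signature shown in the fast-bert README; the paths, file names and column names are placeholders for your own data, and the lowered values are just examples to try:

```python
from fast_bert.data_cls import BertDataBunch

# Smaller max_seq_length and batch_size_per_gpu both reduce GPU memory use.
databunch = BertDataBunch(
    DATA_PATH, LABEL_PATH,          # placeholder paths to your data/labels
    tokenizer='bert-base-uncased',
    train_file='train.csv',
    val_file='val.csv',
    label_file='labels.csv',
    text_col='text',
    label_col='label',
    batch_size_per_gpu=8,           # e.g. lowered from 16
    max_seq_length=128,             # e.g. lowered from 512
    multi_gpu=False,
    multi_label=False,
    model_type='bert')
```

If that is still not enough, the learner side of fast-bert also exposes an fp16 option (mixed precision) on BertLearner.from_pretrained_model, which can further reduce memory use on a 6 GB GPU.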

Could you please share what values worked for you? I also cannot run the examples.
The only way that works for me (every time I hit the same issue) is to do a "factory reset runtime".
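
Before a full reset, it can be worth trying to release the GPU memory that PyTorch is holding in the running session. This is a generic PyTorch snippet, not a fast-bert API, and it only helps if the tensors are still reachable through objects you can delete (the variable names here are hypothetical):

```python
import gc
import torch

# `learner` and `databunch` stand in for whatever objects hold model tensors;
# deleting them drops the references so the caching allocator can free memory.
del learner, databunch
gc.collect()
torch.cuda.empty_cache()  # returns cached, unused blocks to the GPU driver
```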
