RuntimeError: CUDA out of memory
Having some issues with CUDA memory allocation. Is there a way around this, or can I just train on the CPU instead? What do I need to comment out?
RuntimeError: CUDA out of memory. Tried to allocate 14.00 MiB (GPU 0; 3.95 GiB total capacity; 2.56 GiB already allocated; 10.88 MiB free; 2.57 GiB reserved in total by PyTorch)
I'm using a GeForce GTX 1050 Mobile card, so I understand it's not exactly built for high-end processing.
Can you describe the setup? Have you tried the demo Colab notebook?
Yes, it should work on CPU.
How do I change the code to force run on CPU?
Sorry, I accidentally closed the issue. I followed the README as shown on the repo page and even decreased the number of questions generated. My card is not built for heavy processing like this, as it only has 2 GB of VRAM.
@danltw please see this line of code: https://github.com/artitw/text2text/blob/e6bc1fbd24346b470168837797346f08d88736d9/text2text/text_generator.py#L89
Currently, we attempt to use the GPU and fall back to the CPU only if CUDA is unavailable. Would you be interested in submitting a pull request that adds functionality for specifying which device to use, so that users can force CPU if necessary?
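For anyone who wants to prototype this, here is a minimal sketch of the device-selection logic such a pull request might add. The helper name `pick_device` and its parameters are hypothetical, not part of the text2text API; the CUDA-availability check is injected as an argument so the logic can be exercised without a GPU (in real code you would pass `torch.cuda.is_available()`).

```python
def pick_device(requested=None, cuda_available=False):
    """Return the device string a model should be placed on.

    requested: optional user override, e.g. "cpu" or "cuda";
               None means keep the current automatic behavior.
    cuda_available: result of torch.cuda.is_available() in real code,
                    passed in here so the function is testable anywhere.
    """
    if requested is not None:
        # User explicitly forced a device, e.g. "cpu" to avoid CUDA OOM.
        return requested
    # Current behavior: prefer GPU, fall back to CPU if CUDA is absent.
    return "cuda" if cuda_available else "cpu"
```

In the generator this could back something like `torch.device(pick_device(user_choice, torch.cuda.is_available()))`. As a stopgap without any code change, setting the environment variable `CUDA_VISIBLE_DEVICES=""` before launching Python also hides the GPU from PyTorch, which forces CPU execution.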