TornadoVM
Allocate big arrays unsupported
Describe the bug
When running Execution Plans with large allocations (e.g., arrays of >= 1 GB), the TornadoVM runtime throws a memory exception, even though the device memory is configured for large buffers:
tornado-test -V --fast --jvm="-Dtornado.device.memory=2048MB" uk.ac.manchester.tornado.unittests.multithreaded.MultiThreaded
tornado --jvm "-Xmx6g -Dtornado.recover.bailout=False -Dtornado.unittests.verbose=True -Dtornado.device.memory=2048MB" -m tornado.unittests/uk.ac.manchester.tornado.unittests.tools.TornadoTestRunner --params "uk.ac.manchester.tornado.unittests.multithreaded.MultiThreaded"
WARNING: Using incubator modules: jdk.incubator.vector
Test: class uk.ac.manchester.tornado.unittests.multithreaded.MultiThreaded
Running test: test01 ................ [FAILED]
    [REASON] Unable to allocate 1073741848 bytes of memory.
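For context, the failing request in the log is exactly one GiB plus a small remainder; the extra 24 bytes presumably correspond to TornadoVM's per-buffer metadata, though that is an assumption. A quick check in plain Java, independent of TornadoVM:

```java
public class SizeCheck {
    public static void main(String[] args) {
        long requested = 1_073_741_848L;        // bytes, from the [REASON] line above
        long oneGiB = 1L << 30;                 // 1073741824 bytes
        System.out.println(requested - oneGiB); // prints 24
    }
}
```

Note that 1073741848 bytes is well under the 2048 MB configured via `-Dtornado.device.memory`, so the failure is not caused by the configured pool size itself.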
Expected behavior
The TornadoVM runtime should either be able to allocate large buffers, or throw a more specific exception stating that the requested size is not supported. On some platforms (such as OpenCL), allocations of this size might not be possible.
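A minimal sketch of the kind of explicit up-front check proposed above, in plain Java. All names here are hypothetical and illustrative, not TornadoVM API; the point is to fail with a descriptive message (e.g., naming the limit and the flag that controls it) rather than a generic allocation error. On OpenCL in particular, a device's per-buffer limit (`CL_DEVICE_MAX_MEM_ALLOC_SIZE`) can be smaller than its total global memory, so both limits are checked:

```java
// Hypothetical guard a runtime could run before allocating a device buffer.
public class AllocationCheck {

    /** True if the request fits both the per-buffer and total-memory limits. */
    public static boolean fits(long requestedBytes, long maxAllocBytes, long deviceMemoryBytes) {
        return requestedBytes <= maxAllocBytes && requestedBytes <= deviceMemoryBytes;
    }

    /** Throws a descriptive exception instead of failing later with a generic error. */
    public static void checkAllocation(long requestedBytes, long maxAllocBytes, long deviceMemoryBytes) {
        if (requestedBytes > maxAllocBytes) {
            throw new IllegalArgumentException("Requested buffer of " + requestedBytes
                    + " bytes exceeds the device's maximum single allocation of "
                    + maxAllocBytes + " bytes");
        }
        if (requestedBytes > deviceMemoryBytes) {
            throw new IllegalArgumentException("Requested buffer of " + requestedBytes
                    + " bytes exceeds the configured device memory of "
                    + deviceMemoryBytes + " bytes (-Dtornado.device.memory)");
        }
    }

    public static void main(String[] args) {
        long deviceMemory = 2048L << 20;  // 2048 MB, as passed via -Dtornado.device.memory
        long maxAlloc = 512L << 20;       // e.g., an OpenCL device capping single buffers at 512 MB
        // The failing request from the log: fits the pool, but not the per-buffer cap.
        try {
            checkAllocation(1_073_741_848L, maxAlloc, deviceMemory);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With a check like this, the test above would fail with a message naming the actual limit that was hit, instead of the opaque "Unable to allocate ... bytes of memory".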
Computing system setup (please complete the following information):
- OS: Fedora 39, kernel 6.6.14-200.fc39.x86_64
- OpenCL and Driver versions: OpenCL 3.0, CUDA 12.3.99
- If applicable, PTX and CUDA Driver versions
- If applicable, Level Zero & SPIR-V Versions
- TornadoVM commit id: 77dfc9b9fadb1c0dfc379427694f440562c497e6