toma
Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory
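A minimal sketch of how this typically looks, assuming toma's @toma.batch decorator as documented in its README (the wrapped function receives the current batchsize as its first argument and is retried with a smaller value when a CUDA out-of-memory error is raised); the model and dataset here are placeholders:

```python
# Sketch only: assumes toma's @toma.batch decorator; model/dataset are placeholders.
import torch
from toma import toma

@toma.batch(initial_batchsize=512)
def run_inference(batchsize, model, dataset):
    # toma re-invokes this with a smaller batchsize if a CUDA OOM occurs.
    model.eval()
    with torch.no_grad():
        for i in range(0, len(dataset), batchsize):
            model(dataset[i:i + batchsize])

# Called without the batchsize argument; toma supplies and adapts it:
# run_inference(model, dataset)
```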
Right now it is not possible to exit early from any of the wrappers. For more advanced algorithms, though, this might be a requirement.
First of all, many thanks for this handy utility package! My use case is to detect the largest batch size possible for translation during inference, which is rather low. I see that...
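One way to surface the batch size that toma settles on for this inference use case, sketched under the same assumptions about the decorator (recording it from inside the wrapped function is just one possible approach, not something toma provides directly):

```python
# Sketch: record the batchsize that survives toma's OOM retries.
# Decorator and argument names follow toma's documented usage; the rest is illustrative.
import torch
from toma import toma

largest_working_batchsize = None

@toma.batch(initial_batchsize=256)
def translate_all(batchsize, model, sentences):
    global largest_working_batchsize
    with torch.no_grad():
        for i in range(0, len(sentences), batchsize):
            model(sentences[i:i + batchsize])  # placeholder translation step
    largest_working_batchsize = batchsize  # the value that fit in memory

# translate_all(model, sentences)
# print(largest_working_batchsize)
```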
idea would be to add a helper to stacktrace.py that removes all frames from the caller up to the provided callback (and if the callback is not there, it...
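A hypothetical shape for such a helper (the name, signature, and the fallback when the callback is not found are all assumptions about the suggestion, not existing toma code):

```python
# Hypothetical helper for stacktrace.py: drop stack frames from the caller
# up to a given callback. Name, signature, and fallback are assumptions.
import inspect
import types
from typing import Callable, Optional


def frames_above_callback(callback: Callable) -> Optional[types.FrameType]:
    """Return the first frame above the one executing `callback`,
    or None if `callback` is not on the current stack."""
    frame = inspect.currentframe()
    frame = frame.f_back if frame is not None else None  # skip this helper itself
    target_code = callback.__code__
    while frame is not None:
        if frame.f_code is target_code:
            # Everything from the caller up to the callback is discarded.
            return frame.f_back
        frame = frame.f_back
    return None  # callback not found; the caller could fall back to the full trace
```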