pytorch-image-models
[FEATURE] Minimum required GPU RAM for different architectures
Is your feature request related to a problem? Please describe.
Is the minimum required GPU memory for different architectures documented anywhere?
E.g., I want to know what GPU(s) I need to rent to be able to do a backward pass on ViT-g/14.
Describe the solution you'd like
If not, it would be very helpful to add a sheet that documents this data for:
- Batch size: 1
  - forward pass with `no_grad`
  - forward pass and backward pass
- Batch size: 10
- Batch size: 100
I am not familiar with distributed inference/training: is the required GPU RAM divided roughly linearly across multiple GPUs? And does the multi-GPU overhead differ between models?
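For a single configuration, these numbers can at least be probed manually with PyTorch's CUDA memory statistics. The sketch below is an assumption about how one might do it (not an existing timm utility); the model and input shape are placeholders, and it requires a CUDA device:

```python
# Sketch: probe peak GPU memory for one (model, batch size) configuration.
# Assumes a CUDA device is available; input_shape is a placeholder default.
import torch

def peak_mem_gib(model, batch_size, input_shape=(3, 224, 224), backward=True):
    """Run one pass and return peak allocated GPU memory in GiB."""
    torch.cuda.empty_cache()
    torch.cuda.reset_peak_memory_stats()
    x = torch.randn(batch_size, *input_shape, device="cuda")
    if backward:
        model(x).sum().backward()      # forward + backward pass
        model.zero_grad(set_to_none=True)
    else:
        with torch.no_grad():          # inference-only forward pass
            model(x)
    torch.cuda.synchronize()
    return torch.cuda.max_memory_allocated() / 1024**3
```

In practice you would call this with a `timm` model moved to `.cuda()`; as noted in the reply below, measured peaks vary between runs, so treat the result as an estimate rather than a hard requirement.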
Describe alternatives you've considered
The alternative is measuring these manually, but that is expensive: one first needs access to a big GPU. It also wastes everyone's time, since each person has to repeat the same measurements.
The batch size in the inference and train results tables is the ~min~ max batch size (with some reasonable step granularity) they run at, starting from 1024 for inference and 512 for train. Doing those runs is incredibly time consuming, so it's not done often.
Measuring actual GPU use is not particularly reliable; there is a lot of variability due to the way allocation and kernel benchmarking work. You really have to try a batch size and see it succeed or fail to know whether it fits.
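That try-and-see search can at least be automated. Below is a minimal sketch (not timm's actual benchmark script) that finds the largest batch size, to a given granularity, for which a caller-supplied `fits(batch_size)` probe succeeds; in practice `fits` would attempt a forward/backward step and return `False` on `torch.cuda.OutOfMemoryError`:

```python
def find_max_batch(fits, start=1024, granularity=8):
    """Largest batch size (to `granularity`) for which fits(b) is True.

    Assumes fits() is monotone: if a batch size fails, all larger ones fail.
    Returns 0 if even the smallest probed batch does not fit.
    """
    b = start
    # Halve until we find a batch size that fits (or hit the floor).
    while b >= granularity and not fits(b):
        b //= 2
    if b < granularity:
        return 0
    # Binary search between the known-good b and the known-bad 2*b
    # (capped at the starting point, which counts as fitting if b == start).
    lo, hi = b, min(b * 2, start)
    while hi - lo > granularity:
        mid = (lo + hi) // 2
        if fits(mid):
            lo = mid
        else:
            hi = mid
    # Round down to a multiple of the step granularity.
    return lo - lo % granularity
```

For example, with a probe that only "fits" up to batch 300, `find_max_batch(lambda b: b <= 300)` settles on 296 (the largest multiple of 8 it can confirm). Each probe is still a real OOM-or-not trial, so this automates the search but not the cost.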