lion-pytorch
Lion takes the same amount of VRAM as AdamW
One of the main benefits of Lion is that it keeps less optimizer state per parameter: AdamW stores two EMAs (the momentum and the RMSProp-style second moment), while Lion stores only the momentum EMA. But when I try Lion, it takes exactly the same amount of memory as AdamW.
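For reference, here is a minimal sketch (assuming the `lion-pytorch` package and a recent PyTorch; the toy model is a stand-in) that compares the size of the two optimizers' state dicts directly, separate from activations:

```python
import torch
from torch import nn
from lion_pytorch import Lion

model = nn.Sequential(nn.Linear(1024, 1024), nn.Linear(1024, 1024))

def state_bytes(optimizer):
    # Run one step so the lazily-created state buffers are allocated.
    model(torch.randn(8, 1024)).sum().backward()
    optimizer.step()
    optimizer.zero_grad()
    return sum(
        t.numel() * t.element_size()
        for state in optimizer.state.values()
        for t in state.values()
        if torch.is_tensor(t)
    )

adamw_mb = state_bytes(torch.optim.AdamW(model.parameters(), lr=1e-4)) / 1e6
lion_mb = state_bytes(Lion(model.parameters(), lr=1e-4)) / 1e6
print(f"AdamW state: {adamw_mb:.1f} MB")  # exp_avg + exp_avg_sq
print(f"Lion state:  {lion_mb:.1f} MB")   # exp_avg only, roughly half
```

If the state dict really is about half the size but nvidia-smi shows no difference, the gap is likely hidden by activation memory or by the CUDA caching allocator's reserved pool.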
Hi, what is the model size in your setting? When the model is small, I think the main memory overhead comes from the activations, so the memory saved by dropping the second moment may not be noticeable.
@xiangning-chen 178m parameters, convolutional.
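Back-of-envelope, assuming fp32 optimizer state: dropping the second-moment buffer saves 4 bytes per parameter, so at 178M parameters that's roughly 178e6 × 4 B ≈ 0.7 GB, which should be visible unless activations (or the allocator's reserved pool) dominate the total.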
Are you comparing this to AdamW8bit by chance?
No, to AdamW
In my setting, Lion takes less memory than AdamW (9.9 GB vs 10.1 GB), but Lion is slower in terms of steps/sec. Has anyone noticed the same? I'm comparing Lion with the Triton kernel against fused AdamW.
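For what it's worth, this is roughly how I time it, as a sketch rather than a rigorous benchmark (assumes a CUDA device; the model is a placeholder, and `use_triton=True` is the flag from the README):

```python
import time
import torch
from torch import nn
from lion_pytorch import Lion

model = nn.Sequential(*[nn.Linear(2048, 2048) for _ in range(8)]).cuda()
data = torch.randn(32, 2048, device="cuda")

def steps_per_sec(optimizer, n_steps=50):
    for _ in range(5):  # warmup so kernel compilation isn't timed
        model(data).sum().backward()
        optimizer.step()
        optimizer.zero_grad()
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n_steps):
        model(data).sum().backward()
        optimizer.step()
        optimizer.zero_grad()
    torch.cuda.synchronize()
    return n_steps / (time.perf_counter() - start)

print("fused AdamW :", steps_per_sec(torch.optim.AdamW(model.parameters(), lr=1e-4, fused=True)))
print("Lion triton :", steps_per_sec(Lion(model.parameters(), lr=1e-4, use_triton=True)))
```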
Did you ever solve this? I'm running into the same problem.