automatic-mixed-precision-tutorials-pytorch
Memory usage did not decrease when using AMP
Hi,
I tried your code a couple of days ago. It turned out that when I used the '--amp' parameter, memory usage did not decrease (around 10G of my 2080Ti 11G). The training time was also almost the same as without amp.
How did you get the results?
Thank you.
Hi, thanks for opening the issue. I have had a similar experience with video recognition. Memory usage may decrease when operations optimized for AMP are used; otherwise, the effect may not be visible.
For a detailed cause analysis, it would be necessary to check the code and architecture used in the experiment. Thanks!
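To illustrate the point above: under `torch.autocast`, only autocast-eligible ops (e.g. matmuls, convolutions) run in reduced precision, while the rest stay in float32, so memory and speed gains depend heavily on the architecture. Below is a minimal, hedged sketch of a standard AMP training step (the tiny `nn.Linear` model and tensor shapes are illustrative, not from the tutorial code):

```python
import torch
import torch.nn as nn

# Illustrative toy model; AMP savings depend on the real architecture's ops.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# GradScaler guards float16 gradients against underflow; it is only
# meaningful on CUDA, so it is disabled (a pass-through) on CPU.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))

x = torch.randn(8, 16, device=device)
target = torch.randn(8, 4, device=device)

for _ in range(2):
    optimizer.zero_grad()
    # Eligible ops inside autocast run in float16/bfloat16; ops without a
    # low-precision kernel stay in float32 — which is why memory and time
    # savings vary (or vanish) from model to model.
    with torch.autocast(
        device_type=device,
        dtype=torch.float16 if device == "cuda" else torch.bfloat16,
    ):
        loss = nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

If most of a model's time and memory are spent in ops that autocast keeps in float32, the `--amp` flag will change little, which matches the behavior reported above.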