Make iter persistent for AdagradW
Summary: Make `iter` persistent in the AdagradW optimizer state so that it is saved with checkpoints. This avoids losing the `iter` count when training is restarted.
Differential Revision: D74717848
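For context, a minimal sketch of the persistence pattern (hypothetical, not FBGEMM's actual implementation): when the step counter lives in the optimizer's per-parameter `state`, PyTorch's `state_dict()` / `load_state_dict()` save and restore it along with the rest of the optimizer state, so the count survives a training restart instead of resetting to zero.

```python
import torch

class AdagradWSketch(torch.optim.Optimizer):
    """Minimal sketch (hypothetical, not FBGEMM's code): keep the step
    counter inside per-parameter state so torch's state_dict() /
    load_state_dict() checkpoint and restore it automatically."""

    def __init__(self, params, lr=1e-2, eps=1e-10, weight_decay=1e-2):
        super().__init__(params, dict(lr=lr, eps=eps, weight_decay=weight_decay))

    @torch.no_grad()
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is None:
                    continue
                state = self.state[p]
                if not state:
                    # `iter` lives in optimizer state, so it is written to
                    # the checkpoint instead of being lost on restart.
                    state["iter"] = torch.zeros((), dtype=torch.int64)
                    state["sum"] = torch.zeros_like(p)
                state["iter"] += 1
                state["sum"].addcmul_(p.grad, p.grad, value=1.0)
                std = state["sum"].sqrt().add_(group["eps"])
                # Decoupled weight decay (the "W" in AdagradW).
                p.mul_(1 - group["lr"] * group["weight_decay"])
                p.addcdiv_(p.grad, std, value=-group["lr"])
```

With this layout, `optimizer.state_dict()["state"]` carries `iter` for each parameter, and `optimizer.load_state_dict(...)` restores it, which is the round trip the change above is meant to preserve.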
Deploy Preview for pytorch-fbgemm-docs ready!
| Name | Link |
|---|---|
| Latest commit | f3e56fe19e946b865e1eb5f87a34c4830f65e65f |
| Latest deploy log | https://app.netlify.com/projects/pytorch-fbgemm-docs/deploys/686ec616b82890000829ed24 |
| Deploy Preview | https://deploy-preview-4147--pytorch-fbgemm-docs.netlify.app |
This pull request was exported from Phabricator. Differential Revision: D74717848