
Added support for Cyclical HMC

Open · kakodkar opened this pull request 2 years ago · 2 comments

Added support for Cyclical HMC (https://arxiv.org/pdf/1902.03932.pdf). Changed the progress bars to tqdm progress bars for a better UI in Jupyter notebooks.
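A minimal sketch of the kind of tqdm usage meant here; the exact integration in the PR may differ, and the loop body is a stand-in:

```python
import time
from tqdm.auto import tqdm  # tqdm.auto renders widget bars in Jupyter and plain text bars elsewhere

# Stand-in for a sampling loop; each iteration would be one HMC trajectory plus accept step.
for n in tqdm(range(1000), desc='Sampling'):
    time.sleep(0.001)
```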

Would you be okay with me bumping the minimum required Python version up to 3.8?

kakodkar avatar Sep 14 '21 20:09 kakodkar

Hi @kakodkar,

Thanks for contributing! I will go through the code and check that it all works before deciding whether to merge.

One question that I would need a bit of context for: why do you want to use a cyclical step size for full HMC? I understand it for stochastic gradient HMC, where you need to anneal the step size due to the lack of a Metropolis-Hastings step, but for full HMC I would imagine that a cyclical learning rate might break some of the required behaviours of an MCMC algorithm. For example, when you adapt the step size for NUTS, you burn in to find a good step size that you then keep constant while you perform inference.
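A rough sketch of that adapt-then-freeze pattern, in the style of the repo's README; the target is a standard normal and the argument names are illustrative, so they may differ across hamiltorch versions:

```python
import torch
import hamiltorch

def log_prob(params):
    # Standard normal log-density, up to an additive constant.
    return -0.5 * params.pow(2).sum()

burn = 100  # the step size is adapted during burn-in, then held fixed for inference
params_nuts = hamiltorch.sample(log_prob_func=log_prob,
                                params_init=torch.zeros(2),
                                num_samples=burn + 400,
                                step_size=0.3,
                                num_steps_per_sample=5,
                                sampler=hamiltorch.Sampler.HMC_NUTS,
                                burn=burn,
                                desired_accept_rate=0.8)
```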

Also, it might make sense for your implementation to import numpy once at the top of samplers.py rather than within the function. Or you could put your learning rate function in util.py, which already imports numpy, if you didn't want to import it in the main samplers.py.
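For reference, the cosine schedule from the cited paper (Zhang et al. 2019, Eq. 2) could live in util.py along these lines; the function name and signature here are hypothetical, not the PR's actual code:

```python
import numpy as np  # already imported at the top of util.py

def cyclical_step_size(step_size_0, k, total_iters, n_cycles=4):
    """Cosine cyclical schedule: alpha_k = (alpha_0 / 2) * (cos(pi * mod(k, c) / c) + 1),
    with cycle length c = ceil(total_iters / n_cycles). The step size decays from
    step_size_0 towards 0 within each cycle, then jumps back up at the next cycle."""
    cycle_len = int(np.ceil(total_iters / n_cycles))
    return 0.5 * step_size_0 * (np.cos(np.pi * (k % cycle_len) / cycle_len) + 1.0)
```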

Once again, thanks for the pull request!

Best,

Adam

AdamCobb avatar Sep 15 '21 00:09 AdamCobb

Hey Adam!

I was trying out an application where I felt a cyclical step size would improve acceptance, so I implemented it. I am still working on your feedback and on some other bugs I found in my implementation, and I will let you know once the PR is ready for the next round of review.

Thanks for creating this awesome repository! Looking forward to it being available on PyPI.

Regards,

Mayank Kakodkar CS PhD Student, Purdue University

kakodkar avatar Sep 16 '21 15:09 kakodkar