tuning_playbook
A playbook for systematically maximizing the performance of deep learning models.
Hello, thank you for the great work. I'm trying to translate this playbook into Chinese to help more people. Should I create a Pull Request for it?
This is a great doc, thanks for putting together so much accumulated wisdom in one place! In the section "[Changing the batch size requires re-tuning most hyperparameters](https://github.com/google-research/tuning_playbook#changing-the-batch-size-requires-re-tuning-most-hyperparameters)" I think it...
Hello, thank you for the great work. I know of this paper, https://arxiv.org/abs/2007.01547, which benchmarks a lot of optimizers across a lot of configurations. I believe this paper will greatly benefit...
FAQs -> What are the update rules for all the popular optimization algorithms? -> Nesterov: a closing parenthesis is missing in the third equation. It's currently $$\theta_{t+1} = \theta_{t} -...
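For reference, a hedged reconstruction: assuming the standard Nesterov momentum form used in that FAQ entry, with momentum coefficient $\gamma$, learning rate $\eta$, and loss gradient $\nabla \mathcal{L}(\theta_t)$, the three equations with the parenthesis closed would read

$$v_0 = 0$$

$$v_{t+1} = \gamma v_{t} + \eta \nabla \mathcal{L}(\theta_t)$$

$$\theta_{t+1} = \theta_{t} - (\gamma v_{t+1} + \eta \nabla \mathcal{L}(\theta_t))$$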
Some links in the table of contents are broken. This pull request fixes them.
Hi, thanks for making such a comprehensive document. I created a PDF version of it. Hope this helps!
### Discussed in https://github.com/google-research/tuning_playbook/discussions/3

Originally posted by **madaan** January 19, 2023

Thanks, the playbook looks pretty cool! I am curious about:

> Normalization should be the last operation before the...
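The truncated quote appears to concern where normalization sits relative to the residual connection. Assuming it continues with the common recommendation of x + Norm(f(x)) (normalization as the last operation on the residual branch, rather than after the addition), here is a minimal sketch; `layer_norm`, `residual_block`, and the toy branch `f` are illustrative names, not code from the playbook:

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # Plain layer normalization over the feature axis (no learned scale/shift).
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def residual_block(x, f):
    # Normalization is the last operation before the residual addition:
    # x + Norm(f(x)), as opposed to Norm(x + f(x)).
    return x + layer_norm(f(x))

# Usage: a toy linear branch on a (batch, features) array.
x = np.random.randn(4, 8)
y = residual_block(x, lambda h: h @ np.random.randn(8, 8))
```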
e.g., https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.qmc.Halton.html#scipy.stats.qmc.Halton

This is likely to be better maintained than the MLCommons code.
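As a quick illustration, a minimal sketch of drawing quasi-random search points with the SciPy Halton sampler linked above; the two-dimensional search space and its bounds are hypothetical, not taken from the playbook:

```python
import numpy as np
from scipy.stats import qmc

# Halton sampler in 2 dimensions (one per hyperparameter), scrambled for
# better uniformity; fixed seed for reproducibility.
sampler = qmc.Halton(d=2, scramble=True, seed=0)
unit_points = sampler.random(n=20)  # 20 points in the unit square [0, 1)^2

# Hypothetical search space: learning rate in [1e-5, 1e-1] and weight decay
# in [1e-6, 1e-2], both sampled on a log scale.
low = np.log10([1e-5, 1e-6])
high = np.log10([1e-1, 1e-2])
trials = 10.0 ** qmc.scale(unit_points, low, high)

for lr, wd in trials:
    print(f"learning_rate={lr:.3e}  weight_decay={wd:.3e}")
```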
Fixed one major typo (Papers -> People). Added commas wherever they were missing.