
About contributing a paper related to choosing optimizer

Open minhlong94 opened this issue 2 years ago • 2 comments

Hello, thank you for the great work.

I'm aware of this paper, https://arxiv.org/abs/2007.01547, which benchmarks a large number of optimizers across many configurations. I believe it would greatly benefit readers in choosing an optimizer. Should I create a Pull Request for it?

minhlong94 avatar Jan 20 '23 17:01 minhlong94

My impression is that the set of datasets and tasks used in that paper may not be diverse enough (in terms of dataset sizes and task types) to serve as general guidance, though I am not an optimization expert.

Yura52 avatar Jan 20 '23 17:01 Yura52

Now we have this paper, which covers a diverse enough set of dataset × model combinations and tests popular optimizers with well-defined search spaces, ranking them by performance profile scores: https://arxiv.org/abs/2306.07179 👍

sourabh2k15 avatar Oct 09 '23 05:10 sourabh2k15
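As a side note on the ranking method mentioned above: "performance profile scores" follow the Dolan–Moré idea of measuring, for each optimizer, the fraction of workloads it solves within some factor of the best optimizer on that workload. A minimal sketch (the timing numbers below are made up purely for illustration; see the linked paper for the actual benchmark protocol):

```python
import numpy as np

# Rows = workloads (dataset x model combinations), columns = optimizers.
# Entries = time-to-target (lower is better). Made-up illustrative numbers.
times = np.array([
    [1.0, 1.2, 2.0],
    [3.0, 2.5, 2.6],
    [0.8, 1.6, 0.9],
])

# Performance ratio: each optimizer's time relative to the best
# optimizer on that workload (best gets ratio 1.0).
ratios = times / times.min(axis=1, keepdims=True)

def profile(tau):
    """Performance profile rho_s(tau): the fraction of workloads on which
    each optimizer is within a factor tau of the best optimizer."""
    return (ratios <= tau).mean(axis=0)

print(profile(1.0))  # fraction of workloads where each optimizer is fastest
print(profile(1.5))
```

An optimizer whose profile is high at small tau is both frequently fastest and rarely far behind, which is what the score summarizes.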

Frank, the first author of the paper you mention, has done some follow-up work as part of the MLCommons Algorithms Working Group that we co-chair. I think at this point it would be more appropriate to create a PR linking to https://arxiv.org/abs/2306.07179. The crowded valley paper isn't what I would be looking at these days to select update rules/optimizers.

If you are interested in creating such a pull request, I could review it. If not, we will probably eventually update the playbook to connect more to recent research on benchmarking training algorithms through AlgoPerf (https://github.com/mlcommons/algorithmic-efficiency).

georgedahl avatar Jun 05 '24 22:06 georgedahl