
Benchmark functions to experiment on

eriknw opened this issue on Apr 06 '17 · 4 comments

We should have a collection of benchmark functions to run pattern search on. It would also be great to have tools and Jupyter notebooks so users can easily try things out, tweak parameters, and visually see what's going on.

So, what benchmark functions should we include, and what tooling would be nice to have?

I think the following benchmark functions would be nice to have: http://www.ntu.edu.sg/home/EPNSugan/index_files/CEC2013/Definitions%20of%20%20CEC%2013%20benchmark%20suite%200117.pdf
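For illustration, here is a minimal sketch of a couple of standard test functions (sphere and Rastrigin, which suites like CEC 2013 use in shifted/rotated form); these are the plain unshifted definitions, not the official CEC variants:

```python
import numpy as np

def sphere(x):
    """Sphere function: convex, unimodal; global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x ** 2))

def rastrigin(x):
    """Rastrigin function: highly multimodal; global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))
```

Functions like these would simply be passed to the search as the black-box objective, however the entry point ends up being exposed.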

eriknw avatar Apr 06 '17 23:04 eriknw

SKLearn hyper-parameter searches would be nice candidates. @jcrist may be able to provide a couple of examples. For general problems you would probably want to support categories and integers.
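Roughly, a hyper-parameter search can be phrased as a plain black-box objective over continuous parameters. A sketch (using scikit-learn's SVC and cross_val_score, with the search API itself left out):

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

def svc_objective(params):
    """Black-box objective: negative mean CV accuracy for an SVC.

    `params` is a point in log10 space for (C, gamma), so the search
    operates on a reasonably scaled continuous domain.
    """
    log_C, log_gamma = params
    model = SVC(C=10.0 ** log_C, gamma=10.0 ** log_gamma)
    return -cross_val_score(model, X, y, cv=3).mean()
```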

mrocklin avatar Apr 07 '17 11:04 mrocklin

I agree that hyper-parameter searches are an important use case (dask/dask-searchcv#32 actually prompted me to finally start this project) that we should test and show off.

> For general problems you would probably want to support categories and integers.

We now support integers (see #11). I'm not sure how we should support categoricals. If you have any suggestions for how to do so, please share them in #7.
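Purely as an illustration of the integer route (not a design proposal, and the names here are made up): a categorical could be encoded as an integer index and decoded inside the objective, e.g.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

KERNELS = ["linear", "rbf", "poly"]  # hypothetical categorical levels
X, y = load_iris(return_X_y=True)

def objective(params):
    """Decode an integer-coded categorical before scoring the model."""
    kernel_idx, log_C = params
    kernel = KERNELS[int(round(kernel_idx)) % len(KERNELS)]
    model = SVC(kernel=kernel, C=10.0 ** log_C)
    return -cross_val_score(model, X, y, cv=3).mean()
```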

eriknw avatar Apr 10 '17 16:04 eriknw

Here are the SciPy benchmark functions:

https://github.com/scipy/scipy/tree/master/benchmarks/benchmarks/go_benchmark_functions
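Note that this directory isn't shipped with an installed scipy, so the module would need to be copied locally. Assuming the class interface in the repo (a constructor taking the number of dimensions and a `fun(x)` method), adapting one to a plain callable might look roughly like:

```python
# Assumes go_benchmark_functions.py has been copied locally from the SciPy
# repository; it is not included in the installed scipy package.
from go_benchmark_functions import Rosenbrock  # hypothetical local copy

problem = Rosenbrock(dimensions=2)

def objective(x):
    """Adapt the SciPy benchmark class to a plain callable for the search."""
    return problem.fun(list(x))
```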

eriknw avatar Apr 19 '17 14:04 eriknw

This may have some useful machine learning hyper-parameter benchmarks:

http://automl.chalearn.org/

eriknw avatar Apr 20 '17 20:04 eriknw