
dataset sizes for benchmarks

Open · amueller opened this issue 6 years ago · 4 comments

It would be great if you could run the benchmarks with different dataset sizes and with tall, wide, and sparse data where possible, and report where these are not supported by your solvers.
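
For concreteness, a minimal sketch of how such shape variants could be generated with scikit-learn and SciPy utilities (the sizes and density below are arbitrary illustrations, not values from this benchmark suite):

```python
import numpy as np
import scipy.sparse as sp
from sklearn.datasets import make_classification

rng = np.random.RandomState(0)

# Tall: many samples, few features.
X_tall, y_tall = make_classification(n_samples=1_000_000, n_features=20,
                                     random_state=0)

# Wide: few samples, many features.
X_wide, y_wide = make_classification(n_samples=2_000, n_features=10_000,
                                     n_informative=50, random_state=0)

# Sparse: CSR matrix with ~1% non-zero entries.
X_sparse = sp.random(100_000, 1_000, density=0.01, format="csr",
                     random_state=rng)
```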

amueller · Jul 13 '19 20:07

It would also be great to have the absolute times, not only the relative times. Some of these algorithms take 0.5 s; in that case, our input validation overhead is probably dominating the work.
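
For illustration, a minimal sketch of reporting both absolute times and the ratio (the estimators, solver choice, and data sizes here are arbitrary stand-ins, not the actual benchmark configuration):

```python
import time
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=200_000, n_features=100, random_state=0)

def time_fit(estimator, n_repeats=5):
    """Return the best absolute wall-clock fit time over several repeats."""
    times = []
    for _ in range(n_repeats):
        start = time.perf_counter()
        estimator.fit(X, y)
        times.append(time.perf_counter() - start)
    return min(times)

baseline = time_fit(Ridge())
optimized = time_fit(Ridge(solver="lsqr"))  # stand-in for an accelerated variant

# Report both absolute numbers, not just the speedup.
print(f"baseline:  {baseline:.3f} s")
print(f"optimized: {optimized:.3f} s")
print(f"speedup:   {baseline / optimized:.1f}x")
```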

amueller · Jul 13 '19 20:07

Hi @amueller,

We can definitely try both tall and wide data and report absolute timing. As for input validation, we disable it entirely here. That basically calls sklearn.set_config(assume_finite=True).
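
For reference, a minimal sketch of using that setting around a timing loop (the estimator and data sizes are arbitrary choices for illustration):

```python
import time
import numpy as np
from sklearn import config_context
from sklearn.cluster import KMeans

X = np.random.rand(100_000, 50)

# assume_finite=True skips the NaN/inf scan over the input,
# so the timing reflects the solver rather than validation overhead.
with config_context(assume_finite=True):
    start = time.perf_counter()
    KMeans(n_clusters=10, n_init=1, random_state=0).fit(X)
    print(f"fit time without finiteness checks: {time.perf_counter() - start:.3f} s")
```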

Currently, sparse inputs will always cause our patches to fall back to scikit-learn or convert the sparse matrix to a dense one.

bibikar · Jul 23 '19 20:07

@bibikar enabling assume_finite is definitely the right way to go. Still, I don't expect anything that takes 0.5 s to be optimized in sklearn. Can you run something that takes like 10 s or 1 min?

amueller · Jul 23 '19 22:07

And again, it's also a question of how you display the results. I'm much more likely to believe a 20x speedup from 0.1 s to 0.005 s than from 1 h to 3 min; if something is instantaneous, we usually don't try to optimize it much further.

amueller · Jul 23 '19 22:07

Over the last several years, dataset sizes have become more varied, and we are working on including more datasets along with the introduction of GPU support.

napetrov · May 16 '23 11:05