Philipp Probst

Results 21 comments of Philipp Probst

1.: I was thinking the same thing tonight. 2.: Runtime was not really an issue, as the datasets are not very big; for most of them the runtime was around 500...

Hi Janek, I now have some reproducible results. - I chose the datasets for the benchmark from OpenML and Kaggle together with a bachelor's student (we will probably publish this). Most...

In my [benchmark](http://philipppro.github.io/Tuning_random_forest/), it was not always better regarding the MMCE than, for example, tuning the AUC. On average it had the lowest rank, but at least for RF it sometimes...

Yes, for the ranks this is true, but the difference to, for example, the logarithmic loss is not big. It would probably be interesting to benchmark autoxgboost with different target measures. ;)
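The rank comparison mentioned above can be sketched as follows: rank the target measures per dataset by the error they lead to, then average the ranks over all datasets. This is a minimal illustration only; the error values below are made up and are not the actual benchmark results.

```python
import numpy as np

# rows = datasets, columns = target measures used for tuning
# (e.g., MMCE, AUC, log loss) -- values are purely illustrative
errors = np.array([
    [0.10, 0.12, 0.11],   # dataset 1
    [0.20, 0.18, 0.19],   # dataset 2
    [0.05, 0.06, 0.07],   # dataset 3
])

# rank per dataset: lower error -> better (smaller) rank;
# double argsort gives the rank positions, +1 makes ranks start at 1
ranks = errors.argsort(axis=1).argsort(axis=1) + 1

# average rank of each measure across all datasets
avg_ranks = ranks.mean(axis=0)
print(avg_ranks)  # the measure with the smallest average rank wins
```

A measure can have the best (lowest) average rank overall while still losing on individual datasets, which matches the observation that the differences between measures are often small.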

Yes, you are right. We convert it manually, so we get the defaults for NA values at the moment.

I am using liquidSVM with the regression learner of mlr (newest version).

I got somewhat strange results for the exploratory analysis; too much is printed... [report.pdf](https://github.com/mlr-org/shinyMlr/files/2918746/report.pdf)

Speed vs. some performance measures on some OpenML datasets with a good default setting. We now have some defaults in shinyMlr; this is a kind of recommendation.^^ Some algorithms...

I changed it manually with this: https://superuser.com/questions/656459/disabling-hybrid-boot-in-windows-8/879674#879674

I just watched your talk, very interesting. In my opinion, one of the directions that should be developed further (and which you already mentioned) is AutoML: packages for automatic tuning, automatic...