tune
Tools for tidy parameter tuning
## Feature

In situations when I'm exploring multiple preprocessors & models, it can be really frustrating to run code such as:

``` r
results <- workflow_map(
  resamples = cv_folds,
  fn = "tune_grid",
  ...
```
As far as I understand:

- Bayesian optimization is available in tidymodels/tune via the function `tune_bayes()`, and
- the `tune_bayes()` function uses `GPfit::GP_fit()` to fit a Gaussian Process...
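For reference, a minimal sketch of how `tune_bayes()` is typically invoked (the model spec, workflow, and resampling objects here are assumptions for illustration, not from the issue):

``` r
library(tidymodels)

# Hypothetical setup: penalized logistic regression on a binary outcome
spec <- logistic_reg(penalty = tune(), mixture = tune()) %>%
  set_engine("glmnet")

wf <- workflow() %>%
  add_formula(Class ~ .) %>%
  add_model(spec)

set.seed(1)
folds <- vfold_cv(two_class_dat, v = 5)  # two_class_dat ships with modeldata

res <- tune_bayes(
  wf,
  resamples = folds,
  initial   = 5,    # initial grid points fitted before the GP search starts
  iter      = 10,   # GP-guided iterations; each one refits the GP on results so far
  metrics   = metric_set(roc_auc)
)
```

At each iteration the accumulated (parameter, metric) pairs are used to fit the Gaussian Process surrogate, which proposes the next candidate to resample.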
Normally with xgboost `fit(..., verbose=1)`, I get a line for each tree and the evaluation metric. With `tune_grid(control=control_grid(verbose=T))`, I get messages for each model tried, but not the xgboost output....
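One possible workaround (an assumption about the API, not confirmed in the thread): extra `set_engine()` arguments are forwarded to `xgboost::xgb.train()`, so engine-level verbosity can be requested there, though `tune_grid()` may still capture or suppress that output while evaluating candidates:

``` r
library(tidymodels)

spec <- boost_tree(trees = 100, mtry = tune()) %>%
  # arguments after the engine name are passed on to xgboost::xgb.train();
  # verbose = 1 prints the per-iteration evaluation log when fitting directly
  set_engine("xgboost", verbose = 1) %>%
  set_mode("classification")
```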
I think these could all be identical; it would be nice to remove the duplication. Additionally, I think we could clarify a little bit, too.
When unnamed arguments are passed to `...`, the warning message is a bit 🤨

``` r
library(tidymodels)

# mistakenly forget to pass metrics argument name
fit_resamples(
  linear_reg(),
  mpg ~ .,
  ...
```
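For contrast, the intended call would name every argument so nothing falls through to `...` (a sketch; the resamples object is an assumption):

``` r
library(tidymodels)

set.seed(1)
cv_folds <- vfold_cv(mtcars, v = 5)

# naming the arguments keeps them from being swallowed by `...`
res <- fit_resamples(
  linear_reg(),
  mpg ~ .,
  resamples = cv_folds,
  metrics = metric_set(rmse)
)
```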
``` r
bt_tune_grid <- ... %>%
  tune_grid(
    resamples = comb4_set,
    grid = 20,
    control = control_grid(verbose = TRUE),
    metrics = metric_set(roc_auc)
  )
bt_bayes <- ... %>%
  tune_bayes(comb4_set, initial = bt_tune_grid, objective = "roc_auc")
```

`tune_grid` is able to finalize `mtry()` itself, but `tune_bayes` fails with

```
A | error: ℹ In...
```
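A possible workaround (an assumption based on the dials/tune API, not something confirmed in the issue) is to finalize the data-dependent `mtry()` range yourself and pass it to `tune_bayes()` via `param_info`; `bt_wf` and `train_df` are hypothetical names for the workflow and training data:

``` r
library(tidymodels)

# Finalize the parameter set so mtry()'s upper bound is resolved
# from the number of predictors before the Bayesian search begins
params <- bt_wf %>%
  extract_parameter_set_dials() %>%
  finalize(train_df)

bt_bayes <- tune_bayes(
  bt_wf,
  resamples  = comb4_set,
  initial    = bt_tune_grid,
  param_info = params,       # hands tune_bayes() the finalized ranges
  objective  = "roc_auc"
)
```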
Right now functions are split into `Functions for tuning` and `Developer functions`, but I think we can do better. 2-minute idea:

- fit many models - `tune_grid()` etc etc...
## The problem

When implementing a logistic regression with glmnet, I encounter two issues that I believe to be related. The reproducible example below showcases both issues. The issues arise...
The following code feels a little smelly. Is there a reason why we don't add these as `collect_metrics.last_fit()` and `collect_notes.last_fit()`? https://github.com/tidymodels/tune/blob/3aa70758af57a7db9197086aac3b3e38e024c012/R/collect.R#L429-L432 https://github.com/tidymodels/tune/blob/3aa70758af57a7db9197086aac3b3e38e024c012/R/collect.R#L591-L594
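If those branches were promoted to methods, the S3 registration might look roughly like this (a sketch of the idea only; the method bodies are assumptions, and the actual internals are at the linked lines):

``` r
# Hypothetical S3 methods dispatching on the `last_fit` class,
# replacing an if (inherits(x, "last_fit")) branch inside the generics
collect_metrics.last_fit <- function(x, ...) {
  x$.metrics[[1]]
}

collect_notes.last_fit <- function(x, ...) {
  x$.notes[[1]]
}
```

Dispatch then picks the right behavior from the object's class, so the shared generic no longer needs special-case logic.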
This was defined before there was an `as_tibble.metric_set()` method. It gives back the same information but in a slightly different format. We should replace it with the `as_tibble()` method.
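For illustration, `as_tibble()` on a metric set returns one row per metric (the exact column names depend on the current yardstick method, so treat them as an assumption):

``` r
library(yardstick)
library(tibble)

ms <- metric_set(rmse, rsq, mae)

# one row per metric, with columns describing each metric,
# e.g. its name, metric class, and optimization direction
as_tibble(ms)
```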