tune
Tools for tidy parameter tuning
Once we've decided on a format for `method` in https://github.com/tidymodels/workflows/issues/233, we ought to decide how to check `add_tailor(method)` against the method that can be automatically deduced from the resamples. https://github.com/tidymodels/tune/blob/10798b9e8f3b208bec803f9eb8e0bd63db41325e/R/grid_code_paths.R#L401-L404
tune currently uses the specified metric sets to determine what types of predictions are being made. For example, for a binary classification model, we might only request "class metrics" (e.g....
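For context, this behavior can be seen directly in yardstick: a metric set records the class of each metric it contains, and tune inspects those classes to decide which prediction types to request. A minimal sketch (assuming only exported yardstick functions; the `as_tibble()` view of a metric set is how the metric classes are surfaced):

``` r
library(yardstick)

# A set of only "class metrics" requires just hard class predictions;
# adding a probability metric (roc_auc) also requires class probabilities.
class_only <- metric_set(accuracy, kap)
with_probs <- metric_set(accuracy, roc_auc)

# Each metric's class ("class_metric" vs. "prob_metric") is visible here:
tibble::as_tibble(with_probs)
```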
If a post-processing parameter is marked for tuning, we should check which outcomes it changes and then cross-check against the metrics being measured. For example, if someone is optimizing...
> More than one set of outcomes were used when tuning. This should never happen. Review how the outcome is specified in your model. We should list the names of...
A revival of #479, a reprex whittled down from tidymodels/finetune#116; addressing it will also fix tidymodels/yardstick#514. When using socket cluster parallelism (notably, this is not an issue with forking), workers can't find...
I had a conversation at conf with someone who mentioned an issue I’ve had. When you have a large data set or a workflow set with many different workflows, the...
## Feature request: Add full support for LOOCV

Leave-one-out cross-validation is implemented in `loo_cv`, but can't be used with the rest of the `tidymodels` framework, e.g. for computing metrics. There...
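For reference, `rsample::loo_cv()` already produces an rset with one held-out row per resample; the gap is downstream support in tune. A minimal sketch of the current state (the `fit_resamples()` call is shown commented out because tune rejects `loo_cv` objects):

``` r
library(rsample)

# One split per row of the data: 32 resamples for mtcars.
folds <- loo_cv(mtcars)
nrow(folds)
#> [1] 32

# Tuning functions currently refuse loo_cv resamples, e.g.:
# fit_resamples(parsnip::linear_reg(), mpg ~ ., folds)
```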
I'm building a pipeline to optimize a recipe and model, both with tunable hyperparameters that require range finalization. I haven't found guidance online specific to this case in SO or...
``` r
library(tidymodels)

lm_mod <- linear_reg() %>% parsnip::set_engine("lm")
wflow <- workflow() %>% add_model(lm_mod)
wflow_1 <- wflow %>% add_variables(outcomes = "mpg", predictors = c(wt))
outcome_names(wflow_1)
#> Error in UseMethod("outcome_names"): no applicable method for 'outcome_names' applied to an...
```