mlr3fselect
                        Feature selection package of the mlr3 ecosystem.
Package website: release | dev
mlr3fselect is the feature selection package of the mlr3 ecosystem. It selects the optimal feature set for any mlr3 learner. The package works with several optimization algorithms e.g. Random Search, Recursive Feature Elimination, and Genetic Search. Moreover, it can automatically optimize learners and estimate the performance of optimized feature sets with nested resampling. The package is built on the optimization framework bbotk.
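As a sketch of the automated workflow mentioned above (assuming the `auto_fselector()` shortcut and the `extract_inner_fselect_results()` helper from mlr3fselect; `term_evals` is kept small here for brevity):

```r
library("mlr3verse")

# Wrap the learner in an AutoFSelector, which runs the feature
# selection internally whenever the learner is trained.
at = auto_fselector(
  fselector = fs("random_search"),
  learner = lrn("classif.svm", type = "C-classification"),
  resampling = rsmp("cv", folds = 3),
  measure = msr("classif.ce"),
  term_evals = 5
)

# Nested resampling: the outer loop estimates the performance of the
# whole "feature selection + model" pipeline on unseen data.
rr = resample(tsk("spam"), at, rsmp("cv", folds = 3))
rr$aggregate(msr("classif.ce"))

# Inspect the feature sets selected in the inner loops.
extract_inner_fselect_results(rr)
```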
Resources
There are several sections on feature selection in the mlr3book.
- Getting started with wrapper feature selection.
- Do a sequential forward selection on the Palmer Penguins data set.
- Optimize multiple performance measures.
- Estimate model performance with nested resampling.
The gallery features a collection of case studies and demos about optimization.
- Utilize the built-in feature importance of models with Recursive Feature Elimination.
- Run a feature selection with Shadow Variable Search.
- Feature Selection on the Titanic data set.
The cheatsheet summarizes the most important functions of mlr3fselect.
Installation
Install the latest release from CRAN:
install.packages("mlr3fselect")
Install the development version from GitHub:
remotes::install_github("mlr-org/mlr3fselect")
Example
We run a feature selection for a support vector machine on the Spam data set.
library("mlr3verse")
tsk("spam")
## <TaskClassif:spam> (4601 x 58): HP Spam Detection
## * Target: type
## * Properties: twoclass
## * Features (57):
##   - dbl (57): address, addresses, all, business, capitalAve, capitalLong, capitalTotal,
##     charDollar, charExclamation, charHash, charRoundbracket, charSemicolon,
##     charSquarebracket, conference, credit, cs, data, direct, edu, email, font, free,
##     george, hp, hpl, internet, lab, labs, mail, make, meeting, money, num000, num1999,
##     num3d, num415, num650, num85, num857, order, original, our, over, parts, people, pm,
##     project, re, receive, remove, report, table, technology, telnet, will, you, your
We construct a feature selection instance with the fsi() function. The instance describes the optimization problem.
instance = fsi(
  task = tsk("spam"),
  learner = lrn("classif.svm", type = "C-classification"),
  resampling = rsmp("cv", folds = 3),
  measures = msr("classif.ce"),
  terminator = trm("evals", n_evals = 20)
)
instance
## <FSelectInstanceSingleCrit>
## * State:  Not optimized
## * Objective: <ObjectiveFSelect:classif.svm_on_spam>
## * Terminator: <TerminatorEvals>
We select a simple random search as the optimization algorithm.
fselector = fs("random_search", batch_size = 5)
fselector
## <FSelectorRandomSearch>: Random Search
## * Parameters: batch_size=5
## * Properties: single-crit, multi-crit
## * Packages: mlr3fselect
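Random search is only one of the available algorithms. Other optimizers can be listed and constructed via the `mlr_fselectors` dictionary (a sketch; `"sequential"` and `"rfe"` are the keys for sequential selection and recursive feature elimination):

```r
library("mlr3verse")

# List all feature selection algorithms registered in mlr3fselect.
as.data.table(mlr_fselectors)

# Construct alternatives by their key, e.g. sequential forward
# selection or recursive feature elimination.
fs("sequential")
fs("rfe")
```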
To start the feature selection, we simply pass the instance to the fselector.
fselector$optimize(instance)
The fselector writes the best feature set to the instance.
instance$result_feature_set
##  [1] "address"           "addresses"         "all"               "business"         
##  [5] "capitalAve"        "capitalLong"       "capitalTotal"      "charDollar"       
##  [9] "charExclamation"   "charHash"          "charRoundbracket"  "charSemicolon"    
## [13] "charSquarebracket" "conference"        "credit"            "cs"               
## [17] "data"              "direct"            "edu"               "email"            
## [21] "font"              "free"              "george"            "hp"               
## [25] "internet"          "lab"               "labs"              "mail"             
## [29] "make"              "meeting"           "money"             "num000"           
## [33] "num1999"           "num3d"             "num415"            "num650"           
## [37] "num85"             "num857"            "order"             "our"              
## [41] "parts"             "people"            "pm"                "project"          
## [45] "re"                "receive"           "remove"            "report"           
## [49] "table"             "technology"        "telnet"            "will"             
## [53] "you"               "your"
And the corresponding measured performance.
instance$result_y
## classif.ce 
## 0.07042005
The archive contains all evaluated feature sets.
as.data.table(instance$archive)
##     address addresses   all business capitalAve capitalLong capitalTotal charDollar charExclamation
##  1:    TRUE      TRUE  TRUE     TRUE       TRUE        TRUE         TRUE       TRUE            TRUE
##  2:    TRUE      TRUE  TRUE    FALSE      FALSE        TRUE         TRUE       TRUE            TRUE
##  3:    TRUE      TRUE FALSE    FALSE       TRUE        TRUE         TRUE       TRUE            TRUE
##  4:    TRUE      TRUE  TRUE     TRUE       TRUE        TRUE         TRUE       TRUE            TRUE
##  5:   FALSE     FALSE FALSE    FALSE      FALSE       FALSE        FALSE       TRUE           FALSE
## ---                                                                                                
## 16:   FALSE     FALSE FALSE    FALSE      FALSE       FALSE        FALSE      FALSE           FALSE
## 17:   FALSE     FALSE FALSE     TRUE       TRUE        TRUE        FALSE      FALSE            TRUE
## 18:   FALSE     FALSE  TRUE     TRUE      FALSE       FALSE        FALSE       TRUE           FALSE
## 19:    TRUE      TRUE  TRUE     TRUE      FALSE        TRUE         TRUE       TRUE            TRUE
## 20:    TRUE     FALSE  TRUE    FALSE      FALSE        TRUE        FALSE       TRUE           FALSE
## 56 variables not shown: [charHash, charRoundbracket, charSemicolon, charSquarebracket, conference, credit, cs, data, direct, edu, ...]
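The archive can be queried like any data.table, for example to rank the evaluated feature sets by performance (a sketch):

```r
library("mlr3verse")

# Rank all evaluated feature sets by classification error,
# best configuration first.
archive = as.data.table(instance$archive)
head(archive[order(classif.ce)], 3)

# The best configuration is also directly accessible.
instance$archive$best()
```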
We fit a final model with the optimized feature set to make predictions on new data.
task = tsk("spam")
learner = lrn("classif.svm", type = "C-classification")
task$select(instance$result_feature_set)
learner$train(task)
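The trained learner can then predict on new observations, e.g. (a sketch that reuses the task's own data as a stand-in for new data):

```r
# Predict on (here: the training) data; in practice, pass unseen data.
prediction = learner$predict(task)
prediction$score(msr("classif.ce"))

# For genuinely new observations in a data.frame, use predict_newdata():
# learner$predict_newdata(newdata)
```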