Large scale benchmark
It would be great to use something like this: https://github.com/EpistasisLab/penn-ml-benchmarks to get a comprehensive view of memory usage, time-to-transform, and end-model accuracy for all encoders.
It'd probably take a very long time to compute, but could be re-run once per release or something like that.
This repo might be a useful resource to pull code from. We've been running sklearn benchmarks over there and published the results on sklearn classifiers in this paper. You can find the code for the preprocessor benchmark that I've been running with sklearn preprocessors here.
@rhiever: PMLB is awesome! However, do you/can you provide datasets with unprocessed categorical attributes? When I looked at the repository, all categorical attributes were already encoded with one-hot or ordinal encoding.
None of the datasets in PMLB have had a one-hot encoding applied. All datasets have had a LabelEncoder applied to columns with non-numeric values.
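For reference, the preprocessing described above amounts to mapping each non-numeric column to integer codes rather than expanding it into indicator columns. A minimal sketch with a made-up column:

```python
# Minimal sketch of the PMLB-style preprocessing described above: LabelEncoder
# maps strings to integer codes in a single column, with no one-hot expansion.
import pandas as pd
from sklearn.preprocessing import LabelEncoder

raw = pd.DataFrame({'color': ['red', 'green', 'red', 'blue']})  # made-up column
codes = LabelEncoder().fit_transform(raw['color'])
print(codes)  # [2 1 2 0] -- classes are ordered alphabetically
```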
I wrote a draft of the benchmark and it is at:
~~https://github.com/janmotl/categorical-encoding/tree/binary/examples/benchmarking_large~~
Edit: it is now in the master branch under `examples/benchmarking_large`.
What it does: It takes 65 datasets and applies different encoders and classifiers on them. The benchmark then returns a csv file with training and testing accuracies (together with other metadata).
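For orientation, the overall shape of such a benchmark is a loop over encoders and classifiers with a cross-validated pipeline per combination. A hedged sketch with toy data standing in for the 65 datasets (this is not the benchmark's actual code):

```python
# Hedged sketch of the benchmark's overall shape, not its actual code:
# loop over encoders x classifiers, cross-validate a Pipeline, and dump
# train/test scores plus timings into a csv.
import numpy as np
import pandas as pd
import category_encoders as ce
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import Pipeline

# Toy stand-in for one of the 65 datasets.
rng = np.random.RandomState(0)
X = pd.DataFrame({'cat': rng.choice(list('abcd'), 500),
                  'num': rng.randn(500)})
y = (X['cat'].isin(['a', 'b']) ^ (X['num'] > 0)).astype(int)

rows = []
for encoder in [ce.OneHotEncoder(), ce.OrdinalEncoder(), ce.BinaryEncoder()]:
    for model in [LogisticRegression()]:
        pipe = Pipeline([('encoder', encoder), ('model', model)])
        scores = cross_validate(pipe, X, y, scoring='roc_auc',
                                return_train_score=True)
        rows.append({'encoder': type(encoder).__name__,
                     'model': type(model).__name__,
                     'train_auc': scores['train_score'].mean(),
                     'test_auc': scores['test_score'].mean(),
                     'fit_time': scores['fit_time'].mean()})

pd.DataFrame(rows).to_csv('result.csv', index=False)
```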
Some feedback?
@janmotl this is cool, would it be possible to add time-to-train and peak overall memory usage to the output from the benchmark?
@wdm0006 I added memory consumption of the encoders. The code utilizes `memory_profiler`. However, I am not overly happy with the deployment of `memory_profiler` because it heavily impacts the runtime and, in my environment, it also breaks debug mode and parallelism.
Time-to-train of the whole pipeline is logged as `fit_time`. Time-to-train of the encoder alone is logged as `fit_encoder_time`.
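For context, a minimal sketch of how the fit time and peak memory of an encoder can be measured with `memory_profiler` (toy data; this is an illustration, not the benchmark's logging code):

```python
# Illustration only, not the benchmark's logging code: time an encoder's fit
# and sample its peak memory with memory_profiler.memory_usage.
import time
import numpy as np
import pandas as pd
import category_encoders as ce
from memory_profiler import memory_usage

rng = np.random.RandomState(0)
X = pd.DataFrame({'cat': rng.choice(list('abcdefgh'), 100000)})  # toy data
y = pd.Series(rng.randint(0, 2, 100000))

encoder = ce.TargetEncoder()

start = time.time()
# memory_usage runs the callable while sampling RSS and reports the peak.
peak = memory_usage((encoder.fit, (X, y), {}), max_usage=True)
fit_encoder_time = time.time() - start

print('fit_encoder_time: %.2f s, peak memory: %s MiB' % (fit_encoder_time, peak))
```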
One concern with the benchmark is that no parameter tuning is performed. One finding from our recent sklearn benchmarking paper is that the sklearn defaults are almost always bad, and parameter tuning is almost always beneficial. In terms of measuring predictive performance, it is likely that parameter tuning is important here.
Another concern with the benchmark is that it seems to use the k-fold CV score as the test score. That may not be a problem here because parameter tuning is not performed, but if parameter tuning is added, it is possible that models/preprocessors with more parameters will have more chances to achieve a high score on the dataset.
Lastly, IMO returning the training score is probably pointless. That's the score the model achieves on the training data after training on the training data, so most of the time it will be ~100%.
@rhiever I am concerned about the parameter tuning as well. However, I am more concerned about the parameters of the encoders than of the classifiers (simply because of the orientation of categorical-encoding library). My plan is to use the recommended settings of the classifiers from the referenced paper where available and only tune the parameters of the encoders. Do you have a recommended setting for the classifiers not mentioned in Table 4?
Good point. Can you recommend a solution to the issue?
Comparison of the training and testing scores can be used for assessment/illustration of overfitting: encoders like LeaveOneOutEncoder or TargetEncoder may potentially contribute to overfitting. In the worst case, the classifier may have 100% accuracy on the training data and worse-than-random accuracy on the testing data. Hence, the code logs both the training and the testing scores.
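A hedged illustration of that check, using a target-based encoder on a noisy, high-cardinality toy column (the gap between the two printed numbers is the overfitting signal being logged):

```python
# Hedged illustration of the train-vs-test comparison described above.
import numpy as np
import pandas as pd
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
# High-cardinality categorical column with a pure-noise target.
X = pd.DataFrame({'cat': rng.choice(['c%d' % i for i in range(200)], 2000)})
y = pd.Series(rng.randint(0, 2, 2000))

pipe = Pipeline([('encoder', ce.TargetEncoder()),
                 ('model', RandomForestClassifier(n_estimators=50))])
scores = cross_validate(pipe, X, y, scoring='roc_auc', return_train_score=True)

# A large gap between these two numbers indicates overfitting.
print('train AUC:', scores['train_score'].mean())
print('test  AUC:', scores['test_score'].mean())
```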
The parameters recommended in Table 4 are a fine starting point, but as we suggest in the paper, algorithm parameter tuning (even a small grid search) should always be performed for every new problem.
wrt addressing the second issue I raised, the most popular solution is to use nested k-fold CV: within each training fold, perform k-fold CV for the parameter tuning. See this example.
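A minimal sketch of that nested scheme, assuming the benchmark's encoder-plus-classifier pipeline and (as an example) TargetEncoder's `smoothing` parameter as the thing being tuned; the toy data and grid values are made up:

```python
# Sketch of nested k-fold CV: the inner grid search tunes parameters within
# each training fold, the outer loop estimates generalization performance.
import numpy as np
import pandas as pd
import category_encoders as ce
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.pipeline import Pipeline

rng = np.random.RandomState(0)
X = pd.DataFrame({'cat': rng.choice(list('abcd'), 1000),
                  'num': rng.randn(1000)})
y = (X['cat'].isin(['a', 'b']) ^ (X['num'] > 0)).astype(int)

pipe = Pipeline([('encoder', ce.TargetEncoder()),
                 ('model', LogisticRegression())])

# Encoder parameters are tuned in the inner loop; made-up grid values.
param_grid = {'encoder__smoothing': [0.5, 1.0, 10.0]}
inner = GridSearchCV(pipe, param_grid, scoring='roc_auc', cv=3)

# The outer loop scores the tuned pipeline on folds never seen during tuning.
outer_scores = cross_val_score(inner, X, y, scoring='roc_auc', cv=5)
print(outer_scores.mean())
```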
I have uploaded a csv with the results.
Brief observations:
- OneHotEncoding is, on average, the best encoder (at least based on testing AUC); see the aggregation sketch after the notes below.
- Each of the remaining tested encoders beats OneHotEncoding on at least some datasets.
Notes:
- Parameter tuning was not performed.
- Peak memory consumption was not measured.
- Benchmark runtime on my laptop is ~24 hours (the csv reports average runtimes per fold, not the sum, and there is also some overhead such as score calculation).
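For what it's worth, the "best on average" ranking can be read off the csv with a simple aggregation (assuming a local copy of the file and the `encoder`/`test_auc` columns used in the plots below):

```python
# Aggregate mean test AUC per encoder from the benchmark's result csv
# (assumes a local copy of the file and its encoder/test_auc columns).
import pandas as pd

results_df = pd.read_csv('result_2018-08-09.csv')
print(results_df.groupby('encoder')['test_auc']
                .mean()
                .sort_values(ascending=False))
```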
Here are box plots of the results grouped just by encoder. Across the board, BinaryEncoder & OneHotEncoder seem to be the top-performing encoders, although there may not be statistically significant differences there. HashingEncoder seems to be the worst on average.
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
results_df = pd.read_csv('https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv')
plt.figure(figsize=(15, 9))
sb.boxplot(data=results_df, x='encoder', y='test_auc', notch=True)
plt.grid(True, axis='y')
Likely worth digging further into this data to gain some better insights.
Here's the results grouped by encoder + classifier.
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
results_df = pd.read_csv('https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv')
plt.figure(figsize=(12, 12))
for index, clf in enumerate(results_df['model'].unique()):
    plt.subplot(3, 3, index + 1)
    plt.title(clf)
    sb.boxplot(data=results_df.loc[results_df['model'] == clf], y='encoder', x='test_auc', notch=True)
    plt.grid(True, axis='x')
    plt.ylabel('')
    if index < 6:  # only the bottom row of subplots keeps its x-axis label
        plt.xlabel('')
    if index % 3 != 0:  # only the leftmost column keeps its encoder tick labels
        plt.yticks([])
    plt.tight_layout()
    plt.xlim(0.4, 1.0)
And here's grouping the other way around.
%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd
results_df = pd.read_csv('https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv')
plt.figure(figsize=(12, 12))
for index, clf in enumerate(results_df['encoder'].unique()):
    plt.subplot(3, 3, index + 1)
    plt.title(clf)
    sb.boxplot(data=results_df.loc[results_df['encoder'] == clf], y='model', x='test_auc', notch=True)
    plt.grid(True, axis='x')
    plt.ylabel('')
    if index < 6:  # only the bottom row of subplots keeps its x-axis label
        plt.xlabel('')
    if index % 3 != 0:  # only the leftmost column keeps its model tick labels
        plt.yticks([])
    plt.tight_layout()
    plt.xlim(0.4, 1.0)
Good stuff. Thanks for doing this.
Getting a 404 when I try to see the csv of results at https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv
Updated results are now in PR #110.
Notable changes:
- Added Weight of Evidence encoder.
- Impact encoders (Target encoder, Leave One Out and Weight of Evidence) should now correctly apply the corrections on the training data. This required a complete overhaul of the benchmarking code because scikit-learn pipelines are not compatible with transformers that accept both `X` and `y`; see the sketch after this list.
- Removed datasets that contained only numerical attributes, as they were not contributing to the benchmark and were merely increasing the runtime.
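A sketch of the pattern this implies, consistent with the behaviour described above (illustrative toy data, not the benchmark's exact code): the encoder is fitted and applied outside the sklearn Pipeline, so the training data can be transformed with `y` while the test data is transformed without it.

```python
# Illustrative pattern, not the benchmark's exact code: handle an impact
# encoder outside the sklearn Pipeline so that the training data can be
# transformed with y while the test data is transformed without it.
import numpy as np
import pandas as pd
import category_encoders as ce
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
X = pd.DataFrame({'cat': rng.choice(list('abcd'), 1000)})  # toy data
y = pd.Series(rng.randint(0, 2, 1000))
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

encoder = ce.LeaveOneOutEncoder()
X_train_enc = encoder.fit_transform(X_train, y_train)  # training corrections
X_test_enc = encoder.transform(X_test)                 # no target at test time

model = LogisticRegression().fit(X_train_enc, y_train)
print(model.score(X_test_enc, y_test))
```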
Awesome @janmotl. Here's the latest performance chart. Interesting that WOE and LOO performed poorly.
Why aren't the contrast encoders included in the analysis?
Yes, LOO and WOE overfit particularly with decision tree, gradient boosting and random forest.
Unfortunately, the graphs are not directly comparable because they are based on different subsets of datasets.
Contrast encoders are not included because of issue #91.
Thanks @janmotl. It's also interesting that Target doesn't overfit.
Is it worth running all available encoders on the same subset only?
I would argue some encoders are only appropriate to ordinal or nominal features, so a blanket test like this probably doesn't really make theoretic sense, although it would be nice if it did.
I reran the benchmark on older versions of the code, and by applying the bisection method it turned out that the following code in LOO:
def fit_transform(self, X, y=None, **fit_params):
    """
    Encoders that utilize the target must make sure that the training data
    are transformed with:
        transform(X, y)
    and not with:
        transform(X)
    """
    return self.fit(X, y, **fit_params).transform(X, y)
causes a significant degradation of the testing AUC (e.g., in the case of decision trees, from ~0.9 to ~0.5). Ironically enough, these lines were added to the code to activate the leave-one-out functionality (see issue #116).
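For context, the leave-one-out behaviour that `transform(X, y)` enables on the training data can be written out by hand: each training row is encoded with the target mean of its category computed over all other rows. A toy illustration in pandas (made-up column names):

```python
# Toy, by-hand illustration of leave-one-out target encoding on training data:
# each row gets the target mean of its category over all *other* rows.
import pandas as pd

df = pd.DataFrame({'cat': ['a', 'a', 'a', 'b', 'b'],
                   'y':   [1,   0,   1,   0,   1]})

grp = df.groupby('cat')['y']
sums, counts = grp.transform('sum'), grp.transform('count')

df['cat_loo'] = (sums - df['y']) / (counts - 1)  # exclude the row's own target
print(df)
```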
> I wrote a draft of the benchmark and it is at: https://github.com/janmotl/categorical-encoding/tree/binary/examples/benchmarking_large
> What it does: It takes 65 datasets and applies different encoders and classifiers on them. The benchmark then returns a csv file with training and testing accuracies (together with other metadata).
> Some feedback?
@janmotl Nice work on the benchmarking, do you have an updated link for the description?
@eddiepyang The benchmark is now in this repository under `examples/benchmarking_large`.