
Large scale benchmark

Open wdm0006 opened this issue 7 years ago • 21 comments

It would be great to use something like this: https://github.com/EpistasisLab/penn-ml-benchmarks to get a comprehensive view of memory usage, time-to-transform, and end-model accuracy for all encoders.

It'd probably take a very long time to compute, but could be re-run once per release or something like that.
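For reference, pulling a dataset out of PMLB is a one-liner; a minimal sketch (the dataset name is just an illustrative choice):

# pip install pmlb
from pmlb import fetch_data

adult = fetch_data('adult')  # pandas DataFrame with a 'target' column
X = adult.drop(columns='target')
y = adult['target']
print(X.shape, y.nunique())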

wdm0006 avatar Dec 29 '17 00:12 wdm0006

This repo might be a useful resource to pull code from. We've been running sklearn benchmarks over there and published the results on sklearn classifiers in this paper. You can find the code for the benchmark that I've been running with sklearn preprocessors here.

rhiever avatar Dec 29 '17 01:12 rhiever

@rhiever: PMLB is awesome! However, do you/can you provide datasets with unprocessed categorical attributes? When I looked at the repository, all categorical attributes were already encoded with one-hot or ordinal encoding.

janmotl avatar Jul 24 '18 18:07 janmotl

None of the datasets in PMLB have had a one-hot encoding applied. All datasets have had a LabelEncoder applied to columns with non-numeric values.

rhiever avatar Jul 27 '18 15:07 rhiever

I wrote a draft of the benchmark and it is at: ~~https://github.com/janmotl/categorical-encoding/tree/binary/examples/benchmarking_large~~ Edit: In the master branch under examples/benchmarking_large.

What it does: It takes 65 datasets and applies different encoders and classifiers on them. The benchmark then returns a csv file with training and testing accuracies (together with other metadata).

Some feedback?
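A rough sketch of what such a benchmark loop looks like (illustrative only - the actual code lives in the linked examples/benchmarking_large directory, and load_datasets() is a hypothetical stand-in for the dataset loading):

import pandas as pd
import category_encoders as ce
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_validate

encoders = [ce.OneHotEncoder(), ce.OrdinalEncoder(), ce.BinaryEncoder()]
classifiers = [LogisticRegression(max_iter=1000), DecisionTreeClassifier()]

rows = []
for dataset_name, X, y in load_datasets():  # hypothetical loader over the datasets
    for encoder in encoders:
        for clf in classifiers:
            pipe = Pipeline([('encoder', encoder), ('clf', clf)])
            scores = cross_validate(pipe, X, y, cv=5, scoring='roc_auc',
                                    return_train_score=True)
            rows.append({'dataset': dataset_name,
                         'encoder': type(encoder).__name__,
                         'model': type(clf).__name__,
                         'train_auc': scores['train_score'].mean(),
                         'test_auc': scores['test_score'].mean(),
                         'fit_time': scores['fit_time'].mean()})

pd.DataFrame(rows).to_csv('results.csv', index=False)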

janmotl avatar Aug 01 '18 09:08 janmotl

@janmotl this is cool, would it be possible to add time-to-train and peak overall memory usage to the output from the benchmark?

wdm0006 avatar Aug 04 '18 23:08 wdm0006

@wdm0006 I added memory consumption of the encoders. The code utilizes memory_profiler. However, I am not overly happy with the deployment of memory_profiler because it heavily impacts the runtime and, in my environment, it also breaks debug mode and parallelism.

Time-to-train of the whole pipeline is logged as fit_time. Time-to-train of the encoder alone is logged as fit_encoder_time.
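A minimal sketch of how fit_encoder_time and the encoder's peak memory can be measured with memory_profiler (X_train and y_train are assumed to exist; the exact return type of memory_usage varies slightly between memory_profiler versions):

import time
from memory_profiler import memory_usage
import category_encoders as ce

encoder = ce.TargetEncoder()

# memory_usage runs encoder.fit in the current process while sampling memory,
# which is also why it adds noticeable runtime overhead.
start = time.time()
peak_mib = memory_usage((encoder.fit, (X_train, y_train)), max_usage=True)
fit_encoder_time = time.time() - start

print(f'fit_encoder_time: {fit_encoder_time:.2f} s, peak memory: {peak_mib} MiB')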

janmotl avatar Aug 07 '18 11:08 janmotl

One concern with the benchmark is that no parameter tuning is performed. One finding from our recent sklearn benchmarking paper is that the sklearn defaults are almost always bad, and parameter tuning is almost always beneficial. In terms of measuring predictive performance, it is likely that parameter tuning is important here.

Another concern with the benchmark is that it seems to use the k-fold CV score as the test score. That may not be a problem here because parameter tuning is not performed, but if parameter tuning is added, then it is possible that models/preprocessors with more parameters will have more chances to achieve a high score on the dataset.

Lastly, IMO returning the training score is probably pointless. That's the score the model achieves on the training data after training on the training data, so most of the time it will be ~100%.

rhiever avatar Aug 07 '18 18:08 rhiever

@rhiever I am concerned about the parameter tuning as well. However, I am more concerned about the parameters of the encoders than of the classifiers (simply because of the orientation of the categorical-encoding library). My plan is to use the recommended settings for the classifiers from the referenced paper where available and only tune the parameters of the encoders. Do you have a recommended setting for the classifiers not mentioned in Table 4?

Good point. Can you recommend a solution to the issue?

Comparing the training and testing scores can be used to assess/illustrate overfitting - encoders like LeaveOneOutEncoder or TargetEncoder may contribute to overfitting. In the worst case, the classifier has 100% accuracy on the training data and performs worse than random on the testing data. Hence, the code logs both training and testing scores.

janmotl avatar Aug 07 '18 19:08 janmotl

The parameters recommended in Table 4 are a fine starting point, but as we suggest in the paper, algorithm parameter tuning (even a small grid search) should always be performed for every new problem.

wrt addressing the second issue I raised, the most popular solution is to use nested k-fold CV: within each training fold, perform k-fold CV for the parameter tuning. See this example.
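A minimal sketch of that setup around an encoder + classifier pipeline, tuning only the encoder parameters as proposed above (X, y and the parameter values are illustrative; whether a target-aware encoder can sit inside a plain Pipeline is discussed further down the thread):

import category_encoders as ce
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import GridSearchCV, cross_val_score

pipe = Pipeline([('encoder', ce.TargetEncoder()),
                 ('clf', GradientBoostingClassifier())])

# Inner CV tunes the encoder parameters, outer CV gives the unbiased test estimate.
param_grid = {'encoder__smoothing': [1.0, 10.0],
              'encoder__min_samples_leaf': [1, 20]}

inner = GridSearchCV(pipe, param_grid, cv=3, scoring='roc_auc')
outer_scores = cross_val_score(inner, X, y, cv=5, scoring='roc_auc')
print(outer_scores.mean())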

rhiever avatar Aug 07 '18 23:08 rhiever

I have uploaded a csv with the results.

Brief observations:

  1. OneHotEncoding is, on average, the best encoder (at least based on testing AUC).
  2. Each of the remaining tested encoders is better than OneHotEncoding on at least some datasets.

Notes:

  1. Parameter tuning was not performed.
  2. Peak memory consumption was not measured.
  3. Benchmark runtime on my laptop is ~24 hours (the csv reports average runtimes per fold, not the sum, and there is also some overhead such as score calculation).
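For a quick look, the csv can be summarized per encoder with pandas (a sketch; the 'encoder' and 'test_auc' column names match the plotting snippets below, the file path is illustrative):

import pandas as pd

results_df = pd.read_csv('result_2018-08-09.csv')

# Mean/median/std of test AUC per encoder, best mean first.
summary = (results_df.groupby('encoder')['test_auc']
           .agg(['mean', 'median', 'std'])
           .sort_values('mean', ascending=False))
print(summary)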

janmotl avatar Aug 09 '18 12:08 janmotl

Here are box plots of the results grouped just by encoder. Across the board, BinaryEncoder & OneHotEncoder seem to be the top-performing encoders, although there may not be statistically significant differences there. HashingEncoder seems to be the worst on average.

%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd

# Load the published benchmark results.
results_df = pd.read_csv('https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv')

# Notched box plot of test AUC per encoder, pooled over all datasets and classifiers.
plt.figure(figsize=(15, 9))
sb.boxplot(data=results_df, x='encoder', y='test_auc', notch=True)
plt.grid(True, axis='y')

[figure: encoder-boxplot]

Likely worth digging further into this data to gain some better insights.

rhiever avatar Aug 24 '18 14:08 rhiever

Here are the results grouped by encoder + classifier.

%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd

results_df = pd.read_csv('https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv')

plt.figure(figsize=(12, 12))
# One subplot per classifier; encoders on the y-axis, test AUC on the x-axis.
for index, clf in enumerate(results_df['model'].unique()):
    plt.subplot(3, 3, index + 1)
    plt.title(clf)
    sb.boxplot(data=results_df.loc[results_df['model'] == clf], y='encoder', x='test_auc', notch=True)
    plt.grid(True, axis='x')
    plt.ylabel('')
    if index < 6:  # keep the x-axis label only on the bottom row
        plt.xlabel('')
    if index % 3 != 0:  # keep the y-tick labels only on the leftmost column
        plt.yticks([])
    plt.tight_layout()
    plt.xlim(0.4, 1.0)

[figure: encoder-clf-boxplot]

rhiever avatar Aug 24 '18 14:08 rhiever

And here are the results grouped the other way around.

%matplotlib inline
import matplotlib.pyplot as plt
import seaborn as sb
import pandas as pd

results_df = pd.read_csv('https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv')

plt.figure(figsize=(12, 12))
# Same data, grouped the other way: one subplot per encoder, classifiers on the y-axis.
for index, enc in enumerate(results_df['encoder'].unique()):
    plt.subplot(3, 3, index + 1)
    plt.title(enc)
    sb.boxplot(data=results_df.loc[results_df['encoder'] == enc], y='model', x='test_auc', notch=True)
    plt.grid(True, axis='x')
    plt.ylabel('')
    if index < 6:  # keep the x-axis label only on the bottom row
        plt.xlabel('')
    if index % 3 != 0:  # keep the y-tick labels only on the leftmost column
        plt.yticks([])
    plt.tight_layout()
    plt.xlim(0.4, 1.0)

[figure: clf-encoder-boxplot]

rhiever avatar Aug 24 '18 15:08 rhiever

Good stuff. Thanks for doing this.

Getting a 404 when I try to see the csv of results at https://raw.githubusercontent.com/janmotl/categorical-encoding/binary/examples/benchmarking_large/output/result_2018-08-09.csv

discdiver avatar Aug 31 '18 01:08 discdiver

Updated results are now in PR #110 (link).

Notable changes:

  1. Added the Weight of Evidence encoder.
  2. Impact encoders (Target encoder, Leave One Out and Weight of Evidence) should now correctly apply their corrections on the training data. This required a complete overhaul of the benchmarking code, because scikit-learn pipelines are not compatible with transformers that accept both X and y (see the sketch after this list).
  3. Removed datasets that contained only numerical attributes, as they were not contributing to the benchmark and merely increased the runtime.
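Since those encoders need y during the training-time transformation, they have to be handled outside a plain scikit-learn Pipeline, roughly like this minimal sketch (X_train, X_test, y_train, y_test and the classifier choice are assumptions, not the actual benchmark code):

import category_encoders as ce
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Training fold: fit_transform with both X and y so the leave-one-out style
# correction is applied. Test fold: transform with X only (plain category statistics).
encoder = ce.LeaveOneOutEncoder()
X_train_enc = encoder.fit_transform(X_train, y_train)
X_test_enc = encoder.transform(X_test)

clf = GradientBoostingClassifier().fit(X_train_enc, y_train)
print(roc_auc_score(y_test, clf.predict_proba(X_test_enc)[:, 1]))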

janmotl avatar Sep 02 '18 09:09 janmotl

Awesome @janmotl. Here's the latest performance chart. Interesting that WOE and LOO performed poorly. [screenshot: latest performance chart]

Why aren't the contrast encoders included in the analysis?

discdiver avatar Sep 04 '18 03:09 discdiver

Yes, LOO and WOE overfit, particularly with decision trees, gradient boosting and random forests.

Unfortunately, the graphs are not directly comparable because they are based on different subsets of the datasets.

Contrast encoders are not included because of issue #91.

janmotl avatar Sep 04 '18 06:09 janmotl

Thanks @janmotl. It's interesting that Target doesn't overfit, too.

Is it worth running all available encoders on the same subset only?

I would argue some encoders are only appropriate for ordinal or nominal features, so a blanket test like this probably doesn't really make theoretical sense, although it would be nice if it did.

discdiver avatar Sep 04 '18 15:09 discdiver

I reran the benchmark on older versions of the code. By applying a bisection method, it turned out that the following code in LOO:

def fit_transform(self, X, y=None, **fit_params):
    """
    Encoders that utilize the target must make sure that the training data are transformed with:
            transform(X, y)
    and not with:
            transform(X)
    """
    return self.fit(X, y, **fit_params).transform(X, y)

causes a significant degradation of the testing AUC (e.g. in the case of decision trees, from ~0.9 to ~0.5). Ironically enough, these lines were added to the code to activate the leave-one-out functionality (see issue #116).
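For illustration, this is roughly what the leave-one-out correction does on the training data, written out by hand with pandas (the column names and values are made up for the example):

import pandas as pd

# Toy training data: one categorical column and a binary target.
df = pd.DataFrame({'city': ['a', 'a', 'a', 'b', 'b'],
                   'y':    [1,   0,   1,   0,   1]})

grp = df.groupby('city')['y']
sums, counts = grp.transform('sum'), grp.transform('count')

# transform(X) would use the plain per-category mean of the target;
# transform(X, y) excludes each row's own target from the mean of its category.
plain_mean = sums / counts
loo_mean = (sums - df['y']) / (counts - 1)  # singleton categories would need special handling
print(pd.DataFrame({'plain': plain_mean, 'loo': loo_mean}))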

janmotl avatar Sep 09 '18 16:09 janmotl

> I wrote a draft of the benchmark and it is at: https://github.com/janmotl/categorical-encoding/tree/binary/examples/benchmarking_large
>
> What it does: It takes 65 datasets and applies different encoders and classifiers on them. The benchmark then returns a csv file with training and testing accuracies (together with other metadata).
>
> Some feedback?

@janmotl Nice work on the benchmarking. Do you have an updated link for the description?

eddiepyang avatar Nov 03 '18 14:11 eddiepyang

@eddiepyang The benchmark is now in this repository under examples/benchmarking_large.

janmotl avatar Nov 03 '18 16:11 janmotl