David Holzmüller

28 comments of David Holzmüller

I used classification error in the paper, but log-loss and Brier score to assess temperature scaling in the results I was referring to. I can try to dig...

Alright, some numbers on the multi-class datasets of our meta-train benchmark (XGB-TD is XGBoost with our meta-learned default parameters):

Arithmetic mean log-loss:
- XGB-TD + temperature scaling: 0.2527
- XGB-TD...
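For reference, a minimal sketch of the kind of temperature-scaling evaluation I mean (the optimizer, the clipping constant, and the multi-class Brier definition are my choices here, not necessarily what we used):

```python
import numpy as np
from scipy.optimize import minimize_scalar
from sklearn.metrics import log_loss

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_temperature(probs_val, y_val):
    """Fit a single temperature T > 0 by minimizing validation log-loss."""
    # shifted logits recovered from probabilities are enough for the softmax
    logits = np.log(np.clip(probs_val, 1e-12, None))

    def nll(log_t):
        return log_loss(y_val, softmax(logits / np.exp(log_t)),
                        labels=np.arange(probs_val.shape[1]))

    res = minimize_scalar(nll, bounds=(-4.0, 4.0), method="bounded")
    return np.exp(res.x)

def brier_multiclass(y_true, probs):
    """Mean squared distance between predicted probabilities and one-hot labels."""
    onehot = np.eye(probs.shape[1])[y_true]
    return np.mean(np.sum((probs - onehot) ** 2, axis=1))
```

One would then rescale the test probabilities with the fitted temperature and compare log-loss and Brier score before and after.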

Thank you. I was looking for a parameter analogous to base_score in xgboost, and was hoping I could use the bias for this. But if you don't want to support...
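For context, this is what I mean by `base_score` (a toy sketch, data made up): it fixes the global bias, i.e. the constant margin-scale prediction the boosting rounds start correcting from.

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.normal(size=200) + 100.0  # targets far from zero

# Setting base_score near the target mean means the trees only have to
# model deviations from it, which can save early boosting rounds.
model = xgb.XGBRegressor(n_estimators=10, base_score=float(y.mean()))
model.fit(X, y)
```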

Here is an implementation of the robust scaling + smooth clipping from my paper (https://arxiv.org/abs/2407.04491). It could be used as a robust numerical preprocessing method for neural-network-based methods...
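Roughly, the idea looks like this per column (a minimal sketch; the quantile defaults, the constant-column fallback, and the clipping bound here are my simplifications, not the paper's exact version):

```python
import numpy as np

def robust_scale_smooth_clip(x, lower_quantile=0.25, upper_quantile=0.75, max_abs=3.0):
    """Robust-scale a 1-D array, then smoothly clip it into (-max_abs, max_abs)."""
    x = np.asarray(x, dtype=float)
    median = np.nanmedian(x)
    q_low, q_high = np.nanquantile(x, [lower_quantile, upper_quantile])
    scale = q_high - q_low
    if scale == 0.0:  # fallback for (near-)constant columns, my simplification
        scale = 1.0
    z = (x - median) / scale
    # smooth clipping: odd, monotone, ~identity near 0, bounded by +/- max_abs
    return z / np.sqrt(1.0 + (z / max_abs) ** 2)
```

Unlike a hard clip, the squashing function stays differentiable and keeps outliers ordered while bounding their influence.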

> Also, I think having a smallish example that shows when and where to use your class would be great. WDYT?

Sounds good, where would you put it?

I have now addressed everything except the example.

I removed the test that checked whether a column named `None` still had the name `None` after transformation, since it failed for polars. I assume this is not relevant? (The...
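For illustration, the behavior in question (my understanding of the libraries, not the removed test itself):

```python
import pandas as pd

s = pd.Series([1.0, 2.0], name=None)  # pandas allows an unnamed (None) series
print(s.name)  # None

# polars, in contrast, only accepts string column names, so the name `None`
# cannot survive a round-trip through a polars DataFrame.
```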

We could try it on a dataset with an MLP and see if it improves anything (compared to StandardScaler? or QuantileTransformer?). I don't know what would be the best one to...
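Something along these lines, maybe (dataset, model, and settings picked arbitrarily here):

```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import QuantileTransformer, StandardScaler

X, y = fetch_california_housing(return_X_y=True)

# Compare preprocessing choices in front of the same MLP.
for scaler in [StandardScaler(), QuantileTransformer(output_distribution="normal")]:
    pipe = make_pipeline(scaler, MLPRegressor(max_iter=300, random_state=0))
    scores = cross_val_score(pipe, X, y, cv=3, scoring="r2")
    print(type(scaler).__name__, round(scores.mean(), 4))
```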

Updates:
- renamed the class to SquashingScaler
- made it work for dataframes and not only series by using OnEachColumn with the old class
- renamed the args to lower_quantile...

@GaelVaroquaux suggested implementing SquashingScaler directly instead of via the indirection through SingleColumnSquashingScaler. I'll need to check how easy that is...