
AdaBoostStumpsSampler

Open glevv opened this issue 4 years ago • 2 comments

MC approximation of AdaBoost stump kernel #119

glevv avatar Jun 27 '21 10:06 glevv

Thank you @GLevV for this, some comments:

  • Could you include in the documentation a short description and an explanation of the difference between this and Fastfood?
  • Also, I think that on some datasets it may not be a good idea to use a scaling of 1/max(|x|), since this can vary a lot. More generally, I would personally use one of the scikit-learn scalers on the data (for instance StandardScaler; what you propose is MinMaxScaler). Maybe you could include a scale_X parameter, which can be True or False, and by default this would use StandardScaler, the most common scaler? The best preprocessing really depends on the dataset, so it should not be fixed in the algorithm.
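To make the suggestion concrete, here is a minimal sketch of what an optional scale_X flag could look like. The class and attribute names (StandardizingSampler, mean_, scale_) are illustrative only, not code from this PR; the scaling step mimics what sklearn's StandardScaler does (zero mean, unit variance per feature).

```python
import math

class StandardizingSampler:
    """Hypothetical sketch: when scale_X is True, standardize each feature
    before the kernel feature map runs; when False, pass the data through."""

    def __init__(self, scale_X=True):
        self.scale_X = scale_X

    def fit(self, X):
        if self.scale_X:
            n, d = len(X), len(X[0])
            self.mean_ = [sum(row[j] for row in X) / n for j in range(d)]
            var = [sum((row[j] - self.mean_[j]) ** 2 for row in X) / n
                   for j in range(d)]
            # Guard against zero variance, as StandardScaler does.
            self.scale_ = [math.sqrt(v) if v > 0 else 1.0 for v in var]
        return self

    def _scaled(self, X):
        if not self.scale_X:
            return X
        return [[(v - m) / s for v, m, s in zip(row, self.mean_, self.scale_)]
                for row in X]

X = [[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]]
Xs = StandardizingSampler(scale_X=True).fit(X)._scaled(X)
```

After fitting, each column of Xs has zero mean and unit variance, so the downstream feature map sees data on a comparable scale regardless of the original units.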

Otherwise LGTM, thanks.

TimotheeMathieu avatar Jul 09 '21 15:07 TimotheeMathieu

They are completely different kernels with different methods of computing them. The stump kernel was introduced in Support Vector Machinery for Infinite Ensemble Learning, but it can be hard to compute exactly, so the paper Uniform Approximation of Functions with Random Bases proposed an MC approximation (the same paper that describes the MC approximation of the RBF kernel, RBFSampler). I think StumpKernelSampler or StumpSampler would be a shorter and more consistent name (similar to RBFSampler).
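The MC idea can be sketched in a few lines, independent of the actual PR code. Each random feature is a decision stump with a uniformly drawn feature index and threshold; the inner product of two feature vectors is then a Monte Carlo estimate of the stump kernel. Function names and the threshold range [low, high] are assumptions for illustration, not the PR's API.

```python
import math
import random

def fit_stumps(n_features, n_components=2000, low=-1.0, high=1.0, seed=0):
    """Draw random decision stumps: a feature index and a threshold each."""
    rng = random.Random(seed)
    return [(rng.randrange(n_features), rng.uniform(low, high))
            for _ in range(n_components)]

def transform(X, stumps):
    """Map each sample x to z(x) with z_i(x) = sign(x[d_i] - t_i) / sqrt(D),
    so that z(x) . z(y) is a Monte Carlo estimate of the stump kernel."""
    scale = 1.0 / math.sqrt(len(stumps))
    return [[scale * (1.0 if x[d] >= t else -1.0) for d, t in stumps]
            for x in X]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# For x, y in [low, high]^d the estimate converges to the closed form
# (1/d) * sum_j (1 - 2 * |x_j - y_j| / (high - low)).
stumps = fit_stumps(n_features=2, n_components=5000)
Z = transform([[0.5, -0.5], [0.2, 0.1]], stumps)
exact = 0.5 * ((1 - 0.3) + (1 - 0.6))  # = 0.55 for this pair
```

The self-kernel z(x) . z(x) is exactly 1 by construction, and the cross term concentrates around the closed form at the usual 1/sqrt(D) Monte Carlo rate, which is why a large n_components is used above.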

As for the scaling, I think it is possible to remove it altogether and let users build their own pipelines (while clearly stating in the docs that this method requires scaling). That would be consistent with other kernel methods/approximations (RBFSampler also requires scaling to give a proper approximation) and with the original formulation in the paper.
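The "let users build their own pipelines" option amounts to composing a scaler step with the sampler step, as one would with sklearn's make_pipeline(StandardScaler(), ...). A dependency-free toy version of that composition, with illustrative step classes that are not real scikit-learn code:

```python
class MiniPipeline:
    """Toy stand-in for sklearn.pipeline.Pipeline: fits and applies each
    step in order, so scaling stays outside the sampler itself."""
    def __init__(self, steps):
        self.steps = steps

    def fit_transform(self, X):
        for step in self.steps:
            X = step.fit(X).transform(X)
        return X

class Center:
    """Illustrative scaling step: subtract the per-feature mean."""
    def fit(self, X):
        n = len(X)
        self.mean_ = [sum(r[j] for r in X) / n for j in range(len(X[0]))]
        return self

    def transform(self, X):
        return [[v - m for v, m in zip(r, self.mean_)] for r in X]

class SignFeatures:
    """Illustrative sampler step: a trivial sign-based feature map."""
    def fit(self, X):
        return self

    def transform(self, X):
        return [[1.0 if v >= 0 else -1.0 for v in r] for r in X]

out = MiniPipeline([Center(), SignFeatures()]).fit_transform([[1.0], [3.0]])
```

The design point is that each step is swappable: a user who prefers MinMaxScaler or no scaling at all just changes the steps list, rather than the sampler carrying a fixed preprocessing policy.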

glevv avatar Jul 30 '21 12:07 glevv

Closed due to inactivity

glevv avatar Jan 10 '23 09:01 glevv