Jérémie du Boisberranger
It is kind of weird to have this behavior only for the constant strategy. I can't remember the motivation behind it, and reading the original PR doesn't make it any clearer...
Let's merge and hopefully we won't forget to clean it up when we fix the inconsistency in the SimpleImputer directly :smile:
> @jeremiedbb if you still have a Windows machine handy, it would be great to collect your output as well (and under Linux too). I do, but I'm not next...
Here are the results on my Linux laptop (8 physical cores, Intel). My OpenMP implementation is `libomp`. I tried to have as little activity in the background as possible. ...
And here are the results on my Windows machine (8 physical cores, AMD). My OpenMP implementation is `vcomp`.
To me, property 2 is what people generally think sample weights are, so we should ensure that. Property 1 is property 2 with N=0. I can't say much about property...
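The repetition property discussed above (fitting with an integer `sample_weight` should be equivalent to fitting on a dataset where each sample is duplicated `weight` times; a zero weight drops the sample) can be checked with a quick sketch. This uses `Ridge` purely as an illustrative estimator, since its closed-form solution satisfies the equivalence exactly:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.rand(20, 3)
y = rng.rand(20)
# Integer weights; zeros exercise property 1 (weight 0 == sample removed)
sw = rng.randint(0, 4, size=20)

# Fit with sample weights
est_w = Ridge(alpha=1.0).fit(X, y, sample_weight=sw)

# Fit on the dataset with each sample repeated sw[i] times
X_rep = np.repeat(X, sw, axis=0)
y_rep = np.repeat(y, sw)
est_r = Ridge(alpha=1.0).fit(X_rep, y_rep)

# Both fits should yield the same coefficients
assert np.allclose(est_w.coef_, est_r.coef_)
```

This is essentially the invariance that the extended common test referenced below exercises across estimators.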
Or it means `min_samples_leaf` is not well defined and should probably be "makes sure each leaf has at least x% of the samples (or weights)". (mostly kidding but not entirely :)...
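For what it's worth, the tree estimators already expose both flavors of this constraint: a float `min_samples_leaf` is interpreted as a fraction of `n_samples`, and `min_weight_fraction_leaf` enforces a minimum fraction of the total sample weight per leaf. A short sketch:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.rand(100, 2)
y = rng.rand(100)

# Fraction-based leaf size: a float means ceil(0.05 * n_samples) = 5 samples here
tree_frac = DecisionTreeRegressor(min_samples_leaf=0.05, random_state=0).fit(X, y)

# Weight-based variant: each leaf must hold at least 5% of the total sample weight
tree_w = DecisionTreeRegressor(min_weight_fraction_leaf=0.05, random_state=0).fit(
    X, y, sample_weight=rng.rand(100)
)

# Every leaf of tree_frac indeed contains at least 5 samples
leaves = tree_frac.tree_.children_left == -1
assert (tree_frac.tree_.n_node_samples[leaves] >= 5).all()
```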
I updated the list with the results from the notebook https://gist.github.com/snath-xoc/fb28feab39403a1e66b00b5b28f1dcbf and from the extended common test in https://github.com/scikit-learn/scikit-learn/pull/29818. Note that the notebook only deals with classifiers/regressors for now. It...
We could add an option to disable convergence warnings (or warnings of a given category) in the global config and define our own `warn` function that would check the global...
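A minimal sketch of that idea, assuming a hypothetical config key `disabled_warning_categories` (the names `set_warning_filter` and the module-level `_config` dict are illustrative, not part of scikit-learn's actual `get_config`/`set_config` API):

```python
import warnings

from sklearn.exceptions import ConvergenceWarning

# Hypothetical global config; stands in for an entry in sklearn's real config.
_config = {"disabled_warning_categories": ()}


def set_warning_filter(*categories):
    """Disable warnings of the given categories globally (illustrative API)."""
    _config["disabled_warning_categories"] = tuple(categories)


def warn(message, category=UserWarning, stacklevel=2):
    """Drop-in replacement for warnings.warn that honors the global config."""
    # issubclass against an empty tuple is simply False, so nothing is filtered
    # by default.
    if issubclass(category, _config["disabled_warning_categories"]):
        return
    warnings.warn(message, category, stacklevel=stacklevel)


# Usage: silence only convergence warnings, let everything else through.
set_warning_filter(ConvergenceWarning)
warn("solver did not converge", ConvergenceWarning)  # silently dropped
```

Library code would then call this `warn` instead of `warnings.warn`, so users get a single config switch rather than having to manage `warnings.filterwarnings` themselves.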