Christian Lorentzen

394 comments by Christian Lorentzen

So currently there is no critical voice, but there is still time to raise one. Please do so. Let's see how contributors would prefer to add narwhals: - 🚀 add narwhals as...
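
As a taste of what such an integration buys, here is a minimal sketch (mine, not from the issue) of dataframe-agnostic code via narwhals; the `standardize` helper is invented for illustration:

```python
# Illustrative only: the same function accepts pandas, polars, etc.,
# because narwhals wraps the native object behind one expression API.
import narwhals as nw
import pandas as pd

def standardize(native_df):
    df = nw.from_native(native_df)  # wrap whatever dataframe the caller passed
    df = df.with_columns(
        ((nw.col("x") - nw.col("x").mean()) / nw.col("x").std()).alias("x_std")
    )
    return df.to_native()  # hand back the caller's own dataframe type

print(standardize(pd.DataFrame({"x": [1.0, 2.0, 3.0]})))
```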

I have a question about the API: wouldn't it be something completely new for users to specify a memory (RAM) limit? I would prefer to have a much simpler,...
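
To make the contrast concrete, a hypothetical sketch of the two API styles (both parameter names are invented for this sketch, not a real scikit-learn API):

```python
# Two styles for bounding resource usage. `max_memory_mb` asks users to
# reason about RAM, which is unusual; `batch_size` is a simpler, familiar knob.
from dataclasses import dataclass

@dataclass
class EstimatorWithMemoryLimit:
    max_memory_mb: int = 512      # hypothetical: cap working memory in MB

@dataclass
class EstimatorWithBatchSize:
    batch_size: int = 10_000      # hypothetical: process this many rows at a time
```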

The origin of "friedman_mse" is the paper "Greedy function approximation: A gradient boosting machine" by Jerome H. Friedman ([doi:10.1214/aos/1013203451](https://doi.org/10.1214/AOS/1013203451)), around Eq. (35). He even mentions that this is the...
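
For reference, a transcription of that equation in my own notation (not a verbatim quote from the paper):

```latex
% Friedman's improvement criterion, Eq. (35): for a split of a node into
% left/right regions with weights w_l, w_r and response means \bar{y}_l, \bar{y}_r,
i^2(R_l, R_r) = \frac{w_l \, w_r}{w_l + w_r} \left(\bar{y}_l - \bar{y}_r\right)^2
% With unit sample weights, w_l and w_r are just the sample counts N_l and N_r,
% and this expression equals the reduction in the sum of squared errors.
```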

Just read (or re-read) "Greedy Function Approximation" again, in particular Section 4.6 (and 4.5 and the reference "Additive logistic regression"). The point is that both cases use squared error, but...

> Ok, but if both use squared error, then the splitting criterion should be "squared_error" and "friedman_mse" has no reason to exist (and anyway it computes exactly the same thing...

@cakedev0 I think there is no difference between squared error and Friedman squared error: `GradientBoostingClassifier` implements first-order gradient descent in function space according to [1]. There are no weights...
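
A minimal sketch (mine, not from the thread) to check this claim empirically; with unit weights the two criteria should pick the same splits, up to floating-point tie-breaking:

```python
# Fit the same model with both split criteria and compare the predictions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)

proba = {}
for criterion in ("friedman_mse", "squared_error"):
    clf = GradientBoostingClassifier(criterion=criterion, random_state=0)
    proba[criterion] = clf.fit(X, y).predict_proba(X)

# True if the unweighted criteria indeed grow identical trees on this data.
print(np.allclose(proba["friedman_mse"], proba["squared_error"]))
```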

> Although, note that HGB does line search for loss functions that aren't differentiable: see this [function/docstring](https://github.com/scikit-learn/scikit-learn/blob/c7d040e4f23e7888125de0af52e640329c8b9a5a/sklearn/ensemble/_hist_gradient_boosting/gradient_boosting.py#L74-L90) in the code.

Hint: look at git blame 😉 The author thought about future...

To be honest, I did not like #32197 much, and I did not review it. Now that we have it, the right thing to do is to set `excluded_set`...

> You mean outside this screening function?

Yes, right after its initialization.