Christian Lorentzen
@scikit-learn/core-devs ping for a decision. I still stand by my comment https://github.com/scikit-learn/scikit-learn/issues/28574#issuecomment-1987342858, so I'm -1 (until someone can show a clear improvement).
@adrinjalali I don’t follow. The one reference I cited just gives a good & recent overview of the topic; in fact, it advertises post-hoc calibration. My point is that (post...
> it seems empirically this brings value on non-NN algorithms as well, if I read this thread correctly. Could you please point me to it? I have not found...
@dholzmueller is it correct that you used classification error? Could you show/produce results for log loss? Log loss (and also squared error, a.k.a. the Brier score) is a much better metric...
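A minimal sketch of why this matters (my own illustrative example, not from the thread): two probability forecasts can have identical classification error while a proper scoring rule like log loss or the Brier score clearly separates them.

```python
import numpy as np
from sklearn.metrics import accuracy_score, brier_score_loss, log_loss

# True labels and two predicted-probability vectors that give the SAME
# hard predictions at threshold 0.5, but differ in calibration quality.
y_true = np.array([0, 0, 1, 1])
p_confident = np.array([0.10, 0.20, 0.80, 0.90])  # well-separated probabilities
p_marginal = np.array([0.49, 0.49, 0.51, 0.51])   # barely crosses the threshold

# Classification error (via accuracy) cannot distinguish the two ...
acc_confident = accuracy_score(y_true, p_confident >= 0.5)
acc_marginal = accuracy_score(y_true, p_marginal >= 0.5)

# ... but strictly proper scores do (lower is better).
ll_confident = log_loss(y_true, p_confident)
ll_marginal = log_loss(y_true, p_marginal)
bs_confident = brier_score_loss(y_true, p_confident)
bs_marginal = brier_score_loss(y_true, p_marginal)
```

Here both accuracies are 1.0, while log loss and Brier score favor the better-calibrated forecast.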
First, note that the GBDT has an `init` parameter. So you can already "gradient stack" at least 2 estimator types (1 arbitrary + trees). As for the feature request itself: Very...
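A minimal sketch of the `init` mechanism mentioned above (synthetic data, my own example): the linear model provides the initial predictions, and the boosted trees then fit what it leaves unexplained.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression

X, y = make_regression(n_samples=200, n_features=5, random_state=0)

# "Gradient stacking" of 2 estimator types: the init estimator is fit
# first, and the trees boost on its residual structure.
gbdt = GradientBoostingRegressor(init=LinearRegression(), random_state=0)
gbdt.fit(X, y)

# The fitted init estimator is stored on the model.
print(type(gbdt.init_).__name__)  # LinearRegression
```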
> the one way should be the method argument. I tend to agree with @rkern on this one. It seems the cleanest way and avoids user mistakes. For me...
I'd like to add some arguments and hope they are appreciated. #### How to pass rng? For me, the worst thing would be to have 2 ways of passing rng...
This feature sounds reasonable to me. It would mean allowing for - `plot_marginal(y_obs=None, ..)` - `compute_marginal(y_obs=None, ..)` PR welcome. I would recommend starting with the latter.
How about reopening this issue, or is it solved (and if so, by which PR)?
Our `_weighted_percentile` implements the `inverted_cdf` method. It is important to note that quantiles are interval-valued quantities. Within that interval, one is free to choose any value. On top of...
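A small illustration of the interval-valuedness (my own example, using plain NumPy rather than the internal `_weighted_percentile`): for an even number of points, any value in the middle interval is a valid median, and different `method` choices of `np.quantile` pick different points of it.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])

# For q=0.5, every value in [2, 3] is a valid median of x.
# inverted_cdf picks the smallest point with CDF >= q, i.e. the
# lower end of the interval; linear interpolates to the midpoint.
q_inv = np.quantile(x, 0.5, method="inverted_cdf")  # 2.0
q_lin = np.quantile(x, 0.5, method="linear")        # 2.5
```

Both are legitimate answers; `inverted_cdf` has the advantage of always returning an actual data point, which generalizes cleanly to the weighted case.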