Christian Lorentzen
> do we want to get it to the finish line

+1
@zshu115x Could you "resolve conversation" for all review comments that are addressed?
There are still some open old review comments. Can they be marked resolved?
Now that I have opened #30155, I prefer it over this one, in particular because of the scaling of the L2 penalty `alpha` and the partial reinvention of the common tests for sample weights (sw). (And some...
About the scaling of L2 penalty terms, see #15657. There is no single "correct" way of doing it. Here in MLP, the objective currently is: `obj = 1/n * sum(loss) +...
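To make the point concrete, here is a minimal sketch of the two common conventions for combining an averaged loss with an L2 penalty. All names and the placeholder losses are illustrative, not scikit-learn internals; whether `alpha` multiplies a `1/n`-scaled or unscaled penalty changes how the effective regularization strength varies with the number of samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                                  # number of samples
alpha = 0.1                              # L2 regularization strength
w = rng.normal(size=5)                   # toy model weights
per_sample_loss = rng.uniform(size=n)    # placeholder per-sample losses

l2 = 0.5 * np.dot(w, w)                  # 1/2 * ||w||^2

# Convention A: scale the penalty by 1/n together with the data term,
# so the penalty-to-loss ratio stays fixed as n grows:
obj_scaled = per_sample_loss.mean() + alpha * l2 / n

# Convention B: add the penalty unscaled; relative to the averaged loss,
# the regularization effect is stronger and independent of n:
obj_unscaled = per_sample_loss.mean() + alpha * l2
```

The same fitted model can therefore correspond to different `alpha` values depending on the convention, which is why changing the scaling is a behavioral change and not just a refactor.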
And before we change many things here, I still prefer #30155, where the work has already been done (while this PR was kind of stale).
@zshu115x First of all, I'm really grateful for your work on this PR. Don't get the wrong impression. Then, with the approaching 1.6 release, I thought it would be good to...
## Benchmark

As of https://github.com/scikit-learn/scikit-learn/pull/28840/commits/1de85b77c11fe41b496333839bd92cd05317baf5

`X_train.shape = (10000, 75)`
`sparse.issparse(X_train) = False`
`n_classes = 12`

```python
import warnings
from pathlib import Path

import numpy as np
from scipy import sparse

from sklearn._loss import ...
```
> nitpick: I think Hessian should always be capitalized in the docstrings and comments.

That's right, but not the standard in our code base. If you wish to correct that,...
@agramfort @TomDLT @rth friendly ping in case you find time. IMO, this PR closes a gap in the linear model solvers and enables high-precision solutions with unprecedented speed (orders of...