Julien Jerphanion
> It would be pretty neat if consumers like scikit-learn could work with the array API. This would avoid a copy to NumPy. This is WIP! See https://github.com/scikit-learn/scikit-learn/issues/22352 for a general...
Hi @blackgirlbytes, Thank you for creating this space for maintainers! > - [x] What is the name of the project you maintain? scikit-learn > - [x] What is the...
I am coming back to this issue because I need a proper benchmark suite for all the `PairwiseDistancesReductions` for several pull requests. Yet, it feels like it would be better...
I have just added minimal changes to tests and documentation for the user API. I think we can test combinations of sparse and dense datasets more thoroughly and systematically. This necessitates...
The previous back-end is sometimes more performant because it manages to use the same decomposition for chunks of the squared Euclidean distance matrix, using GEMM in the dense case, namely:...
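A minimal sketch of the decomposition referred to above, assuming it is the standard expansion of the squared Euclidean distance in which the cross term is computed with a single GEMM (BLAS matrix-matrix multiplication); the data here is made up for illustration:

```python
import numpy as np

# Two dense chunks of data: 5 and 4 samples, 3 features each.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
Y = rng.standard_normal((4, 3))

# Decomposition: d(x, y)^2 = ||x||^2 - 2 x.y + ||y||^2.
# The dominant cost, -2 * X @ Y.T, is one GEMM call.
sq_dists = (
    (X ** 2).sum(axis=1)[:, np.newaxis]    # ||x||^2, shape (5, 1)
    - 2.0 * X @ Y.T                        # GEMM cross term, shape (5, 4)
    + (Y ** 2).sum(axis=1)[np.newaxis, :]  # ||y||^2, shape (1, 4)
)

# Reference computation via explicit pairwise differences.
reference = ((X[:, np.newaxis, :] - Y[np.newaxis, :, :]) ** 2).sum(axis=-1)
assert np.allclose(sq_dists, reference)
```

The GEMM route is fast but can lose precision through catastrophic cancellation when distances are small relative to the norms, which is one reason the exact pairwise formulation is kept as a reference.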
> Before merging this, what do you think of https://github.com/scikit-learn/scikit-learn/pull/23585#discussion_r968419571 ? Oh, I have already accepted this sensible suggestion in fcf15b65fc51461d43c3bb14d78f2236f06d44fb. :slightly_smiling_face:
Thank you for the reviews, @ogrisel, @thomasjpfan and @Micky774!
Can't we just have both variations (i.e. a plot of predicted values vs. actual values and a plot of residuals)? To me, each one has benefits that the other does not.
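A small sketch of what the two variations show on the same data, using hypothetical values (not taken from the PR under review):

```python
import numpy as np

# Hypothetical regression outputs for illustration.
y_true = np.array([3.0, 5.0, 7.5, 10.0])
y_pred = np.array([2.8, 5.4, 7.0, 10.5])

# Variation 1: scatter y_pred against y_true; good fits cluster
# around the diagonal y = x and keep the original scale of the target.
# Variation 2: scatter residuals against y_pred; good fits cluster
# around the horizontal zero line, which magnifies small systematic errors.
residuals = y_true - y_pred
```

Both plots display the same (actual, predicted) pairs; the residual view makes bias and heteroscedasticity easier to spot, while the predicted-vs-actual view keeps errors in the units of the target.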
I confirm that I find numpy aliases (i.e. `cnp.{float32,float64}_t` and `np.{float32,float64}`) nicer to work with as they are explicit.
Thinking about it again, having consistency and uniformity for types and fused types definitely makes sense. Yet, I am not entirely sure that directly using concrete types is the best option...