James Lamb
Thanks very much for the detailed explanation. To set the right expectation, I personally will need to do some significant reading on these topics to be able to provide a...
Thank you very much for simplifying this to reduce the code duplication @shiyu1994 !
Thanks for using LightGBM. You're right that given two `Booster` objects, one of which was created by running `.refit()` on the other, the same sample should fall into the exact...
Thanks for using LightGBM, and for taking the time to open an excellent report with a reproducible example! It really helped with the investigation. Running your reproducible example with the...
Some other minor notes... > *we regularly train millions of models (with the same hyper-parameter set) and cannot guarantee that the amount of training samples exceeds 1 for all of...
> *A few more words on our application* This is very very interesting, thanks so much for the details! And thanks for choosing LightGBM for this important application, we'll do...
Ok great, thanks for the excellent report and for sharing so much information with me! We'll leave this open to track the work I suggested in https://github.com/microsoft/LightGBM/issues/6622#issuecomment-2314217758. Any interest in...
No problem! Thanks again for the great report and interesting discussion. We'll work on a fix for this.
I am not the best first reviewer for this. As I said in my comment 2 weeks ago (https://github.com/microsoft/LightGBM/pull/6569#pullrequestreview-2216131312), I'm hoping that @shiyu1994 or @guolinke will have time to help...
> *the `tests/python_package_test/test_dask.py` seems to have previously been a no-op since the 2 models produced with and without an init scores are the same for the classifier case.* I'll investigate...