
[python-package] enable early stopping automatically in scikit-learn interface (fixes #3313)

Open · ClaudioSalvatoreArcidiacono opened this pull request 2 years ago • 20 comments

Fixes #3313

Implements a scikit-learn-like interface for early stopping.

@ClaudioSalvatoreArcidiacono please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.

@microsoft-github-policy-service agree [company="{your company}"]

Options:

  • (default - no company specified) I have sole ownership of intellectual property rights to my Submissions and I am not making Submissions in the course of work for my employer.
@microsoft-github-policy-service agree
  • (when company given) I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer. By signing below, the defined term “You” includes me and my employer.
@microsoft-github-policy-service agree company="Microsoft"

Contributor License Agreement

Contribution License Agreement

This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”), and conveys certain license rights to Microsoft Corporation and its affiliates (“Microsoft”) for Your contributions to Microsoft open source projects. This Agreement is effective as of the latest signature date below.

  1. Definitions. “Code” means the computer software code, whether in human-readable or machine-executable form, that is delivered by You to Microsoft under this Agreement. “Project” means any of the projects owned or managed by Microsoft and offered under a license approved by the Open Source Initiative (www.opensource.org). “Submit” is the act of uploading, submitting, transmitting, or distributing code or other content to any Project, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Project for the purpose of discussing and improving that Project, but excluding communication that is conspicuously marked or otherwise designated in writing by You as “Not a Submission.” “Submission” means the Code and any other copyrightable material Submitted by You, including any associated comments and documentation.
  2. Your Submission. You must agree to the terms of this Agreement before making a Submission to any Project. This Agreement covers any and all Submissions that You, now or in the future (except as described in Section 4 below), Submit to any Project.
  3. Originality of Work. You represent that each of Your Submissions is entirely Your original work. Should You wish to Submit materials that are not Your original work, You may Submit them separately to the Project if You (a) retain all copyright and license information that was in the materials as You received them, (b) in the description accompanying Your Submission, include the phrase “Submission containing materials of a third party:” followed by the names of the third party and any licenses or other restrictions of which You are aware, and (c) follow any other instructions in the Project’s written guidelines concerning Submissions.
  4. Your Employer. References to “employer” in this Agreement include Your employer or anyone else for whom You are acting in making Your Submission, e.g. as a contractor, vendor, or agent. If Your Submission is made in the course of Your work for an employer or Your employer has intellectual property rights in Your Submission by contract or applicable law, You must secure permission from Your employer to make the Submission before signing this Agreement. In that case, the term “You” in this Agreement will refer to You and the employer collectively. If You change employers in the future and desire to Submit additional Submissions for the new employer, then You agree to sign a new Agreement and secure permission from the new employer before Submitting those Submissions.
  5. Licenses.
  • Copyright License. You grant Microsoft, and those who receive the Submission directly or indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license in the Submission to reproduce, prepare derivative works of, publicly display, publicly perform, and distribute the Submission and such derivative works, and to sublicense any or all of the foregoing rights to third parties.
  • Patent License. You grant Microsoft, and those who receive the Submission directly or indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license under Your patent claims that are necessarily infringed by the Submission or the combination of the Submission with the Project to which it was Submitted to make, have made, use, offer to sell, sell and import or otherwise dispose of the Submission alone or with the Project.
  • Other Rights Reserved. Each party reserves all rights not expressly granted in this Agreement. No additional licenses or rights whatsoever (including, without limitation, any implied licenses) are granted by implication, exhaustion, estoppel or otherwise.
  6. Representations and Warranties. You represent that You are legally entitled to grant the above licenses. You represent that each of Your Submissions is entirely Your original work (except as You may have disclosed under Section 3). You represent that You have secured permission from Your employer to make the Submission in cases where Your Submission is made in the course of Your work for Your employer or Your employer has intellectual property rights in Your Submission by contract or applicable law. If You are signing this Agreement on behalf of Your employer, You represent and warrant that You have the necessary authority to bind the listed employer to the obligations contained in this Agreement. You are not expected to provide support for Your Submission, unless You choose to do so. UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING, AND EXCEPT FOR THE WARRANTIES EXPRESSLY STATED IN SECTIONS 3, 4, AND 6, THE SUBMISSION PROVIDED UNDER THIS AGREEMENT IS PROVIDED WITHOUT WARRANTY OF ANY KIND, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTY OF NONINFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.
  7. Notice to Microsoft. You agree to notify Microsoft in writing of any facts or circumstances of which You later become aware that would make Your representations in this Agreement inaccurate in any respect.
  8. Information about Submissions. You agree that contributions to Projects and information about contributions may be maintained indefinitely and disclosed publicly, including Your name and other information that You submit with Your Submission.
  9. Governing Law/Jurisdiction. This Agreement is governed by the laws of the State of Washington, and the parties consent to exclusive jurisdiction and venue in the federal courts sitting in King County, Washington, unless no federal subject matter jurisdiction exists, in which case the parties consent to exclusive jurisdiction and venue in the Superior Court of King County, Washington. The parties waive all defenses of lack of personal jurisdiction and forum non-conveniens.
  10. Entire Agreement/Assignment. This Agreement is the entire agreement between the parties, and supersedes any and all prior agreements, understandings or communications, written or oral, between the parties relating to the subject matter hereof. This Agreement may be assigned by Microsoft.

@microsoft-github-policy-service agree

Until we have a chance to review, you can improve the chances of this being merged by addressing any CI failures you see. You can safely ignore failing R-package jobs; there are some known issues with those (fixed in #5807).

jameslamb avatar Mar 26 '23 20:03 jameslamb

Hey @jameslamb, thanks for picking this up. I have started to take a look at the CI failures; I think I can solve most of them easily.

There is one check for which I need some input.

I see that one of the tests checks that the init parameters for the sklearn API and the Dask API have the same arguments (see this one). Early stopping is not available yet for the Dask API, so I do not see how we can easily iron that out. Shall I add some more exceptions to that specific test?
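For readers unfamiliar with that consistency check, the idea can be sketched with toy stand-in classes (these are illustrative, not LightGBM's actual test or estimators):

```python
# Toy sketch of a signature-parity test: compare the constructor
# signatures of a scikit-learn estimator and its Dask counterpart and
# fail as soon as they diverge. Class names here are hypothetical.
import inspect

class SklearnEstimatorSketch:
    def __init__(self, n_estimators=100, learning_rate=0.1):
        pass

class DaskEstimatorSketch:
    def __init__(self, n_estimators=100, learning_rate=0.1):
        pass

sk_params = list(inspect.signature(SklearnEstimatorSketch.__init__).parameters)
dask_params = list(inspect.signature(DaskEstimatorSketch.__init__).parameters)
assert sk_params == dask_params, f"signatures diverged: {sk_params} vs {dask_params}"
```

A test like this fails the moment a new argument is added to only one of the two interfaces, which is exactly the deviation being discussed here.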

Shall I add some more exceptions to that specific test?

@ClaudioSalvatoreArcidiacono no, please do not do that. This test you've linked to is there exactly to catch such deviations.

If you absolutely need to add arguments to the constructors of the scikit-learn estimators in this project, add the same arguments in the same order to the Dask estimators, with default values of None or something, and raise NotImplementedError in the Dask interface when any non-None values are passed to those arguments.
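As a rough illustration of that pattern (class and argument names are hypothetical, not the actual LightGBM code):

```python
# Hypothetical sketch: the Dask estimator mirrors the scikit-learn
# constructor signature, but rejects any non-None value for an argument
# it does not support yet by raising NotImplementedError.

class LGBMClassifierSketch:
    def __init__(self, n_estimators=100, early_stopping=None):
        self.n_estimators = n_estimators
        self.early_stopping = early_stopping

class DaskLGBMClassifierSketch(LGBMClassifierSketch):
    def __init__(self, n_estimators=100, early_stopping=None):
        if early_stopping is not None:
            raise NotImplementedError(
                "early_stopping is not yet supported in the Dask interface."
            )
        super().__init__(n_estimators=n_estimators, early_stopping=early_stopping)

DaskLGBMClassifierSketch()  # fine: the unsupported argument stays at None
```

This keeps the two signatures identical (so the parity test passes) while making the missing Dask support an explicit, loud failure rather than silent misbehaviour.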

For what it's worth, I am not totally convinced yet that we should take on the exact same interface as scikit-learn (especially the large added complexity of this new validation_set_split_strategy argument). #3313 was primarily about whether or not to enable early stopping by default... not explicitly about changing the signature of the lightgbm.sklearn estimators to match HistGradientBoostingClassifier.

If you are committed to getting the CI passing here, I'm willing to consider it, but I just want to set the right expectation: I expect you to also explain specifically the benefit of adding all this new complexity.

jameslamb avatar Apr 19 '23 01:04 jameslamb

By the way, it looks like you are not signing your commits with an email address tied to your GitHub account.

[Screenshot: commit history, 2023-04-18]

See https://github.com/microsoft/LightGBM/pull/5532#issuecomment-1274032708 and the comments linked from it for an explanation of what I mean by that and an explanation of how to fix it.

jameslamb avatar Apr 19 '23 01:04 jameslamb

It has been about 6 weeks since I last provided a review on this PR, and there has not been any activity on it since then.

I'm closing this, assuming it's been abandoned, to make it clear to others interested in this feature that they shouldn't be waiting on this PR.

@ClaudioSalvatoreArcidiacono if you have time in the future to work with maintainers here, we'd welcome future contributions.

jameslamb avatar May 30 '23 12:05 jameslamb

Hey @jameslamb, I have not had much time to look at this lately, but I should be more available now and in the coming weeks.

If you also have some time to help me review it, I can pick this PR up again.

Regarding your previous comments, thanks for the heads-up on signing commits; I will sign the next commits as you mentioned.

About the complexity of the proposed implementation, I am definitely open to feedback from the maintainers and willing to change the proposed implementation if necessary.

In the proposed implementation I tried to stick to what is written in the FAQ:

The appropriate splitting strategy depends on the task and domain of the data, information that a modeler has but which LightGBM as a general-purpose tool does not.

So, in the proposed implementation I tried to find a common ground between the convenience of activating early stopping using only init params and the customisability of the splitting strategy.

Just to set the right expectation: I personally will not be able to look at this for at least another week, and I think it's unlikely it'll make it into LightGBM 4.0 (#5952).

I'm sorry, but this is quite complex and will require a significant investment of time to review. Some questions I'll be looking to answer when I start reviewing this:

  • what does "Scikit-learn like interface" mean, precisely?
    • does it mean you've implemented exactly the same interface as HistGradientBoostingClassifier and HistGradientBoostingRegressor? If so, can you please link to code and docs showing that?
    • and more broadly, why does this need to change ANYTHING about the public interface of lightgbm? And couldn't the existing mechanisms used inside lightgbm.cv() be used instead of adding all this new code for splitting? e.g. https://github.com/microsoft/LightGBM/blob/9f78cceee4911dd56f4635dfd36d4482363db5aa/python-package/lightgbm/engine.py#L562-L564
  • what has to happen to make these changes consistent with how lightgbm currently works? For example...
    • what happens when early_stopping_rounds is passed to the estimator constructor via **kwargs and n_iter_no_change is set to a non-default value in the constructor... which value wins?
    • What happens if early_stopping=True is passed but valid_sets are also passed to .fit()? Does that disable the automatic splitting and just use the provided validation sets?

jameslamb avatar Jun 30 '23 14:06 jameslamb

Hey @jameslamb, no problem. Thanks for being transparent about your availability for this PR and for your feedback. I followed it, and the PR should be easier to review now.

I have removed the functionality to use a custom splitter and I made the changes much smaller.

  • what does "Scikit-learn like interface" mean, precisely?

    • does it mean you've implemented exactly the same interface as HistGradientBoostingClassifier and HistGradientBoostingRegressor? If so, can you please link to code and docs showing that?

I have now implemented the same interface as HistGradientBoostingClassifier and HistGradientBoostingRegressor.

  • and more broadly, why does this need to change ANYTHING about the public interface of lightgbm? And couldn't the existing mechanisms used inside lightgbm.cv() be used instead of adding all this new code for splitting? e.g. https://github.com/microsoft/LightGBM/blob/9f78cceee4911dd56f4635dfd36d4482363db5aa/python-package/lightgbm/engine.py#L562-L564

In this implementation I tried to reuse the splitting function used in lightgbm.cv(). Thanks for the tip.

  • what has to happen to make these changes consistent with how lightgbm currently works? For example...

    • what happens when early_stopping_rounds is passed to the estimator constructor via **kwargs and n_iter_no_change is set to a non-default value in the constructor... which value wins?

Good observation. I think we do indeed need to rename the arguments so that they are more consistent with LightGBM naming conventions.

  • What happens if early_stopping=True is passed but valid_sets are also passed to .fit()? Does that disable the automatic splitting and just use the provided validation sets?

Correct.

Hey @jameslamb, thanks a lot for your review. I am still interested in solving this issue, but I will be on holiday for the next two weeks. I will take a look at your comments once I am back.

No problem, thanks for letting us know! I'll be happy to continue working on this whenever you have time.

jameslamb avatar Aug 21 '23 15:08 jameslamb

Hey @jameslamb, thanks again for your review comments! I was aware of the aliases mechanism but did not fully understand how it worked; your comment really helped me understand it.

I have worked on your feedback and I think now the PR is in good shape.

In this implementation I tried to stick to the scikit-learn interface of HistGradientBoostingClassifier, so the parameter early_stopping is 'auto' by default.

Since we are not adding early_stopping_round to the public interface, the default value for early_stopping_round is set somewhere else in the code. I think it would be better to create a constant in the sklearn.py file where we set the default value; happy to hear alternative solutions from you.

Lastly, we should still mention somewhere in the documentation that early stopping is enabled by default in the scikit-learn interface of LightGBM, and also how to forcibly disable it. Where would you suggest adding it?
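For reference, scikit-learn's HistGradientBoostingClassifier resolves early_stopping='auto' by enabling early stopping only when the training set has more than 10,000 samples; the decision rule being mirrored here can be sketched as a simplified stand-alone function (not LightGBM code):

```python
# Simplified sketch of scikit-learn's 'auto' rule for early stopping:
# 'auto' enables it only on datasets larger than 10,000 samples, while
# an explicit True/False always wins over the heuristic.

def resolve_early_stopping(early_stopping, n_samples):
    """Resolve an early_stopping setting of True/False/'auto' to a bool."""
    if early_stopping == "auto":
        return n_samples > 10_000
    return bool(early_stopping)

print(resolve_early_stopping("auto", 500))     # False: small dataset
print(resolve_early_stopping("auto", 50_000))  # True: large dataset
print(resolve_early_stopping(False, 50_000))   # False: explicit opt-out
```

Keeping the explicit values authoritative means a user can always force the behaviour regardless of dataset size.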

Lastly, we should still mention somewhere in the documentation that early stopping is enabled by default in the scikit-learn interface of LightGBM, and also how to forcibly disable it. Where would you suggest adding it?

I think here would be appropriate:

https://github.com/microsoft/LightGBM/blob/921479b99fb5b691801e0e794f2196a94ea17d79/docs/Python-Intro.rst#L223

jameslamb avatar Sep 13 '23 14:09 jameslamb

Lastly, we should still mention somewhere in the documentation that early stopping is enabled by default in the scikit-learn interface of LightGBM, and also how to forcibly disable it. Where would you suggest adding it?

I think here would be appropriate:

https://github.com/microsoft/LightGBM/blob/921479b99fb5b691801e0e794f2196a94ea17d79/docs/Python-Intro.rst#L223

Thanks, I have added something there. Shall we also mention something here?

Hey @jameslamb, I think the PR is ready for a second look; please let me know if there are any changes you would like me to make :)

Thanks for returning to this.

@jmoralez could you take the next round of reviews on this?

jameslamb avatar Jan 30 '24 01:01 jameslamb

The original issue (#3313) requested the possibility of doing early stopping with the scikit-learn API by specifying arguments in the constructor, not automatically performing early stopping. I understand that's what HistGradientBoosting(Classifier|Regressor) do, but I think we could also consider doing what GradientBoosting(Classifier|Regressor) do, where the default is not to do it but the arguments are in the init signature to support it. Otherwise this would be a silently breaking change (I know we would list it as a breaking change, but it could confuse people who don't read the release notes).

I would like to have this clear before reviewing, since we would be doing many things behind the user's back (automatically enabling early stopping for >10,000 rows, stratifying if classification, setting the number of folds, shuffling). I would be more comfortable if it were an explicit decision (like setting early_stopping_rounds > 0 in the params, for example).
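The explicit opt-in described above could be sketched like this (a hypothetical helper, not actual LightGBM code; the alias names are illustrative):

```python
# Hypothetical sketch of an explicit opt-in: early stopping is enabled
# only when the user passes a positive round count under one of the
# recognised parameter names. The alias tuple here is illustrative.
EARLY_STOPPING_ALIASES = ("early_stopping_round", "early_stopping_rounds", "n_iter_no_change")

def early_stopping_requested(params):
    """Return True only if the user explicitly asked for early stopping."""
    return any(
        isinstance(params.get(alias), int) and params.get(alias) > 0
        for alias in EARLY_STOPPING_ALIASES
    )

print(early_stopping_requested({"learning_rate": 0.1}))         # False
print(early_stopping_requested({"early_stopping_rounds": 10}))  # True
```

With a check like this, nothing happens behind the user's back: no splitting, stratification, or shuffling is triggered unless the parameter is explicitly set.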

jmoralez avatar Feb 01 '24 20:02 jmoralez

Hey @jmoralez, thanks for your comment. I agree with you: my preference would be to have early stopping off by default and on only when it is explicitly set via an init parameter. This was also my original implementation; I changed it due to this comment from @jameslamb.

Could the two of you maybe agree on what you would like to see as the default behaviour? :)

What are your thoughts on this (enabling early stopping by default) @borchero?

jmoralez avatar Feb 02 '24 18:02 jmoralez

What are your thoughts on this (enabling early stopping by default) @borchero?

Sorry, I only saw this comment now 🫣 I wouldn't enable it by default, as (1) it is not integral to using boosted trees and (2) early stopping is not supported for all boosting strategies (I recently learnt that dart does not support it), which might cause confusion.

borchero avatar Jun 18 '24 19:06 borchero