Li Jiang
> And N=10000 (with N=10 the issue is not reproducible). > > In my opinion the issue happens on large datasets since FLAML_sample_size is not included in the best_config_per_estimator dict....
> I was expecting am1.best_loss >= am2.best_loss > > > > given that am2's warm start begins from the best of am1 and improves (or not). Do I misunderstand this? It's...
> I have looked at all lines containing starting_points in automl.py and I am > > not sure if this excerpt from automl.py > > [starting_points: A dictionary or a str...
Hi @dannycg1996 , @jmrichardson , for catboost, we always set `n_estimators` to 8192 and apply early stopping in the fit function. Early stopping could be triggered in lgbm as...
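To illustrate the idea behind the comment above, here is a minimal, library-free sketch of early stopping: train up to a large round budget (e.g. 8192) but stop once the validation loss has not improved for a number of consecutive rounds. The function name and `patience` parameter are illustrative, not FLAML's or catboost's actual implementation.

```python
def fit_with_early_stopping(val_losses, max_rounds=8192, patience=50):
    """Simulate early stopping over a stream of per-round validation losses.

    Stops when `max_rounds` is reached or when the loss has not improved
    for `patience` consecutive rounds, and returns the best round and loss.
    """
    best_loss = float("inf")
    best_round = 0
    for i, loss in enumerate(val_losses):
        if i >= max_rounds:
            break
        if loss < best_loss:
            best_loss, best_round = loss, i
        elif i - best_round >= patience:
            break  # early stop triggered: no improvement for `patience` rounds
    return best_round, best_loss


# Loss plateaus after round 2, so training stops early instead of
# running all the way to max_rounds.
print(fit_with_early_stopping([10, 9, 8, 8, 8, 8, 8], patience=3))
```

In real estimators this corresponds to passing an early-stopping option to `fit` (e.g. a patience in rounds) together with an evaluation set, so the large `n_estimators` acts as an upper bound rather than the number of rounds actually trained.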
Hi @Ganchoalhigado666 , could you please elaborate on the issue? Thanks.
Also, please use [pre-commit](https://microsoft.github.io/autogen/docs/Contribute#pre-commit) to format the code.
Thank you @dannycg1996 , @drwillcharles . For classification, we want to make sure the labels are complete in both the training and validation data, so we concat the first instance of...
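A simplified sketch of the label-completeness idea described above, assuming we only deal with the label lists (FLAML's actual logic would also move the corresponding feature rows; the function name is illustrative):

```python
def complete_labels(y_train, y_val):
    """Ensure every class appears in both splits by appending the first
    instance of any class missing from one split but present in the other."""
    y_train, y_val = list(y_train), list(y_val)
    for label in set(y_val) - set(y_train):
        y_train.append(label)  # class seen only in validation
    for label in set(y_train) - set(y_val):
        y_val.append(label)  # class seen only in training
    return y_train, y_val


# Class 2 is missing from training and class 1 from validation, so one
# instance of each is appended to the other split.
print(complete_labels([0, 1, 1], [0, 2]))
```

This matters because an estimator fit on a split that lacks a class cannot predict it, and metrics computed on a split with missing classes can be ill-defined.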
Hi @lucazav , thank you for reporting this. Currently, we simply retrieve `feature_importances_` from the model; since StackingRegressor doesn't have `feature_importances_`, we set it to None in FLAML. I...
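The fallback behavior described above can be sketched with a small helper (the function name is illustrative; this is not FLAML's actual code):

```python
def get_feature_importances(model):
    """Return the model's feature_importances_ attribute, or None when the
    model (e.g. a StackingRegressor) does not expose it."""
    return getattr(model, "feature_importances_", None)


class TreeLike:
    feature_importances_ = [0.7, 0.3]


class StackingLike:
    pass  # no feature_importances_ attribute


print(get_feature_importances(TreeLike()))     # the stored importances
print(get_feature_importances(StackingLike())) # None
```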
It should be supported now. We are closing this issue due to inactivity; please reopen if the problem persists.
Thank you @bl3e967 for reporting it. Would you like to raise a PR?