Chi Wang
A couple of thoughts:

1. We save each configuration in a log file when `log_file_name` is specified. Then the corresponding configuration can be retrained afterwards using `AutoML.retrain_from_log()`.
2. When mlflow...
The error message is https://user-images.githubusercontent.com/97145738/184356110-52f5f468-2d53-4e7d-bfea-c1943e18dc86.png

It suggests that you don't have enough RAM to build the ensemble. You can try specifying a simple `final_estimator`, e.g.,

```python
automl.fit(
    X_train,
    y_train,
    task="classification",
    ...
```
> Now training is completed but RAM error is occurred as below-
>
> 
>
> It seems ensembling is not possible with FLAML.

How large is the dataset...
> > > Now training is completed but RAM error is occurred as below-
> > >
> > > 
> > >
> > > It seems ensembling is not possible with FLAML.
>
> ...
> Yes, I started the experiment with `3600` but then ensembling was covering 96% so I increased the time budget. And I am feeding raw data into the FLAML. Do...
> I mean with a lower time budget, search was 96% and in log message it was saying to increase the time budget to complete the search as 100%. Did...
If you have enough RAM now, try removing the key `final_estimator` from `ensemble`.
Use `log_type="all"`. https://microsoft.github.io/FLAML/docs/Use-Cases/Task-Oriented-AutoML#log-the-trials
> @sonichi I tried adding more estimators with the ensemble as below-
>
> 
>
> And my score improved to 83.968 from 83.55. Is there anything I can...
> It can be reproduced with any dataset, and then pickling the transformer object (feature_transformer)
>
> ```
> hyperparams, estimator_class, X_transformed, y_transformed, feature_transformer, label_transformer = preprocess_and_suggest_hyperparams(
>     "classification", X,...
> ```