Shuhei Watanabe
Just in case, I separated my TPE implementation [here](https://github.com/nabenabe0928/tpe).
Just to clarify, we recommend that users use SMAC rather than HPBandster only because the Freiburg team doesn't have any active maintainers for BOHB; we are not saying BOHB is...
It is actually an issue we introduced in the last few PRs, so we need to look for the cause in the preprocessors.
Check:
- test/test_pipeline/components/preprocessing/test_feature_preprocessor.py::TestFeaturePreprocessors::test_pipeline_fit_include[Nystroem-classification_numerical_and_categorical]
- test/test_api/test_base_api.py::test_pipeline_get_budget[3-50-runtime-expected1-classification_categorical_only]

Check:
- test/test_api/test_base_api.py::test_pipeline_get_budget[3-50-runtime-expected1-classification_categorical_only]
- test/test_pipeline/components/preprocessing/test_feature_preprocessor.py::TestFeaturePreprocessors::test_pipeline_fit_include[Nystroem-classification_numerical_and_categorical]

Check:
- test/test_pipeline/test_tabular_regression.py::TestTabularRegression::test_pipeline_predict[regression_numerical_only]
- test/test_pipeline/components/preprocessing/test_feature_preprocessor.py::TestFeaturePreprocessors::test_pipeline_fit_include[Nystroem-classification_numerical_and_categorical]
FYI, when we use [Optuna](https://github.com/optuna/optuna) with a tiny model, we consume only around 150MB. This module is also thread safe.
```python
import optuna


def objective(trial):
    x0 = trial.suggest_uniform('x0', -10, 10)
    ...
```
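A complete, runnable version of this kind of tiny objective could look like the sketch below; the quadratic objective, the second parameter, and the study setup are illustrative assumptions, not the original snippet.

```python
import optuna


def objective(trial):
    # Tiny "model": a 2-D quadratic with its optimum at (2, -3).
    # The objective body is an assumed example, not the original one.
    x0 = trial.suggest_uniform('x0', -10, 10)
    x1 = trial.suggest_uniform('x1', -10, 10)
    return (x0 - 2) ** 2 + (x1 + 3) ** 2


if __name__ == '__main__':
    study = optuna.create_study(direction='minimize')
    # n_jobs > 1 runs trials in threads, which relies on the module being thread safe.
    study.optimize(objective, n_trials=100, n_jobs=2)
    print(study.best_params, study.best_value)
```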
I tested the memory usage for the following datasets:

| Dataset name | # of features | # of instances | Approx. Datasize [MB] |
|-|:-:|:-:|:-:|
| Covertype |...
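For reference, one way such numbers could be collected is to compare the process RSS before and after loading each dataset; this is a minimal sketch assuming psutil and scikit-learn's `fetch_covtype`, not the script that produced the table above.

```python
import psutil
from sklearn.datasets import fetch_covtype


def rss_mb() -> float:
    # Resident set size of the current process, in MB.
    return psutil.Process().memory_info().rss / 1024 ** 2


before = rss_mb()
# Covertype is one of the datasets listed above; others would be loaded analogously.
X, y = fetch_covtype(return_X_y=True)
after = rss_mb()
print(f"Approx. memory increase after loading Covertype: {after - before:.1f} MB")
```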
This is from #259 by @franchuterivera.

- [x] We should not let the datamanager actively reside in memory when we are not using it. For example, there is no need...
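As a sketch of the point above, the datamanager can be serialized to disk and loaded back only at the point of use, so the object does not stay resident in between; the path and function names below are hypothetical placeholders rather than the project's actual API.

```python
import pickle
from pathlib import Path

# Hypothetical location for the serialized datamanager.
DATAMANAGER_PATH = Path("tmp/datamanager.pkl")


def save_datamanager(datamanager) -> None:
    # Write the datamanager to disk so the caller can drop its reference
    # and let the object be garbage collected instead of residing in memory.
    DATAMANAGER_PATH.parent.mkdir(parents=True, exist_ok=True)
    with DATAMANAGER_PATH.open("wb") as f:
        pickle.dump(datamanager, f)


def load_datamanager():
    # Re-load the datamanager only at the point where it is actually used.
    with DATAMANAGER_PATH.open("rb") as f:
        return pickle.load(f)
```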
Check if we can use `generator` instead of `np.ndarray`
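To illustrate the trade-off being checked here (illustrative only; the right choice depends on how the consumer iterates over the data), a generator yields values lazily instead of materializing the full `np.ndarray` up front:

```python
import numpy as np


def as_array(n: int) -> np.ndarray:
    # Materializes all n values at once: memory grows with n.
    return np.square(np.arange(n, dtype=np.float64))


def as_generator(n: int):
    # Yields values lazily: only one value is alive at a time,
    # provided the consumer does not collect them into a container.
    for i in range(n):
        yield float(i) ** 2


# Both can feed a streaming consumer such as sum(...),
# but only the ndarray version allocates the full buffer.
total_from_array = as_array(1_000_000).sum()
total_from_gen = sum(as_generator(1_000_000))
```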