syne-tune
Large scale and asynchronous Hyperparameter and Architecture Optimization at your fingertips.
There is no equivalent of `choice` for numeric values. For example, in the FCNet blackbox the learning rate is defined as `'hp_init_lr': choice([0.0005, 0.001, 0.005, 0.01, 0.05, 0.1])`. This will not...
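A minimal sketch of the distinction being raised, assuming `ordinal` (with its `kind` argument) is available in your Syne Tune version; `loguniform` definitely exists and is the continuous alternative:

```python
from syne_tune.config_space import choice, loguniform, ordinal

# As in the FCNet blackbox: `choice` treats values as unordered categories,
# so a searcher cannot exploit their numeric ordering.
config_space_categorical = {
    "hp_init_lr": choice([0.0005, 0.001, 0.005, 0.01, 0.05, 0.1]),
}

# Ordered alternatives: `ordinal` keeps the discrete grid but preserves
# ordering (the `kind="nn-log"` value is an assumption about the API);
# `loguniform` samples the range continuously instead.
config_space_numeric = {
    "hp_init_lr": ordinal([0.0005, 0.001, 0.005, 0.01, 0.05, 0.1], kind="nn-log"),
    # or: "hp_init_lr": loguniform(0.0005, 0.1),
}
```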
There seems to be a problem with the Hyperband promotion logic. How to reproduce: add `type="promotion"` to https://github.com/awslabs/syne-tune/blob/main/benchmarking/nursery/benchmark_automl/baselines.py#L69 and run `python benchmarking/nursery/benchmark_automl/benchmark_main.py --num_seeds 1 --method ASHA --benchmark lcbench-airlines`. This fails with:
```
Traceback (most recent...
```
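For context, a hedged sketch of what the promotion variant looks like when constructed directly; the metric and resource-attribute names here ("loss", "epoch") are placeholders, not the benchmark's actual attributes:

```python
from syne_tune.config_space import loguniform
from syne_tune.optimizer.schedulers import HyperbandScheduler

config_space = {"lr": loguniform(1e-4, 1e-1), "epochs": 10}

scheduler = HyperbandScheduler(
    config_space,
    searcher="random",
    type="promotion",   # "stopping" is the default; "promotion" pauses and resumes trials
    max_t=10,
    resource_attr="epoch",
    metric="loss",
    mode="min",
)
```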
*Description of changes:* First draft for including [YAHPO Gym](https://github.com/slds-lmu/yahpo_gym/tree/main/yahpo_gym) as a `BlackBoxRecipe`. This is not entirely straightforward, and I might need some input from @geoalgo on how to proceed. Currently,...
No unit test covers `wait_trial_completion_when_stopping=True`. I need to get back to this at some point.
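A hedged sketch of the untested path: a `Tuner` configured so that, once the stopping criterion fires, it waits for running trials to complete rather than stopping them. `train.py` is a placeholder training script, and the scheduler choice is arbitrary:

```python
from syne_tune import StoppingCriterion, Tuner
from syne_tune.backend import LocalBackend
from syne_tune.config_space import loguniform
from syne_tune.optimizer.baselines import RandomSearch

config_space = {"lr": loguniform(1e-4, 1e-1), "epochs": 5}

tuner = Tuner(
    trial_backend=LocalBackend(entry_point="train.py"),
    scheduler=RandomSearch(config_space, metric="loss", mode="min"),
    stop_criterion=StoppingCriterion(max_wallclock_time=60),
    n_workers=2,
    wait_trial_completion_when_stopping=True,  # the flag lacking test coverage
)
tuner.run()
```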
It would be useful to have an example showing how to retrieve the trained model with the best performance.
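In the meantime, a minimal sketch of the first half of this (finding the best configuration), assuming a finished experiment and a placeholder tuner name; where the model checkpoint itself lives depends on how the training script saves it:

```python
from syne_tune.experiments import load_experiment

exp = load_experiment("my-tuner-name")  # placeholder tuner name
# Best metric value and the configuration that achieved it:
print(exp.best_config())
# The trained model must then be loaded from wherever the training
# script checkpointed it for that trial (script-dependent).
```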
Methods that use the top-k candidates will therefore use configurations outside the config space, which will raise an error. Restrict the feasible candidates for this function to everything within...
(Apologies for creating multiple recent GitHub issues; this is the last one, I promise!) I took the DataFrame from my experiment results and used Plotly's `plotly.express.parallel_categories` plot to visualize hyperparameter...
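A hedged sketch of the plot described above: feed the experiment's results DataFrame to Plotly's parallel-categories chart. The `config_` column prefix follows Syne Tune's results format, but the metric name "loss" and the tuner name are assumptions about this particular experiment:

```python
import plotly.express as px
from syne_tune.experiments import load_experiment

df = load_experiment("my-tuner-name").results  # placeholder tuner name

# Hyperparameter columns in Syne Tune results are prefixed with "config_":
hp_cols = [c for c in df.columns if c.startswith("config_")]

fig = px.parallel_categories(df, dimensions=hp_cols, color="loss")
fig.show()
```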
*Description of changes:* Adds a `force_download` argument to the `syne_tune.experiments.load_experiment(...)` function, allowing experiment results to be re-downloaded if they have changed since the last time this function was...
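Usage sketch of the argument this PR adds; the signature is taken from the description above, not from released API docs, and the tuner name is a placeholder:

```python
from syne_tune.experiments import load_experiment

# Re-fetch the results even if a cached local copy exists:
exp = load_experiment("my-tuner-name", force_download=True)
```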
Hi, I have a limit of 8 `ml.g5.12xlarge` instances, and although I set `Tuner.n_workers = 5`, I still got a `ResourceLimitExceeded` error. Is there a way to make sure that...