HpBandSter
A distributed Hyperband implementation on steroids
Hi! The "How to extend HpBandSter with your optimizer" page at [this](https://automl.github.io/HpBandSter/build/html/optimizers/how_to_extend.html) URL seems to be unavailable. Is there a guideline / blueprint to implement a new optimizer? Thanks!
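While that docs page is down, the bundled optimizers suggest a blueprint: pair a config generator with a `Master` subclass. Here is a minimal sketch, assuming the class and module names found in the repo's source tree (`hpbandster/core`, `hpbandster/iterations`, `hpbandster/optimizers`); verify the signatures against the source before building on it:

```python
# A minimal custom-optimizer sketch, modeled on how the bundled
# optimizers (RandomSearch, BOHB) are structured: a config generator
# that proposes configurations, plus a Master subclass that schedules
# iterations. Names/signatures are assumptions taken from the source.
from hpbandster.core.master import Master
from hpbandster.core.base_config_generator import base_config_generator
from hpbandster.iterations import SuccessiveHalving


class MyConfigGenerator(base_config_generator):
    def __init__(self, configspace, **kwargs):
        super().__init__(**kwargs)
        self.configspace = configspace

    def get_config(self, budget):
        # Must return (configuration dict, info dict); here: random sampling.
        return self.configspace.sample_configuration().get_dictionary(), {}

    def new_result(self, job, update_model=True):
        # Called for every finished job; update your model here.
        super().new_result(job)


class MyOptimizer(Master):
    def __init__(self, configspace, eta=3, max_budget=9, **kwargs):
        cg = MyConfigGenerator(configspace)
        super().__init__(config_generator=cg, **kwargs)
        self.eta, self.max_budget = eta, max_budget

    def get_next_iteration(self, iteration, iteration_kwargs={}):
        # Simplest possible schedule: a single rung at full budget.
        return SuccessiveHalving(HPB_iter=iteration,
                                 num_configs=[self.eta],
                                 budgets=[self.max_budget],
                                 config_sampler=self.config_generator.get_config,
                                 **iteration_kwargs)
```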
The `loss_fn` lambda was not being used, and as a result only the losses, and not any info, can be retrieved, e.g. by passing `loss_fn = lambda r: {"val_NLL":...
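For context, a sketch of how `loss_fn` is presumably meant to be used with `Result.get_pandas_dataframe` from `hpbandster.core.result`; the `val_NLL` key is a hypothetical entry stored in each run's `info` dict, and the two-dataframe return value is an assumption to check against the source:

```python
# loss_fn maps each Run to the value(s) to report, so anything logged in
# run.info (here a hypothetical "val_NLL" key) could be extracted too.
import hpbandster.core.result as hpres

# Directory containing the configs.json / results.json logs of a run.
result = hpres.logged_results_to_HBS_result('my_run_dir')

# Assumed return: one dataframe of configurations, one of loss values.
df_configs, df_losses = result.get_pandas_dataframe(
    loss_fn=lambda r: {"val_NLL": r.info["val_NLL"]})
```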
Hi. My name is YJ Hong and I am trying to use BOHB for my project. I have a few questions about how BOHB operates (especially Algorithm 2 in the paper)...
#81 This pull request adds support for running the server and workers behind a NAT router.
The download buttons for the examples on [readthedocs](https://automl.github.io/HpBandSter/build/html/auto_examples/index.html) are not working. I cloned the repo and built the docs locally, and it works fine. I guess it's just something in the...
During execution of the BOHB program, the following error occurred, causing the program to hang; I want to know why: `DEBUG:hpbandster:DISPATCHER: Starting worker discovery`, `DEBUG:hpbandster:DISPATCHER: Found 1 potential workers, 1...
I am interested in using HpBandSter in a distributed fashion, using [Horovod](https://github.com/horovod/horovod). It essentially performs efficient communication between GPUs, primarily for data parallelism, i.e. training a single model with mini-batches...
Hello, I'm looking to retrieve the EI (or the TPE score) of each parameter after training, but so far I'm struggling to get consistent results. Each time I reload the same...
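A hedged sketch of where those models appear to live: BOHB's config generator seems to keep per-budget KDE pairs, and the TPE-style score of a configuration is the density ratio of the two. The `kde_models` attribute and the `'good'`/`'bad'` keys are assumptions based on `hpbandster/optimizers/config_generators/bohb.py`; verify them against the source:

```python
# Score a configuration (as a numerical vector) under the per-budget KDE
# pair that BOHB's config generator appears to store. The KDEs are
# statsmodels KDEMultivariate objects in the source, so .pdf() exists.
import numpy as np

def tpe_ratio(bohb_optimizer, vector, budget):
    """TPE-style acquisition: density of 'good' runs over 'bad' runs."""
    kdes = bohb_optimizer.config_generator.kde_models[budget]
    good, bad = kdes['good'], kdes['bad']
    x = np.asarray(vector)
    # Guard against a vanishing denominator, as the source does.
    return good.pdf(x) / max(bad.pdf(x), 1e-32)
```

Note that the models are refit as results arrive and the KDE bandwidths depend on the observed points, which may explain why reloading the same run can yield slightly different scores.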
Hello, I just tried out your nice package for hyperparameter optimization and it works well. I want to understand how many configs are sampled and with which budget they are...
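A worked sketch of that schedule, mirroring what `get_next_iteration` in the HyperBand/BOHB optimizers computes (the exact rounding is taken from the source as I read it; treat it as an assumption and compare):

```python
# Reproduce the Hyperband bracket schedule: how many configs are sampled
# per rung, and at which budgets, for each successive-halving iteration.
import numpy as np

eta, min_budget, max_budget = 3, 1, 9
max_SH_iter = -int(np.floor(np.log(min_budget / max_budget) / np.log(eta))) + 1
budgets = max_budget * np.power(eta, -np.linspace(max_SH_iter - 1, 0, max_SH_iter))

for iteration in range(max_SH_iter):
    s = max_SH_iter - 1 - (iteration % max_SH_iter)
    n0 = int(np.floor(max_SH_iter / (s + 1)) * eta**s)
    num_configs = [max(int(n0 * (eta**-i)), 1) for i in range(s + 1)]
    print(f"iteration {iteration}: configs per rung {num_configs}, "
          f"budgets {budgets[-s-1:]}")

# With eta=3 and budgets 1..9 this prints:
#   9, 3, 1 configs at budgets 1, 3, 9; then 3, 1 at 3, 9; then 3 at 9.
```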
I've been experimenting with warm-starting BOHB with results from several previous runs. To my surprise, after inspecting the warm-started optimizer I found that it contains only a model for one budget, and...
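For reference, a sketch of the warm-starting setup being discussed, assuming the previous run was written with the JSON result logger and that `previous_result` is forwarded through `BOHB` to the `Master` base class; it illustrates the setup only and does not claim to fix the one-budget-model behavior:

```python
# Warm-start BOHB from the logs of an earlier run. Assumes a NameServer
# and at least one Worker for this run_id are already running.
import ConfigSpace as CS
import hpbandster.core.result as hpres
from hpbandster.optimizers import BOHB

cs = CS.ConfigurationSpace()
cs.add_hyperparameter(
    CS.UniformFloatHyperparameter('lr', lower=1e-4, upper=1e-1, log=True))

# Load logs written by the json result logger in a previous run.
previous = hpres.logged_results_to_HBS_result('previous_run_dir')

bohb = BOHB(configspace=cs,
            run_id='warmstart_example',
            nameserver='127.0.0.1',
            min_budget=1, max_budget=9,
            previous_result=previous)
res = bohb.run(n_iterations=4)
bohb.shutdown(shutdown_workers=True)
```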