
Python library to easily log experiments and parallelize hyperparameter search for neural networks

27 test-tube issues

Is it possible to install test_tube without pulling in torch? I am building a Docker image with TensorFlow and do not want it to pull in torch. Thanks

I am following the guide to optimizing hyperparameters over multiple GPUs: https://towardsdatascience.com/trivial-multi-node-training-with-pytorch-lightning-ff75dfb809bd However, when I run the hyperparameter optimization, I get the following error: `RuntimeError: cuda runtime error (3)...`

Currently the hparams are logged as text in TensorBoard. Could we change this to use the `add_hparams()` function in `SummaryWriter`? This would allow some additional nice views of the...

I'm getting this error using PyTorch DDP with TensorBoard's `add_scalars` (`add_scalar` works fine). Is there something I can do? -- Process 1 terminated with the following error: Traceback (most...

I'm using `pytorch-lightning` and `test_tube` together. When I try to perform hyperparameter search using `optimize_parallel_gpu`, I get the strange error in the title: `ChildProcessError: [Errno 10] No...`

I usually use python-fire (https://github.com/google/python-fire); it creates the parser arguments automatically by inspecting the function to be called. Is there any possibility of integrating this into the existing...
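The behavior requested above, deriving CLI arguments from a function signature the way python-fire does, can be sketched with the standard library alone. This is a hedged illustration, not test-tube's or fire's actual implementation; the `build_parser` and `train` names are hypothetical:

```python
import argparse
import inspect

def build_parser(fn):
    """Build an argparse parser from a function's signature,
    mirroring python-fire's trick of inferring flags from parameters."""
    parser = argparse.ArgumentParser(description=fn.__doc__)
    for name, param in inspect.signature(fn).parameters.items():
        if param.default is inspect.Parameter.empty:
            # parameter without a default becomes a required positional
            parser.add_argument(name)
        else:
            # parameter with a default becomes an optional flag;
            # the type is inferred from the default value
            parser.add_argument(f"--{name}", type=type(param.default),
                                default=param.default)
    return parser

def train(data_path, lr=0.01, epochs=10):
    """Hypothetical training entry point."""
    return data_path, lr, epochs

# usage: flags are generated from train()'s signature
args = build_parser(train).parse_args(["./data", "--lr", "0.1"])
```

Here `args.data_path` is `"./data"`, `args.lr` is parsed as the float `0.1`, and `args.epochs` keeps its default of `10`.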

Sometimes test-tube will try to create an experiment version that already exists. Need to add a small delay to avoid the race condition.
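A small delay only narrows the race window; an atomic create-and-retry loop removes it. A minimal sketch of that idea, assuming a `version_N` directory layout for illustration (this is not test-tube's actual scheme):

```python
import os

def next_version_dir(save_dir):
    """Claim the next free version directory atomically.

    os.makedirs (without exist_ok) raises FileExistsError if another
    process created the same version first, so we retry with the next
    number instead of sleeping and hoping the race resolves itself.
    """
    version = 0
    while True:
        path = os.path.join(save_dir, f"version_{version}")
        try:
            os.makedirs(path)  # atomic: fails if the dir already exists
            return path
        except FileExistsError:
            version += 1  # another process won the race; try the next slot

# usage: two successive calls claim distinct versions
import tempfile
root = tempfile.mkdtemp()
first = next_version_dir(root)   # ends with version_0
second = next_version_dir(root)  # ends with version_1
```

The filesystem itself arbitrates the race here, so no tuned delay is needed.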

I'm trying out pytorch-lightning and I'm having an issue after commits 1fad1c7dfb7cc55803cd7a9597723559aa124cbe and 3fba70a22eb5521c12d81302ff978a92c8113909. When I do
```
from test_tube import Experiment
exp = Experiment(save_dir=cfg['log_dir'])
trainer = Trainer(experiment=exp, max_nb_epochs=1, train_percent_check=0.01)
```
...

What do you think about making the nb_trials param default to None? I've gotten a few confused questions about how many trials to use with grid search. @zafarali https://github.com/williamFalcon/test-tube/blob/master/test_tube/argparse_hopt.py#L262
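With `nb_trials=None`, grid search could infer the trial count itself as the product of the option-list lengths. A sketch of that inference under assumed names (`grid_trials` and its signature are hypothetical, not test-tube's API):

```python
import itertools

def grid_trials(param_options, nb_trials=None):
    """Enumerate grid-search trials.

    When nb_trials is None, run the full Cartesian product of the
    option lists rather than forcing the caller to count it by hand.
    """
    names = list(param_options)
    combos = list(itertools.product(*(param_options[n] for n in names)))
    if nb_trials is not None:
        combos = combos[:nb_trials]  # an explicit cap is still honored
    return [dict(zip(names, c)) for c in combos]

# usage: 2 learning rates x 3 batch sizes = 6 trials, no count needed
options = {"lr": [0.1, 0.01], "batch_size": [32, 64, 128]}
trials = grid_trials(options)
```

Random search would still need an explicit `nb_trials`, since its sample space has no natural size; only the grid case benefits from the `None` default.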

Initial tests should include:
- Testing grid search generation
- Testing random search generation
- CPU parallelization
- GPU parallelization
- Releasing free GPUs and CPUs continuously so there isn't...

help wanted