Jonathan Mackenzie
I also had this issue. After downgrading torchmetrics to `0.6.0` (see https://github.com/NVIDIA/DeepLearningExamples/issues/1113) and applying the patch from #4, I get an `ImportError`:

```
ImportError: cannot import name 'CLIPTokenizer' from 'transformers'...
```
@bltavares Your fix worked for me. Not sure if there will be ramifications for any other CSS though.
Have you tried installing keras?

```
pip install keras
```
The simplest path to getting this to work would be to use the GPU identifier as a custom hyperparameter that always returns the next value in a list using `itertools.cycle(GPU_IDS)`....
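A rough sketch of that round-robin part (`GPU_IDS` and `next_gpu_id` here are just illustrative names you would define yourself):

```python
import itertools

# Hypothetical list of GPU ids available on the machine; adjust to your hardware
GPU_IDS = [0, 1, 2, 3]

# itertools.cycle repeats the list forever, so each trial can grab the next id
gpu_cycle = itertools.cycle(GPU_IDS)

def next_gpu_id():
    """Return the next GPU id in round-robin order: 0, 1, 2, 3, 0, 1, ..."""
    return next(gpu_cycle)
```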
My fix for this error was to simply move my parameters into an assigned variable. So replace code like this:

```python
args = {
    'x': {{uniform(0,1)}}
}
```

With this:...
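Roughly, the "assigned variable" version looks like this (a sketch only; adapt the variable name to your own parameters):

```python
# The hyperas template is assigned to a plain variable first,
# and the variable is then used in the dict
x = {{uniform(0,1)}}
args = {
    'x': x
}
```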
This looks like you're having an issue with sklearn. Please ask on Stack Overflow with formatted code, a full traceback, and the problem reduced to an MWE: https://stackoverflow.com/help/minimal-reproducible-example
Have you defined or imported the function anywhere in your code?
If your methods are that short, consider just putting them inside the main function that you want to optimise, as in the sketch below. Additionally, no such function as `train_predict` exists inside `sklearn.model_selection`. Also consider formatting...
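A rough illustration of keeping a short helper inside the optimised function (all names and the toy model here are hypothetical):

```python
def create_model(x_train, y_train, x_test, y_test):
    # Short helper defined inside the function hyperas optimises, so it is
    # copied into the generated script along with the rest of the body.
    def evaluate(model):
        return model.evaluate(x_test, y_test, verbose=0)

    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential()
    model.add(Dense(16, activation='relu', input_shape=(x_train.shape[1],)))
    model.add(Dense(1))
    model.compile(optimizer='adam', loss='mse')
    model.fit(x_train, y_train, epochs=1, verbose=0)

    return {'loss': evaluate(model), 'status': 'ok', 'model': model}
```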
@bennyooo Iterating over the trials object gives you the information you want. It has the results `dict` from each trial and a bunch of extra trial information like what parameters...
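A minimal sketch of what I mean, using plain hyperopt (which hyperas builds on) with a toy objective; your own objective and search space will differ:

```python
from hyperopt import fmin, tpe, hp, Trials

trials = Trials()

# Toy objective just for illustration: minimise (x - 0.5)^2 over x in [0, 1]
best = fmin(
    fn=lambda x: (x - 0.5) ** 2,
    space=hp.uniform('x', 0, 1),
    algo=tpe.suggest,
    max_evals=10,
    trials=trials,
)

# Each entry in trials.trials is a dict holding that trial's result and the
# parameter values that were sampled for it
for trial in trials.trials:
    print(trial['result'])        # e.g. {'loss': 0.01, 'status': 'ok'}
    print(trial['misc']['vals'])  # e.g. {'x': [0.42]}
```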
Can you just call `setup_multi_gpus()` inside your `create_model` function? You can pass the `keep_temp` argument to `optim.minimize` and examine the Python file produced to see what variables are being created.
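For reference, a sketch of what I mean, assuming the usual hyperas setup where your `data` and `create_model` functions already exist:

```python
from hyperopt import tpe, Trials
from hyperas import optim

best_run, best_model = optim.minimize(
    model=create_model,   # create_model can call setup_multi_gpus() itself
    data=data,            # your usual hyperas data() function
    algo=tpe.suggest,
    max_evals=10,
    trials=Trials(),
    keep_temp=True,       # keep the generated .py file so you can inspect it
)
```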