
TypeError: ap_loguniform_sampler() got multiple values for argument 'size'

Open sunny1072 opened this issue 7 years ago • 6 comments


TypeError                                 Traceback (most recent call last)
 in ()
----> 1 hyperopt_opt_hp = optimize(trials, hyperopt_hp_grid)

 in optimize(trials, space)
      1 trials = Trials()
      2 def optimize(trials, space):
----> 3     best = fmin(fn=loss, space=space, algo=tpe.suggest, max_evals=MAX_EVALS, trials=trials)
      4     return best

/home/sunny/anaconda3/lib/python3.6/site-packages/hyperopt/fmin.py in fmin(fn, space, algo, max_evals, trials, rstate, allow_trials_fmin, pass_expr_memo_ctrl, catch_eval_exceptions, verbose, return_argmin)
    305             verbose=verbose,
    306             catch_eval_exceptions=catch_eval_exceptions,
--> 307             return_argmin=return_argmin,
    308         )
    309

/home/sunny/anaconda3/lib/python3.6/site-packages/hyperopt/base.py in fmin(self, fn, space, algo, max_evals, rstate, verbose, pass_expr_memo_ctrl, catch_eval_exceptions, return_argmin)
    633             pass_expr_memo_ctrl=pass_expr_memo_ctrl,
    634             catch_eval_exceptions=catch_eval_exceptions,
--> 635             return_argmin=return_argmin)
    636
    637

/home/sunny/anaconda3/lib/python3.6/site-packages/hyperopt/fmin.py in fmin(fn, space, algo, max_evals, trials, rstate, allow_trials_fmin, pass_expr_memo_ctrl, catch_eval_exceptions, verbose, return_argmin)
    318             verbose=verbose)
    319     rval.catch_eval_exceptions = catch_eval_exceptions
--> 320     rval.exhaust()
    321     if return_argmin:
    322         return trials.argmin

/home/sunny/anaconda3/lib/python3.6/site-packages/hyperopt/fmin.py in exhaust(self)
    197     def exhaust(self):
    198         n_done = len(self.trials)
--> 199         self.run(self.max_evals - n_done, block_until_done=self.async)
    200         self.trials.refresh()
    201         return self

/home/sunny/anaconda3/lib/python3.6/site-packages/hyperopt/fmin.py in run(self, N, block_until_done)
    155                         d['result'].get('status')))
    156             new_trials = algo(new_ids, self.domain, trials,
--> 157                               self.rstate.randint(2 ** 31 - 1))
    158             assert len(new_ids) >= len(new_trials)
    159             if len(new_trials):

/home/sunny/anaconda3/lib/python3.6/site-packages/hyperopt/tpe.py in suggest(new_ids, domain, trials, seed, prior_weight, n_startup_jobs, n_EI_candidates, gamma, linear_forgetting)
    810     t0 = time.time()
    811     (s_prior_weight, observed, observed_loss, specs, opt_idxs, opt_vals) \
--> 812         = tpe_transform(domain, prior_weight, gamma)
    813     tt = time.time() - t0
    814     logger.info('tpe_transform took %f seconds' % tt)

/home/sunny/anaconda3/lib/python3.6/site-packages/hyperopt/tpe.py in tpe_transform(domain, prior_weight, gamma)
    791         observed_loss['vals'],
    792         pyll.Literal(gamma),
--> 793         s_prior_weight
    794     )
    795

/home/sunny/anaconda3/lib/python3.6/site-packages/hyperopt/tpe.py in build_posterior(specs, prior_idxs, prior_vals, obs_idxs, obs_vals, oloss_idxs, oloss_vals, oloss_gamma, prior_weight)
    682         named_args = [[kw, memo[arg]]
    683                       for (kw, arg) in node.named_args]
--> 684         b_post = fn(*b_args, **dict(named_args))
    685         a_args = [obs_above, prior_weight] + aa
    686         a_post = fn(*a_args, **dict(named_args))

TypeError: ap_loguniform_sampler() got multiple values for argument 'size'

sunny1072 avatar Jul 12 '17 18:07 sunny1072

I got a similar error:

Traceback (most recent call last):
  File "optimiseNonResonantModel.py", line 169, in <module>
    best_run = fmin(objective, space=space, algo=tpe.suggest, max_evals=2, trials=trials, verbose=True)
  File "/usr/lib/python2.7/site-packages/hyperopt/fmin.py", line 307, in fmin
    return_argmin=return_argmin,
  File "/usr/lib/python2.7/site-packages/hyperopt/base.py", line 635, in fmin
    return_argmin=return_argmin)
  File "/usr/lib/python2.7/site-packages/hyperopt/fmin.py", line 320, in fmin
    rval.exhaust()
  File "/usr/lib/python2.7/site-packages/hyperopt/fmin.py", line 199, in exhaust
    self.run(self.max_evals - n_done, block_until_done=self.async)
  File "/usr/lib/python2.7/site-packages/hyperopt/fmin.py", line 157, in run
    self.rstate.randint(2 ** 31 - 1))
  File "/usr/lib/python2.7/site-packages/hyperopt/tpe.py", line 812, in suggest
    = tpe_transform(domain, prior_weight, gamma)
  File "/usr/lib/python2.7/site-packages/hyperopt/tpe.py", line 793, in tpe_transform
    s_prior_weight
  File "/usr/lib/python2.7/site-packages/hyperopt/tpe.py", line 684, in build_posterior
    b_post = fn(*b_args, **dict(named_args))
TypeError: ap_categorical_sampler() got multiple values for keyword argument 'size'

My search space is defined as follows:

space = {
    'model': {
        'n_neurons': hp.choice('n_neurons', [20, 50, 100]),
        'n_hidden_layers': hp.randint('n_hidden_layers', 2, 5),
        'lr': hp.loguniform('lr', np.log(2e-7), np.log(1e-2)),
        'dropout': hp.uniform('dropout', 0, 1),
    }
}

Weirdly, it only works if I remove all the parameters except the "n_neurons" categorical one...
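
For what it's worth, the mechanism behind this TypeError can be reproduced in plain Python. In the traceback, build_posterior forwards the space expression's positional arguments to an adaptive-parzen sampler with fn(*b_args, **dict(named_args)), and named_args includes size; if the hp.* call carries one positional argument too many, that extra value lands in the size slot. The sampler below is a simplified, hypothetical stand-in, not hyperopt's actual signature:

```python
def ap_sampler(obs, prior_weight, upper, size=(), rng=None):
    # Simplified stand-in for hyperopt's ap_*_sampler functions
    # (hypothetical signature, for illustration only).
    return upper

# An hp.* node with an extra positional argument makes the call fill
# 'size' positionally *and* by keyword, which Python rejects:
try:
    ap_sampler([], 1.0, 2, 5, size=10)
    error = None
except TypeError as exc:
    error = str(exc)

print(error)
```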

swertz avatar Jul 19 '17 08:07 swertz

OK, sorry for the noise; it was caused by the mistaken call to randint...

By the way, on that subject: the docs recommend using quniform for such cases, but quniform returns floats, which cannot be passed directly to functions expecting ints (e.g. the number of epochs when training a DNN). Manually converting the variable works, but wouldn't it be easier to have something similar to randint for these cases?
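
To illustrate the float issue: hp.quniform(label, low, high, q) draws round(uniform(low, high) / q) * q using numpy, so the sample comes back as a float even when q = 1. A hyperopt-free sketch of the manual conversion (quniform_sample is a stand-in helper, not hyperopt API):

```python
import random

def quniform_sample(low, high, q):
    # hyperopt computes round(uniform(low, high) / q) * q with numpy,
    # whose np.round returns a float; float() here mimics that.
    return float(round(random.uniform(low, high) / q) * q)

epochs = quniform_sample(100, 500, 1)   # a float such as 437.0
n_epochs = int(epochs)                  # the manual conversion mentioned above
print(epochs, n_epochs)
```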

swertz avatar Jul 19 '17 10:07 swertz

@swertz is there no way to set a lower bound for randint?

hmanz avatar Jan 25 '19 16:01 hmanz

@swertz it's not great, but it's more elegant than manually converting; you can do:

from hyperopt.pyll.base import scope

# register to_int as a pyll node so it can be used inside a search space
@scope.define
def to_int(x):
    return int(x)

and then

'epochs': scope.to_int(hp.quniform('epochs', 100, 500, 1))
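
The reason a plain int(...) wrapped around hp.quniform(...) wouldn't help is that hp.* expressions are symbolic pyll graph nodes, not numbers; @scope.define registers to_int as a node that is applied at sampling time instead of at space-definition time. A toy stand-in for that deferred evaluation (the Node class is illustrative, not hyperopt's pyll):

```python
class Node:
    """Toy stand-in for a pyll graph node: a function applied lazily."""
    def __init__(self, fn, *args):
        self.fn, self.args = fn, args

    def evaluate(self):
        # resolve child nodes first, then apply this node's function
        resolved = [a.evaluate() if isinstance(a, Node) else a for a in self.args]
        return self.fn(*resolved)

# The 'quniform' node pretends to sample 250.0; the int node wraps it
# and only runs when the graph is evaluated, i.e. at sampling time.
quniform = Node(lambda: 250.0)
epochs = Node(int, quniform)

result = epochs.evaluate()
print(result, type(result).__name__)
```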

ecorreig avatar Feb 06 '19 19:02 ecorreig

@hmanz I don't understand why they didn't put in a lower bound, but you can do it manually:

'whatever': 20 + hp.randint('whatever', 100),

so that 'whatever' now ranges from 20 to 119 (hp.randint('whatever', 100) draws from 0 to 99 inclusive).
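
Concretely, hp.randint(label, upper) samples an integer in [0, upper), so adding a constant offset shifts the whole interval; random.randrange below stands in for the hyperopt sampler:

```python
import random

random.seed(0)  # reproducible sketch

# 20 + randint(0..99) covers the integers 20..119 inclusive
samples = [20 + random.randrange(100) for _ in range(10000)]
print(min(samples), max(samples))
```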

ecorreig avatar Feb 06 '19 19:02 ecorreig

Indeed, that's far from an obvious solution, but I guess it'd work, thanks!

Reading the docs again, it would seem randint is not a good candidate for the job, as:

> The semantics of this distribution is that there is no more correlation in the loss function between nearby integer values, as compared with more distant integer values. This is an appropriate distribution for describing random seeds for example. If the loss function is probably more correlated for nearby integer values, then you should probably use one of the "quantized" continuous distributions

swertz avatar Feb 06 '19 21:02 swertz