
[Feature Request] Can `optimize_acqf_discrete_local_search` accept more kinds of constraints?

Open Leon924 opened this issue 2 years ago • 6 comments

🚀 Feature Request

My design space is fully discrete (every parameter supports a few levels/integer values) and extremely large, so it cannot be enumerated ahead of time and passed to optimize_acqf_discrete. I am new to BoTorch. After checking the API reference in the official docs, it seems optimize_acqf_discrete_local_search is the newest function for tackling discrete design spaces? But now I have run into some constraint problems, because I need to impose equality, inequality, and nonlinear constraints on my input parameters. Let's say x is a 20-dim parameter configuration in the whole design space; I have three kinds of constraints that make it valid:

  • x_1 = 2*x_2
    
  • x_3 >= x_4
    
  • x_5 % x_6 = 0 (x_5 is divisible by x_6)
    

So, can optimize_acqf_discrete_local_search handle these constraints? I did not see arguments for nonlinear constraints or equality constraints in its API, though. Could anyone give me some suggestions?
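For reference, here is a minimal sketch of those three constraints written as Python callables on a 20-dim tensor x (the names and the "feasible when the value is zero / non-negative" convention are just illustrative assumptions, not an API that optimize_acqf_discrete_local_search currently accepts):

import torch

# x is a 1-d tensor with 20 entries; indices are 0-based, so x_1 is x[0], etc.
eq_constraint   = lambda x: x[0] - 2 * x[1]              # x_1 = 2 * x_2   (feasible when == 0)
ineq_constraint = lambda x: x[2] - x[3]                  # x_3 >= x_4      (feasible when >= 0)
div_constraint  = lambda x: torch.remainder(x[4], x[5])  # x_5 % x_6 = 0   (feasible when == 0)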

Leon924 · Jul 04 '22 14:07

https://github.com/pytorch/botorch/issues/852

Leon924 · Jul 04 '22 14:07

These kinds of constraints are currently not supported out of the box. If you have a very high cardinality discrete space that is constrained by both equality and nonlinear inequality constraints then you've got yourself a formidably hard problem on your hands and this will be very challenging to support in a generic fashion.

Basically, the way optimize_acqf_discrete_local_search works is by doing rejection sampling. You could extend this pattern and allow passing in nonlinear inequality constraints as well. For that you'd have to modify the following: https://github.com/pytorch/botorch/blob/7ce7c6d9d36c0eeefd9e15bdd0355d41e16f575d/botorch/optim/optimize.py#L669-L676 to filter out configurations that violate the constraints.
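As a rough illustration of that pattern, here is a minimal sketch of what such a filter could look like (the function name _filter_infeasible_nonlinear and the list-of-callables format are hypothetical assumptions, not part of the BoTorch API; it assumes each constraint callable maps a single d-dim configuration to a scalar that is non-negative when feasible):

import torch

def _filter_infeasible_nonlinear(X_candidates, nonlinear_inequality_constraints):
    # X_candidates: an `n x d` tensor of discrete configurations
    # nonlinear_inequality_constraints: list of callables, feasible when >= 0
    mask = torch.ones(X_candidates.shape[0], dtype=torch.bool)
    for constraint in nonlinear_inequality_constraints:
        values = torch.stack([constraint(x) for x in X_candidates.unbind(0)])
        mask &= values >= 0
    return X_candidates[mask]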

Depending on how constrained your space is, this might be OK (relatively large parts of the product space are feasible) or fail miserably (only small parts are feasible and you keep sampling and rejecting). Doing this in a more scalable fashion would require more thought and probably also a more problem-specific approach.

Balandat · Jul 04 '22 20:07

Thanks a lot for your quick response and detailed instructions! When I tried to add some constraints (inequality constraints, equality constraints, and nonlinear equality constraints) to the _filter_infeasible method, it ran into the following error during the acquisition-function maximization stage:

Traceback (most recent call last):
  File "/export1/Workspace/liqiang/tool/pycharm/pycharm-community-2019.3.5/plugins/python-ce/helpers/pydev/pydevd.py", line 1434, in _exec
    pydev_imports.execfile(file, globals, locals)  # execute the script
  File "/export1/Workspace/liqiang/tool/pycharm/pycharm-community-2019.3.5/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
    exec(compile(contents+"\n", file, 'exec'), glob, loc)
  File "/export1/Workspace/liqiang/socgen/micropt/coreOptimizer/bo/nt/nt-random.py", line 210, in <module>
    X_new, X_new_real, acq_func = Exp_random.generate_next_point(n_candidates=n_candicate)
  File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/nextorch/bo.py", line 2068, in generate_next_point
    k=n_candidates)
  File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/nextorch/bo.py", line 832, in get_top_k_candidates
    sequential=True)
  File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/optim/optimize.py", line 799, in optimize_acqf_discrete_local_search
    nonlinear_equality_constraints=nonlinear_equality_constraints,
  File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/optim/optimize.py", line 693, in _filter_infeasible_new
    for (inds, func) in nonlinear_equality_constraints:
TypeError: cannot unpack non-iterable function object

Process finished with exit code 1

So does that mean the nonlinear_equality_constraints callables cannot be reached/called? How can I solve this? Could you please give me some tips? I did it the same way as optimize_acqf does, which can also accept nonlinear constraints.

Leon924 · Jul 07 '22 01:07

Hi, Max:

Because I cannot fix the TypeError: cannot unpack non-iterable function object error, I have now enumerated the whole design space and used the optimize_acqf_discrete API to find the next experiment design point. But after feeding the whole design space, about 35 million choices, to its choices argument, it has been running for a day and a half. Perhaps the design space is so big that evaluating qEHVI for every point exhaustively is too slow? Inside optimize_acqf_discrete, the acquisition function is evaluated on every point provided via the choices argument, am I right? Please correct me if I am wrong. So maybe it's not a good idea to use optimize_acqf_discrete for a huge discrete design space. Could you please give me some suggestions for this situation?

On the other hand, although my design space is fully discrete, I tried a relax-and-round approach: use optimize_acqf and feed it equality constraints, inequality constraints, and callable nonlinear constraints. But I ran into the error below.

X_new, acq_value = optimize_acqf(
    acq_func,
    bounds=bounds,
    q=k,
    num_restarts=10,
    return_best_only=return_best_only,
    equality_constraints=eq_cons,
    inequality_constraints=ineq_cons,
    nonlinear_inequality_constraints=non_linear,
    batch_initial_conditions=batch_ics,
    sequential=True,
)

File "/export1/Workspace/liqiang/socgen/micropt/coreOptimizer/bo/nt/nt-random.py", line 213, in X_new, X_new_real, acq_func = Exp_random.generate_next_point(n_candidates=n_candicate) File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/nextorch/bo.py", line 2167, in generate_next_point k=n_candidates) File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/nextorch/bo.py", line 912, in get_top_k_candidates sequential=True) File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/optim/optimize.py", line 161, in optimize_acqf sequential=False, File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/optim/optimize.py", line 235, in optimize_acqf fixed_features=fixed_features, File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/generation/gen.py", line 211, in gen_candidates_scipy options={k: v for k, v in options.items() if k not in ["method", "callback"]}, File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/scipy/optimize/_minimize.py", line 632, in minimize constraints, callback=callback, **options) File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/scipy/optimize/slsqp.py", line 331, in _minimize_slsqp for c in cons['ineq']])) File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/scipy/optimize/slsqp.py", line 331, in for c in cons['ineq']])) File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/optim/parameter_constraints.py", line 359, in f_obj cache["obj"], cache["grad"] = f_obj_and_grad(X) File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/optim/parameter_constraints.py", line 350, in f_obj_and_grad obj, grad = f_np_wrapper(x, f=nlc) File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/generation/gen.py", line 173, in f_np_wrapper gradf = _arrayify(torch.autograd.grad(loss, X)[0].contiguous().view(-1)) File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/torch/autograd/init.py", line 277, in grad allow_unused, accumulate_grad=False) # Calls into the C++ engine to run the backward pass RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn Process finished with exit code 1

I think this is caused by the callable (nonlinear constraint) functions having no gradient information; I am not very familiar with this part. Can I skip feeding these nonlinear constraints into it and instead do post-processing, i.e., use the nonlinear constraints to filter the new points returned from optimize_acqf? I don't know whether that would affect the validity of the BO optimization. Could you please give me some advice?
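For what it's worth, this kind of gradient error usually means the constraint callable leaves the autograd graph somewhere, e.g. by converting to Python floats or rebuilding a fresh tensor; a hypothetical illustration (not the actual code here):

import torch

# Breaks autograd: the rebuilt tensor has no grad_fn, so torch.autograd.grad fails
bad_nonlinear_constraint = lambda x: torch.tensor(x[2].item() * x[3].item() - 1.0)

# Stays on the autograd graph, so gradients can flow through it
good_nonlinear_constraint = lambda x: x[2] * x[3] - 1.0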

best regards, leon

Leon924 · Jul 09 '22 06:07

code:

X_new, acq_value = optimize_acqf(
    acq_func,
    bounds=bounds,
    q=k,
    num_restarts=16,
    raw_samples=512,
    return_best_only=return_best_only,
    equality_constraints=eq_cons,
    inequality_constraints=ineq_cons,
    sequential=True,
)

trace:

Traceback (most recent call last):
  File "/export1/Workspace/living/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/optim/initializers.py", line 184, in gen_batch_initial_conditions
    X_rnd[start_idx:end_idx].to(device=device)
  File "/export1/Workspace/living/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/export1/Workspace/living/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/utils/transforms.py", line 301, in decorated
    return method(cls, X, **kwargs)
  File "/export1/Workspace/living/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/utils/transforms.py", line 258, in decorated
    output = method(acqf, X, *args, **kwargs)
  File "/export1/Workspace/living/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/acquisition/multi_objective/monte_carlo.py", line 338, in forward
    return self._compute_qehvi(samples=samples, X=X)
  File "/export1/Workspace/living/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/acquisition/multi_objective/monte_carlo.py", line 299, in _compute_qehvi
    obj_subsets = obj.index_select(dim=-2, index=q_choose_i.view(-1))
RuntimeError: [enforce fail at alloc_cpu.cpp:73] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 344126914560 bytes. Error code 12 (Cannot allocate memory)
Process finished with exit code 1

library and system version

  • BoTorch Version 0.6.4
  • GPyTorch Version 1.6.0
  • PyTorch Version 1.11.0+cpu
  • Computer OS: RHEL 6.10

Noteworthy variables: data dim = 19, raw_samples = 512, num_restarts = 16

The amount of memory the program is asking to allocate is just too big. I am wondering which factor is causing this error: is it because the design space is too large, or should I change some of the default parameters?

Leon924 · Jul 11 '22 07:07

So maybe it's not a good idea to use optimize_acqf_discrete for a huge discrete design space. Could you please give me some suggestions for this situation?

Yes, if you have a discrete space of massive cardinality, evaluating all combinations on an acquisition function that is relatively costly to compute (such as EHVI) may not be feasible. We have been working on some methods that employ a probabilistic reparameterization of the discrete space, which has shown promising results in settings like this one: https://realworldml.github.io/files/cr/paper22.pdf (code here: https://github.com/facebookresearch/bo_pr). @sdaulton is planning to upstream this work into BoTorch as well (not sure about the precise timeline for that).

File "/export1/Workspace/liqiang/tool/anaconda3/envs/socgen-nextorch/lib/python3.7/site-packages/botorch/optim/optimize.py", line 693, in _filter_infeasible_new for (inds, func) in nonlinear_equality_constraints: TypeError: cannot unpack non-iterable function object So that means function call of nonlinear_equality_constraints cannot be reached ? How can I solve this? could you please give me some tips? I do in the same way as optimize_acqf does. It also can aceept non-linear-constraints.

What exactly does your implementation look like? It seems like you're passing in a list of functions but then are trying to unpack that into an (indices, function) tuple. Something seems to be wrong with your code here.
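For example, the error would occur if the constraints are passed as a plain list of callables while the modified filter iterates over (indices, callable) pairs; a hypothetical illustration (my_constraint is just a placeholder, and the expected structure depends entirely on how your _filter_infeasible_new is written):

import torch

my_constraint = lambda x: x[4] - x[5]  # placeholder constraint callable

# What the traceback suggests was passed: a bare list of callables ...
nonlinear_equality_constraints = [my_constraint]

# ... which fails in a loop like `for (inds, func) in nonlinear_equality_constraints:`,
# since a function object cannot be unpacked into a tuple. If the modified filter
# expects (indices, callable) pairs, each entry would instead need to look like:
nonlinear_equality_constraints = [(torch.tensor([4, 5]), my_constraint)]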

The amount of memory the program is asking to allocate is just too big. I am wondering which factor is causing this error: is it because the design space is too large, or should I change some of the default parameters?

It's hard to say without trying to run the code that you have (not sure how the inequality constraints are handled exactly). If there is an enumeration of discrete options then it will be problematic to just do all of that in batch mode and instead some chunking may be necessary to avoid memory blowups.
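As a concrete (if simplistic) sketch of such chunking, assuming acq_func and the n x d tensor choices from the discussion above, one could evaluate the acquisition values in pieces rather than in a single batched call:

import torch

def evaluate_acqf_in_chunks(acq_func, choices, chunk_size=2048):
    # Evaluate acq_func over a large discrete candidate set without building one
    # giant batch; acquisition functions expect `batch_shape x q x d` input, so
    # each d-dim candidate is unsqueezed into a q=1 "batch".
    values = []
    with torch.no_grad():
        for chunk in choices.split(chunk_size, dim=0):
            values.append(acq_func(chunk.unsqueeze(-2)))
    return torch.cat(values, dim=0)

# best_candidate = choices[evaluate_acqf_in_chunks(acq_func, choices).argmax()]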

Balandat · Jul 31 '22 19:07