[Bug] `batch_initial_conditions` shouldn't have to satisfy `nonlinear_inequality_constraints`
🐛 Bug
When using `nonlinear_inequality_constraints` in `optimize_acqf`, you need to set `batch_initial_conditions`, and these ICs need to respect the constraints. This seems unnecessary: SLSQP is capable of starting from an infeasible IC. If the only reason `batch_initial_conditions` needs to be set is so that the user is forced to provide a feasible IC, then this requirement could be relaxed too.
I imagine the issue can also be solved by using a `DeterministicModel` with an outcome constraint, but this does not work with analytic acquisition functions (a sketch of that route is below the permalink).
https://github.com/pytorch/botorch/blob/92d73e41220316235772d1783b77f8aea52706ea/botorch/optim/parameter_constraints.py#L593-L596
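For concreteness, the outcome-constraint route might look roughly like this. This is an untested sketch, reusing `constraint`, `train_y`, `bounds`, and the fitted `model` from the reproduction snippet below; it swaps the analytic UCB for the Monte Carlo `qLogExpectedImprovement`, whose `constraints` argument treats values <= 0 as feasible:

```python
import torch
from botorch.acquisition import qLogExpectedImprovement
from botorch.acquisition.objective import LinearMCObjective
from botorch.models.deterministic import GenericDeterministicModel
from botorch.models.model import ModelList
from botorch.optim import optimize_acqf

# Wrap the known constraint as a deterministic "model" next to the fitted GP.
con_model = GenericDeterministicModel(lambda X: constraint(X).unsqueeze(-1))
model_list = ModelList(model, con_model)

acqf = qLogExpectedImprovement(
    model=model_list,
    best_f=train_y.max(),
    # Output 0 is the objective; output 1 is the constraint.
    objective=LinearMCObjective(weights=torch.tensor([1.0, 0.0], dtype=torch.float64)),
    # Outcome-constraint convention: feasible iff the callable is <= 0.
    constraints=[lambda Z: Z[..., 1]],
)
candidates, value = optimize_acqf(
    acqf,
    bounds.to(torch.float64),  # match the model's dtype
    q=1,
    num_restarts=8,
    raw_samples=64,
)
```

No feasible `batch_initial_conditions` are required here, since the constraint is handled inside the acquisition function rather than by the optimizer.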
To reproduce
**Code snippet to reproduce**
```python
import torch
from botorch.acquisition import UpperConfidenceBound
from botorch.fit import fit_gpytorch_mll
from botorch.models import SingleTaskGP
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood


def objective(x):
    return (x[..., 0] - 0.5) ** 2 + x[..., 0]


def constraint(x):
    return (x[..., 0] - 0.5) ** 2 * 50 - 2


n_train = 64
device = torch.device("cpu")
train_x = torch.rand(n_train, 1, dtype=torch.float64, device=device)
train_y = objective(train_x)
con_y = constraint(train_x)  # constraint values at the training points
bounds = torch.vstack([torch.zeros(1, 1), torch.ones(1, 1)])

model = SingleTaskGP(train_x, train_y[:, None])
mll = ExactMarginalLogLikelihood(model.likelihood, model)
_ = fit_gpytorch_mll(mll)

acqf = UpperConfidenceBound(model, beta=4)
initial_condition = 0.33
candidates, value = optimize_acqf(
    acqf,
    bounds,
    q=1,
    num_restarts=1,
    raw_samples=1,
    nonlinear_inequality_constraints=[
        # optimize_acqf convention: positive values indicate feasibility.
        (lambda x: -constraint(x), True),
    ],
    batch_initial_conditions=torch.tensor([[[initial_condition]]]),
)
```
**Stack trace/error message**
```
ValueError: `batch_initial_conditions` must satisfy the non-linear inequality constraints.
```
Expected Behavior
If the exception is commented out, the same candidate is found regardless of whether `initial_condition` is feasible or infeasible. This demonstrates that, in this case, the exception prevents use cases where a feasible region is hard to find and you want the optimiser to find it for you.
System information
- BoTorch Version 0.12.0
- GPyTorch Version 1.13
- PyTorch Version 2.5.1+cu124
- Computer OS: Linux
cc @dme65, who introduced this check. I believe it was in a context where we could not simply use SLSQP, so making sure that the ICs satisfied the constraints was necessary. I guess we could potentially make this a warning in cases where we use optimizers that can handle infeasible ICs.
I just spotted this explanation in the docstring:
https://github.com/pytorch/botorch/blob/92d73e41220316235772d1783b77f8aea52706ea/botorch/optim/parameter_constraints.py#L568-L569
So the motivation wasn't necessarily to enforce a feasible starting point; it was to ensure the returned candidate is feasible even if the optimiser fails.
In this case, it probably makes sense to raise the warning (or exception) only in the case that the optimiser fails to find a feasible point.
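A minimal sketch of that idea (the helper and where it would be called are hypothetical, not existing BoTorch API): run the optimizer first, then flag the returned candidates only if they actually violate the constraints.

```python
import warnings


def warn_if_infeasible(candidates, nonlinear_inequality_constraints, tol=1e-8):
    """Hypothetical post-hoc check: flag the *returned* candidates if they
    violate the constraints, instead of rejecting infeasible ICs up front."""
    for constraint_callable, _is_intrapoint in nonlinear_inequality_constraints:
        # optimize_acqf convention: values >= 0 indicate feasibility.
        if (constraint_callable(candidates) < -tol).any():
            warnings.warn(
                "Optimization returned a candidate that violates the "
                "nonlinear inequality constraints.",
                RuntimeWarning,
            )
```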
Side-note: it might be confusing that `nonlinear_inequality_constraints` are feasible when the indicator is positive, but `outcome_constraint` has the opposite convention.
> In this case, it probably makes sense to raise the warning (or exception) only in the case that the optimiser fails to find a feasible point.
That makes sense to me.
> Side-note: it might be confusing that `nonlinear_inequality_constraints` are feasible when the indicator is positive but `outcome_constraint` has the opposite convention.
Yeah, great point. And we could use better documentation on how and where to define constraints.
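To make the two conventions concrete, here is an illustrative contrast for a toy black-box `g` where the intended feasible region is `g(x) <= 0` (names are for illustration only):

```python
def g(x):
    # Toy black-box constraint; the intended feasible region is g(x) <= 0.
    return (x[..., 0] - 0.5) ** 2 * 50 - 2

# optimize_acqf's nonlinear_inequality_constraints treat values >= 0 as
# feasible, so the sign must be flipped:
nonlinear_inequality_constraints = [(lambda x: -g(x), True)]

# Outcome constraints on MC acquisition functions treat values <= 0 as
# feasible, so no flip is needed (assuming output 1 of the model predicts g):
constraints = [lambda Z: Z[..., 1]]
```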
Another case where the same issue is coming up: https://github.com/meta-pytorch/botorch/discussions/3074