[GENERAL SUPPORT]: Handling Experimental Constraints in MOBO
Question
Hello, I’m running high-throughput multi-objective experiments with 10 parameters in total. Due to experimental constraints, each batch can only change 2 parameters, while the others must remain fixed.
After each batch, I update the Bayesian optimization model and may choose a different pair of parameters to optimize next. However, I don’t know in advance how many batches will be required — I can only update the model iteratively after each round.
Could this workflow be supported by the existing optimization framework? Any suggestions or references would be greatly appreciated.
Please provide any relevant code snippet if applicable.
Code of Conduct
- [x] I agree to follow Ax's Code of Conduct
Hello there! If I understand correctly, your setup is as follows:
- You have a search space of 10 parameters [p1, p2, ..., p10]
- You can only update 2 parameters at a time, while the rest must remain fixed to their starting values
If that's correct, this is something you can achieve using the `fixed_parameters` argument of `get_next_trials`. You can fix the 8 parameters you don't want to optimize during that iteration, and let Ax determine the best values to test for the 2 parameters being optimized that round. Because the model is updated after every batch, you can simply repeat this call each round with a different `fixed_parameters` dict; you don't need to know the total number of batches in advance.
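A minimal sketch of the per-round bookkeeping for this workflow, assuming 10 parameters named `p1`..`p10`. The Ax calls are shown only as illustrative comments; names like `client` and `batch_size`, and the exact `get_next_trials` signature, depend on your Ax version and setup:

```python
# Sketch: hold 8 of 10 parameters fixed each round, letting Ax optimize
# the remaining pair. Parameter names p1..p10 are illustrative.

ALL_PARAMS = [f"p{i}" for i in range(1, 11)]


def fixed_parameters_for_round(free_pair, current_values):
    """Return the dict of the 8 parameters to hold fixed this round.

    free_pair: the 2 parameter names Ax should optimize this batch.
    current_values: the current/starting value of every parameter.
    """
    return {
        name: current_values[name]
        for name in ALL_PARAMS
        if name not in free_pair
    }


current_values = {name: 0.5 for name in ALL_PARAMS}  # example starting point

# Round 1: optimize (p1, p2), hold the other 8 parameters fixed.
fixed = fixed_parameters_for_round(("p1", "p2"), current_values)

# With the Ax client this dict would be passed through, e.g. (illustrative):
#   trials = client.get_next_trials(max_trials=batch_size,
#                                   fixed_parameters=fixed)
#   ... run the batch, report results back to Ax ...
# After the model updates, pick a different pair for the next round and
# rebuild the fixed_parameters dict the same way.
```

Each round you only change which pair is passed as `free_pair`; the helper rebuilds the 8-entry fixed dict from whatever the current values are.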