
FullyBayesianMOO memory usage

Open Kh-im opened this issue 6 months ago • 5 comments

Hello,

I use FullyBayesianMOO in the Service API with 5 inputs and 4 objectives. After ~40 trials, memory usage is already at 20 GB and the computation time is also very high. Is this normal, and can anything be done to reduce both?

Thanks,

Kh-im avatar Dec 15 '23 07:12 Kh-im

You can use approximate HV, which will be a lot faster, but I highly recommend not optimizing so many objectives: the Pareto surface in 4 dimensions is quite large, and it will require a huge budget to get a result that isn't much better than what you could infer via random search. Is it possible to instead set constraints, or explore marginal tradeoffs between two of the metrics? Another alternative is to use preference exploration or some scalarization to target a particular desired tradeoff.
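For example, one could keep two objectives and express the others as outcome constraints via the Service API. A minimal sketch, where the parameter names, metric names, and bounds are all illustrative assumptions:

from ax.service.ax_client import AxClient, ObjectiveProperties

ax_client = AxClient()
ax_client.create_experiment(
    name="moo_with_constraints",
    parameters=[
        {"name": f"x{i}", "type": "range", "bounds": [0.0, 1.0]}
        for i in range(5)
    ],
    # Keep only two metrics as objectives...
    objectives={
        "obj_a": ObjectiveProperties(minimize=False),
        "obj_b": ObjectiveProperties(minimize=False),
    },
    # ...and demote the rest to outcome constraints.
    outcome_constraints=["metric_c >= 0.5", "metric_d <= 10.0"],
)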

Best E


eytan avatar Dec 15 '23 16:12 eytan

I echo what @eytan mentioned about reducing the number of objectives.

To speed things up, you can reduce the number of MC samples used to approximate the expectation over the GP posterior in the acquisition function. You can probably even drop this down to 1 without severe performance degradation.

You can do this by setting model_gen_kwargs in the GenerationStep (for FullyBayesianMOO) as:

model_gen_kwargs={
    "model_gen_options": {
        "acquisition_function_kwargs": {
            # set this however you'd like
            "mc_samples": 1,
        },
    },
}
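For context, here is a minimal sketch of a full generation strategy carrying this setting (the Sobol step, trial counts, and client wiring are illustrative assumptions, not a prescription):

from ax.modelbridge.generation_strategy import GenerationStep, GenerationStrategy
from ax.modelbridge.registry import Models
from ax.service.ax_client import AxClient

gs = GenerationStrategy(
    steps=[
        # Initial quasi-random exploration.
        GenerationStep(model=Models.SOBOL, num_trials=8),
        # Fully Bayesian MOO for all remaining trials, with fewer MC samples.
        GenerationStep(
            model=Models.FULLYBAYESIANMOO,
            num_trials=-1,
            model_gen_kwargs={
                "model_gen_options": {
                    "acquisition_function_kwargs": {"mc_samples": 1},
                },
            },
        ),
    ]
)

# Hand the strategy to the Service API client.
ax_client = AxClient(generation_strategy=gs)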

In addition to "mc_samples", you can also pass "alpha" to use approximate HV (as @eytan mentioned), but I would recommend starting by reducing the number of MC samples, not alpha. Lastly, if HV proves too slow, you can fall back on random scalarizations (qNParEGO) by passing "random_scalarization": True in the same way; that said, a problem of this size should be solvable with HV-based methods, and those should give you better sample efficiency.
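For concreteness, the same model_gen_options structure can carry those options too (a sketch; the alpha value shown is illustrative, not a recommendation):

model_gen_kwargs = {
    "model_gen_options": {
        "acquisition_function_kwargs": {
            "mc_samples": 1,
            # Uncomment to use approximate HV (larger alpha = faster but coarser):
            # "alpha": 0.01,
            # Or fall back to random scalarizations (qNParEGO):
            # "random_scalarization": True,
        },
    },
}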

sdaulton avatar Dec 15 '23 17:12 sdaulton

I am also having this issue, not only with fully Bayesian MOO but also with single-objective BO!

NoobIamNoob avatar Dec 16 '23 17:12 NoobIamNoob

Thanks for your answer, I understand.

I can reformulate it with two objectives and two outcome constraints. Is this OK?

Is there a research paper studying the different options when HV is needed with more than two (3-6) objectives?

Thanks again for your help

Kh-im avatar Dec 17 '23 08:12 Kh-im

Appendix F.4 of https://arxiv.org/pdf/2006.05078.pdf includes some discussion. Random scalarizations are common for >5 objectives; approximate box decompositions can be helpful for 4-5 objectives.

sdaulton avatar Dec 18 '23 18:12 sdaulton