probabilistic reparameterization
Summary: Probabilistic reparameterization
Differential Revision: D41629217
This pull request was exported from Phabricator. Differential Revision: D41629217
Codecov Report
Merging #1533 (8be7d28) into main (63dd0cd) will decrease coverage by 2.71%. The diff coverage is 12.71%.
:exclamation: Current head 8be7d28 differs from pull request most recent head 7ce1389. Consider uploading reports for the commit 7ce1389 to get more accurate results.
@@ Coverage Diff @@
## main #1533 +/- ##
===========================================
- Coverage 100.00% 97.29% -2.71%
===========================================
Files 169 171 +2
Lines 14518 14949 +431
===========================================
+ Hits 14518 14544 +26
- Misses 0 405 +405
| Impacted Files | Coverage Δ | |
|---|---|---|
| ...ch/acquisition/probabilistic_reparameterization.py | 0.00% <0.00%> (ø) | |
| botorch/models/transforms/input.py | 67.79% <5.42%> (-32.21%) | :arrow_down: |
| botorch/models/transforms/factory.py | 71.79% <8.33%> (-28.21%) | :arrow_down: |
| botorch/acquisition/fixed_feature.py | 100.00% <100.00%> (ø) | |
| botorch/acquisition/penalized.py | 100.00% <100.00%> (ø) | |
| botorch/acquisition/proximal.py | 100.00% <100.00%> (ø) | |
| botorch/acquisition/utils.py | 100.00% <100.00%> (ø) | |
| botorch/acquisition/wrapper.py | 100.00% <100.00%> (ø) | |
Hello @sdaulton. Thank you for providing the notebook. However, it does not seem to work for me (I'm using commit e313e4c). I get an error in the BO loop at the line
new_x_pr, new_obj_pr = optimize_acqf_pr_and_get_observation(
ei_pr, analytic=False
)
which gives `IndexError: index 0 is out of bounds for dimension 0 with size 0`. The error comes from line 351 of botorch/acquisition/probabilistic_reparameterization.py. I set a breakpoint at that line and found that `unnormalized_X` has shape (1, 17), but both `self.integer_indices` and `self.integer_bounds` are empty tensors (`torch.tensor([])`), which is what causes the index error.
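For reference, the failure mode can be reproduced in isolation. This is a minimal, illustrative sketch (not the actual BoTorch code), assuming the index error comes from element 0 of the empty indices tensor as observed in the debugger:

```python
import torch

# Minimal repro of the reported IndexError (illustrative, not the BoTorch code):
# indexing element 0 of an empty tensor fails with the same message.
unnormalized_X = torch.rand(1, 17)  # shape observed in the debugger
integer_indices = torch.tensor([])  # empty, as observed

try:
    integer_indices[0]  # would be used to index into unnormalized_X
except IndexError as e:
    print(e)  # index 0 is out of bounds for dimension 0 with size 0
```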
I tried adding `integer_indices=list(categorical_features.keys())` in `optimize_acqf_pr_and_get_observation`, so that `pr_acq_func` is given by
pr_acq_func = MCProbabilisticReparameterization(
acq_function=acq_func,
one_hot_bounds=one_hot_bounds,
categorical_features=categorical_features,
integer_indices=list(categorical_features.keys()),  # new line here
batch_limit=128,
mc_samples=4 if SMOKE_TEST else 128,
)
This seems to let the code progress, but then a different error is triggered on line 366 of botorch/acquisition/probabilistic_reparameterization.py: `RuntimeError: The expanded size of the tensor (1) must match the existing size (3) at non-singleton dimension 1. Target sizes: [1, 1]. Tensor sizes: [3]`.
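That second error can also be reproduced in isolation; this is an illustrative sketch (not the BoTorch code), assuming a size-3 tensor is being expanded to target sizes [1, 1]:

```python
import torch

# Illustrative repro of the second error (not the BoTorch code): expanding a
# size-3 tensor to target sizes [1, 1] fails at non-singleton dimension 1,
# because expand() can only broadcast dimensions of size 1.
t = torch.rand(3)
try:
    t.expand(1, 1)
except RuntimeError as e:
    print(e)  # The expanded size of the tensor (1) must match the existing size (3) ...
```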
Am I using the correct commit? Are you able to run the notebook without errors?
This pull request was exported from Phabricator. Differential Revision: D41629217
@hkenlay Sorry about that. I forgot to include the changes in there for categoricals. Let me know if you still have issues on 7ce1389
Works great, and really exciting work, thanks @sdaulton.
Hi @sdaulton!
Thank you for your pull request.
We require contributors to sign our Contributor License Agreement, and yours needs attention.
You currently have a record in our system, but the CLA is no longer valid, and will need to be resubmitted.
Process
In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.
Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.
If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Hello @sdaulton and @Balandat, thanks for this very exciting piece of work! Is there a timeline for when this PR might be merged? :)
Not at the moment. It's been fairly low priority. If this is of interest, I can work on prioritizing getting it ready.
Thanks @sdaulton. I recently went through the paper, fantastic piece of work! I'd be very interested to try it out with botorch, especially to check out in practice how the new approach results in a speed-up over the brute-force combinatorical approach for mixed search spaces where the discrete parameter space contains several dimensions and levels.
I recently read your interesting work, @sdaulton, and am trying to use it for parallel chemical reaction optimization. In the notebook you provided, I changed q=1 to q=2 in the function `optimize_acqf_pr_and_get_observation`:
candidates, _ = optimize_acqf(
acq_function=pr_acq_func,
bounds=standard_bounds,
q=2,
num_restarts=NUM_RESTARTS,
raw_samples=RAW_SAMPLES,  # used for initialization heuristic
options={
"batch_limit": 5,
"maxiter": 200,
"rel_tol": float("-inf"), # run for a full 200 steps
},
# use Adam for Monte Carlo PR
gen_candidates=gen_candidates_torch if not analytic else gen_candidates_scipy,
)
which raises `AssertionError: Expected X to be batch_shape x q=1 x d, but got X with shape torch.Size([1024, 128, 2, 17])`.
Did I make a mistake? Have you tested a case where the batch size is > 1 and the search space is mixed?
Thanks!
Hi @Ruan-Yixiang, we didn't test q>1, so it is quite possible that this PR needs updating to support q>1. The MC formulation of PR could get pretty expensive if one is also using an MC acquisition function (e.g. for q>1). It would be interesting to test out PR in the batch setting and determine ways of speeding things up.
Thanks for your reply, @sdaulton! I may keep experimenting with it. By the way, I measured the wall time of MC PR. Strangely, `gen_candidates_torch` on a 4090 GPU is much slower (about 3x) than `gen_candidates_scipy` on an i9-13900K CPU. Do you know why? And have you compared the wall time of PR against BoTorch's `optimize_acqf_mixed`, which works by enumerating combinations of the categorical variables?
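One thing worth checking when comparing CPU and GPU wall times is that CUDA's asynchronous kernel execution is accounted for, otherwise timings can be misleading in either direction. A simple timing helper (names are illustrative, not from BoTorch) might look like:

```python
import time

import torch


def wall_time(fn, repeats=3):
    """Return the best wall-clock time of `fn` over `repeats` runs.

    Synchronizes CUDA before and after each run so asynchronous GPU kernels
    are neither excluded from nor leaked into the measurement.
    """
    best = float("inf")
    for _ in range(repeats):
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        t0 = time.perf_counter()
        fn()
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        best = min(best, time.perf_counter() - t0)
    return best


# Example: time a small matmul; in practice you would wrap the call to
# optimize_acqf with gen_candidates_torch vs. gen_candidates_scipy instead.
elapsed = wall_time(lambda: torch.rand(256, 256) @ torch.rand(256, 256))
```

Note that an optimizer loop made of many small kernels can easily be launch-overhead bound on a GPU, which is one plausible reason a fast CPU comes out ahead here.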
@sdaulton Hi, may I ask if this PR is in progress? :) Again, many thanks for your efforts!
@Ruan-Yixiang

> which raises `AssertionError: Expected X to be batch_shape x q=1 x d, but got X with shape torch.Size([1024, 128, 2, 17])`.
looks like that error is due to using an analytic acquisition function with q > 1. When q > 1, you need to use an `MCAcquisitionFunction` instead. Replacing `ExpectedImprovement` with `qExpectedImprovement` should resolve the error.
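To illustrate the shape convention behind that assertion, here is a mock sketch (these are not the real BoTorch classes, just stand-ins for the two styles): analytic acquisition functions evaluate inputs of shape `batch_shape x q=1 x d`, while MC acquisition functions reduce over the q dimension and so support q > 1.

```python
import torch

# Mock stand-ins for the two acquisition-function styles (not the real
# BoTorch classes); they only illustrate why analytic acqfs assert q == 1.
def analytic_style(X):
    # Analytic acqfs expect `batch_shape x q=1 x d`
    assert X.shape[-2] == 1, (
        f"Expected X to be `batch_shape x q=1 x d`, but got X with shape {tuple(X.shape)}."
    )
    return X.squeeze(-2).sum(-1)

def mc_style(X):
    # MC acqfs reduce over the q dimension, so q > 1 is supported
    return X.sum(-1).amax(dim=-1)

X = torch.rand(4, 2, 17)  # q = 2, as in the reported error
mc_style(X)               # fine: returns a value per batch element
try:
    analytic_style(X)     # raises AssertionError, mirroring the report
except AssertionError:
    pass
```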
But as @sdaulton mentioned, optimization doesn't seem to work well with q > 1 on my end either, so that's something to be mindful of when using it.
Hello! Is there possibly a timeline for this feature to be available in a future release?
Hi @dsanyal,
There are quite a few merge conflicts that need to be resolved and tests that need to be written. This is relatively low priority at the moment, but if anyone from the community wants to clean this up, that would be appreciated. Otherwise, I'll try to clean it up when time permits.
Hi @sdaulton,
I'd be keen to clean this up; it's a very cool piece of work, and I'd love to help get it into BoTorch! I've already gone through and resolved all of the merge conflicts, and I'm happy to write some tests. If you have any thoughts about a testing strategy, let me know; otherwise, I will follow the pattern of the other tests in test/acquisition.
Should I make a new PR, or would it be best to keep the discussion to this PR?
Also, I see there's some discussion about the q>1 case. I personally think extending to batched candidates should be left to a separate PR, but I'm happy to take a look and see if I can resolve that issue as well.
Thanks!
My branch is at https://github.com/TobyBoyne/botorch/tree/prob-reparam.
Hi @TobyBoyne, thanks a lot for picking this up! Great to see that you've already made some progress on this.
While I think it would make sense to keep the discussion on this PR, the logistics will probably be easier if you open a new one. You can point back to this PR in the new one so people can find the earlier discussion.
I agree that the case q>1 is fine to handle in a separate PR.
@sdaulton you have a better sense of the outstanding TODOs here as well as the questions about the testing strategy - any input from your end?