Add `.predict` function to optimizer
This introduces a user-facing .predict() method to the optimizer to simplify making predictions with the GP. I think that with the support for optimization over non-float parameters, calling .predict() on the GP now requires a bit too much parameter transformation and mangling. Rather than forcing users to handle this complexity themselves, this change provides an interface that handles all necessary conversions internally.
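To illustrate the manual workflow the new method replaces, here is a minimal sketch of the equivalent steps using scikit-learn's `GaussianProcessRegressor` directly (the data, kernel choice, and array layout are illustrative assumptions, not the actual bayes_opt internals):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Observations the optimizer would have registered (illustrative data).
X = np.array([[1.0, 2.0], [3.0, 1.5], [0.5, 0.5]])  # parameter vectors
y = np.array([0.8, 1.9, 0.2])                        # observed targets

# Without a .predict() wrapper, users fit the GP and shape inputs by hand.
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
gp.fit(X, y)

# A single parameter dict must be converted to a 2D array manually.
params = {"x": 2.0, "y": 1.0}
x_query = np.array([[params["x"], params["y"]]])

mean, std = gp.predict(x_query, return_std=True)
print(mean.shape, std.shape)  # one prediction, one uncertainty estimate
```

With non-float parameters in the mix, this manual conversion gets harder still, which is the complexity the new method hides.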
Summary by CodeRabbit

- New Features
  - Added a `predict()` method to generate predictions for specified parameters with optional standard deviation or covariance outputs.
- Documentation
  - Updated `maximize()` documentation to reference the new prediction capability.
Codecov Report
:x: Patch coverage is 92.59259% with 2 lines in your changes missing coverage. Please review.
:white_check_mark: Project coverage is 97.77%. Comparing base (c410d51) to head (1e51b28).
| Files with missing lines | Patch % | Lines |
|---|---|---|
| bayes_opt/bayesian_optimization.py | 92.59% | 2 Missing :warning: |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##           master     #593      +/-   ##
==========================================
- Coverage   97.89%   97.77%   -0.12%
==========================================
  Files          10       10
  Lines        1185     1212      +27
==========================================
+ Hits         1160     1185      +25
- Misses         25       27       +2
```
Walkthrough
A new predict() method was added to BayesianOptimization for computing predictions on given parameters. It accepts single or multiple parameter inputs, optionally fits the GP, and returns mean with optional uncertainty estimates. Documentation in maximize() was updated. Test utilities including a custom parameter class and triangle area function were added.
Changes

| Cohort / File(s) | Summary |
|---|---|
| Core Feature `bayes_opt/bayesian_optimization.py` | Added public method `predict(params, return_std=False, return_cov=False, fit_gp=True)` that accepts a single dict or a list of dicts, optionally fits the GP, normalizes inputs to a 2D array, and returns predictions with optional std/covariance. Updated `maximize()` documentation to reference the new prediction capability. |
| Test Utilities `tests/test_bayesian_optimization.py` | Added helper class `FixedPerimeterTriangleParameter` (subclass of `BayesParameter`) for continuous sampling via Dirichlet distributions with perimeter constraints. Added standalone function `area_of_triangle(sides)` computing area via Heron's formula. |
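For context on these test utilities, a hedged sketch of the two constructions they describe follows; the function name mirrors the summary, but the actual test code in the PR may differ in detail:

```python
import numpy as np

def area_of_triangle(sides):
    """Heron's formula: s = (a + b + c) / 2, area = sqrt(s(s-a)(s-b)(s-c))."""
    a, b, c = sides
    s = (a + b + c) / 2
    return np.sqrt(s * (s - a) * (s - b) * (s - c))

# Dirichlet sampling yields positive side lengths summing to a fixed
# perimeter, which is the constraint the parameter class summary describes.
# (Raw Dirichlet draws can still violate the triangle inequality; the real
# parameter class presumably handles that, so this is only a sketch.)
rng = np.random.default_rng(42)
perimeter = 1.0
sides = rng.dirichlet([1.0, 1.0, 1.0]) * perimeter

print(area_of_triangle([3.0, 4.0, 5.0]))  # classic 3-4-5 triangle, area 6.0
```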
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant BO as BayesianOptimization
    participant GP as GaussianProcess
    User->>BO: predict(params, fit_gp=True)
    alt fit_gp is True
        BO->>BO: Check if observations exist
        alt No observations
            BO-->>User: RuntimeError
        else Observations exist
            BO->>GP: Fit GP via _fit_gp()
            GP-->>BO: GP fitted
        end
    end
    BO->>BO: Normalize params to 2D array
    BO->>GP: Predict mean (and std/cov if requested)
    GP-->>BO: Predictions returned
    BO->>BO: Denormalize output shape
    alt return_std or return_cov
        BO-->>User: (mean, uncertainty)
    else Neither requested
        BO-->>User: mean only
    end
```
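The "normalize params to 2D array" step in the diagram can be sketched as follows; this simplified stand-in assumes float-only parameters and a hypothetical helper name, whereas the real method also handles non-float parameter types:

```python
import numpy as np

def params_to_array(params, keys):
    """Accept a single dict or a list of dicts; return an (n, d) float array."""
    if isinstance(params, dict):
        params = [params]  # promote a single input to a batch of one
    return np.array([[p[k] for k in keys] for p in params], dtype=float)

single = params_to_array({"x": 1.0, "y": 2.0}, keys=["x", "y"])
batch = params_to_array([{"x": 1.0, "y": 2.0}, {"x": 3.0, "y": 4.0}], ["x", "y"])
print(single.shape, batch.shape)  # (1, 2) (2, 2)
```

Tracking whether the input was a single dict is also what lets the "denormalize output shape" step return a scalar-shaped result for single inputs.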
Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

- Input validation & normalization logic: Verify parameter conversion from single dict/list of dicts to 2D array and shape consistency
- GP fitting pathway: Ensure `fit_gp` conditional logic correctly delegates to `_fit_gp()` and handles edge cases (zero observations)
- Return shape handling: Validate that output shapes differ correctly between single vs. multiple inputs and with/without uncertainty estimates
- Test coverage: Review test utilities and their integration with the new prediction functionality
Poem

🐰 A prediction method hops in with grace,
Parameters fed, predictions embrace,
Uncertainty whispered with std or cov,
Triangles tested with Heron's beloved,
The Bayesian warren grows strong, here's the trove! 🥕✨
Pre-merge checks and finishing touches

✅ Passed checks (3 passed)

| Check name | Status | Explanation |
|---|---|---|
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title directly and accurately describes the main change: adding a new .predict() method to the BayesianOptimization optimizer class. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |