Restore sisyphus integration
Proposed change
Reverts #124742 and updates the library instead.
Changelog: https://github.com/jkeljo/sisyphus-control/compare/v3.1.3...v3.1.4
Release is pending: https://github.com/jkeljo/sisyphus-control/pull/8#issuecomment-2313893689
Type of change
- [x] Dependency upgrade
- [ ] Bugfix (non-breaking change which fixes an issue)
- [ ] New integration (thank you!)
- [ ] New feature (which adds functionality to an existing integration)
- [ ] Deprecation (breaking change to happen in the future)
- [ ] Breaking change (fix/feature causing existing functionality to break)
- [ ] Code quality improvements to existing code or addition of tests
Additional information
- This PR fixes or closes issue: fixes #
- This PR is related to issue:
- Link to documentation pull request:
Checklist
- [ ] The code change is tested and works locally.
- [ ] Local tests pass. Your PR cannot be merged unless tests pass.
- [ ] There is no commented out code in this PR.
- [ ] I have followed the development checklist
- [ ] I have followed the perfect PR recommendations
- [ ] The code has been formatted using Ruff (ruff format homeassistant tests)
- [ ] Tests have been added to verify that the new code works.
If user exposed functionality or configuration variables are added/changed:
- [ ] Documentation added/updated for www.home-assistant.io
If the code communicates with devices, web services, or third-party tools:
- [ ] The manifest file has all fields filled out correctly.
      Updated and included derived files by running: python3 -m script.hassfest.
- [ ] New or updated dependencies have been added to requirements_all.txt.
      Updated by running python3 -m script.gen_requirements_all.
- [ ] For the updated dependencies - a link to the changelog, or at minimum a diff between library versions is added to the PR description.
To help with the load of incoming pull requests:
- [ ] I have reviewed two other open pull requests in this repository.
Hi, this is my first time contributing. The bug was well described, and I reproduced the issue using the provided testing code.
As already suggested, the problem lies in how _y_train_std is handled when normalize_y=False. Here’s the code in sklearn/gaussian_process/_gpr.py:
```python
if self.normalize_y:
    self._y_train_mean = np.mean(y, axis=0)
    self._y_train_std = _handle_zeros_in_scale(np.std(y, axis=0), copy=False)
    y = (y - self._y_train_mean) / self._y_train_std
else:
    shape_y_stats = (y.shape[1],) if y.ndim == 2 else 1
    self._y_train_mean = np.zeros(shape=shape_y_stats)
    self._y_train_std = np.ones(shape=shape_y_stats)
```
The code should compute _y_train_std from the training data even when normalize_y=False (as suggested). I applied a quick fix with this change:
```python
# Normalize target value
self._y_train_mean = np.mean(y, axis=0)
self._y_train_std = _handle_zeros_in_scale(np.std(y, axis=0), copy=False)
if self.normalize_y:
    # Remove mean and make unit variance
    y = (y - self._y_train_mean) / self._y_train_std
else:
    self._y_train_mean = np.zeros_like(self._y_train_mean)
```
After this quick fix, the test code now gives the correct result. I hope this is helpful!
The code I posted fixes the reported bug: running the code that reproduces the error, _y_train_std now reflects the true scale of the targets, even when normalization is disabled.
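For illustration, here is a minimal sketch of that kind of check (the data and kernel are placeholders I made up, not the reproduction snippet from the issue):

```python
# Minimal sketch: inspect the fitted statistics directly.
# Illustrative data only; the issue has its own reproduction snippet.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.linspace(0, 10, 10).reshape(-1, 1)
y = np.sin(X).ravel()

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), optimizer=None, normalize_y=False)
gpr.fit(X, 2 * y)

# Before the fix this is an array of ones; with the quick fix it holds the
# actual standard deviation of the training targets (about 2 * np.std(y)).
print(gpr._y_train_std)
```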
However, this change causes some test failures because the predict function in GaussianProcessRegressor currently assumes normalization is always applied when undoing normalization during prediction.
I believe that, to fully resolve the issue, the predict function should be modified to conditionally apply the normalization logic based on the normalize_y flag.
I will see if I manage to do that in the next few days.
Thank you @Lollitor for your time. I agree with your proposal:
```python
y_mean = self._y_train_std * y_mean + self._y_train_mean
```
should be replaced by
```python
if self.normalize_y:
    y_mean = self._y_train_std * y_mean + self._y_train_mean
```
I'm also in favor of replacing
```python
# Normalize target value
if self.normalize_y:
    self._y_train_mean = np.mean(y, axis=0)
    self._y_train_std = _handle_zeros_in_scale(np.std(y, axis=0), copy=False)
    # Remove mean and make unit variance
    y = (y - self._y_train_mean) / self._y_train_std
else:
    shape_y_stats = (y.shape[1],) if y.ndim == 2 else 1
    self._y_train_mean = np.zeros(shape=shape_y_stats)
    self._y_train_std = np.ones(shape=shape_y_stats)
```
by
```python
self._y_train_mean = np.mean(y, axis=0)
self._y_train_std = _handle_zeros_in_scale(np.std(y, axis=0), copy=False)
# Normalize target value
if self.normalize_y:
    # Remove mean and make unit variance
    y = (y - self._y_train_mean) / self._y_train_std
```
I agree that the behavior described in the description looks like a bug.
I find it surprising because normalize_y=False is the default value of that parameter, which means that most users of GaussianProcessRegressor would be impacted by this and get incorrect predictions as a result.
@Lollitor if you have time please open a PR to fix this and feel free to ping me on it for review.
TODO in the PR:
- make sure that existing tests pass with the fix, or find out if they need an update (and explain it in a comment in the PR);
- add a new non-regression test based on the code snippet provided in this issue (a rough sketch of what such a test could look like follows this list);
- check the impact of this fix on the results (e.g. plots) when running all the examples that use the GaussianProcessRegressor. Those can be found with:
  $ git grep GaussianProcessRegressor examples/
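Regarding the second point, here is a rough sketch of what such a non-regression test could look like (hypothetical test name and illustrative data; it asserts the post-fix behavior, so it fails on the current code, which is the point of a non-regression test):

```python
# Hypothetical non-regression test sketch; the actual test added in the PR may differ.
import numpy as np
from numpy.testing import assert_allclose
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF


def test_predict_std_scales_with_targets_without_normalization():
    X = np.linspace(0, 10, 10).reshape(-1, 1)
    y = np.sin(X).ravel()
    X_test = np.linspace(0, 10, 7).reshape(-1, 1)

    kernel = RBF(length_scale=1.0)
    gpr = GaussianProcessRegressor(kernel=kernel, optimizer=None, normalize_y=False)
    gpr.fit(X, y)
    gpr_scaled = GaussianProcessRegressor(kernel=kernel, optimizer=None, normalize_y=False)
    gpr_scaled.fit(X, 2 * y)

    _, std = gpr.predict(X_test, return_std=True)
    _, std_scaled = gpr_scaled.predict(X_test, return_std=True)

    # With the fix, doubling the targets doubles the predictive standard
    # deviation, even though normalize_y=False.
    assert_allclose(std_scaled, 2 * std)
```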
OK, perfect, thanks for the list and the feedback. I think I will have time to work on it tomorrow and open a PR as soon as possible!
Hi all, sorry for the delay. In the past few days, I worked on this, and after making the modifications we previously agreed on, the original bug is now solved. All 445 tests in the test suite pass, except for one:
```python
def test_y_multioutput():
    # Test that GPR can deal with multi-dimensional target values
    y_2d = np.vstack((y, y * 2)).T
    ...
    gpr = GaussianProcessRegressor(kernel=kernel, optimizer=None, normalize_y=False)
    gpr.fit(X, y)

    gpr_2d = GaussianProcessRegressor(kernel=kernel, optimizer=None, normalize_y=False)
    gpr_2d.fit(X, y_2d)

    y_pred_1d, y_std_1d = gpr.predict(X2, return_std=True)
    y_pred_2d, y_std_2d = gpr_2d.predict(X2, return_std=True)
    _, y_cov_1d = gpr.predict(X2, return_cov=True)
    _, y_cov_2d = gpr_2d.predict(X2, return_cov=True)

    assert_almost_equal(y_pred_1d, y_pred_2d[:, 0])
    assert_almost_equal(y_pred_1d, y_pred_2d[:, 1] / 2)

    # Standard deviation and covariance do not depend on output
    for target in range(y_2d.shape[1]):
        assert_almost_equal(y_std_1d, y_std_2d[..., target])
```
Correct me if I'm wrong, but this looks like the code initially posted to reproduce the bug, right? The standard deviation of the second output should scale according to the output when normalize_y=False. So I believe it's actually good that we’re getting a failure in this test.
Changing it to the following corrects the test:
```python
for target in range(y_2d.shape[1]):
    if target == 0:
        assert_almost_equal(y_std_1d, y_std_2d[:, target])
    elif target == 1:
        assert_almost_equal(y_std_1d, y_std_2d[:, target] / 2)
```
Assuming I've got this right, could you please confirm? Meanwhile, I'll continue reviewing the test suite for other potential issues, even though the other tests pass, and I'll also address points 2 and 3 from the TODO list by @ogrisel.
Sorry again for the delay!
Yes, I think you're fully right; we're getting a failure because now, with your fix, the standard deviation and covariance depend on the output. I think you get the same failure with the covariance, don't you?
You could also robustify this test function by testing not only the case where normalize_y=False but also normalize_y=True.
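For instance, something along these lines (only a sketch with illustrative data and a pytest parametrization; the normalize_y=False assertions only pass once the fix discussed above is applied):

```python
# Sketch of a parametrized version of the test; data, kernel and assertions
# are illustrative, not the actual values from scikit-learn's test suite.
import numpy as np
import pytest
from numpy.testing import assert_almost_equal
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.linspace(0, 10, 10).reshape(-1, 1)
X2 = np.linspace(0, 10, 7).reshape(-1, 1)
y = np.sin(X).ravel()
kernel = RBF(length_scale=1.0)


@pytest.mark.parametrize("normalize_y", [False, True])
def test_y_multioutput(normalize_y):
    y_2d = np.vstack((y, y * 2)).T

    gpr = GaussianProcessRegressor(kernel=kernel, optimizer=None, normalize_y=normalize_y)
    gpr.fit(X, y)
    gpr_2d = GaussianProcessRegressor(kernel=kernel, optimizer=None, normalize_y=normalize_y)
    gpr_2d.fit(X, y_2d)

    _, y_std_1d = gpr.predict(X2, return_std=True)
    _, y_std_2d = gpr_2d.predict(X2, return_std=True)

    # The second output is twice the first, so its std should be twice as large.
    assert_almost_equal(y_std_1d, y_std_2d[:, 0])
    assert_almost_equal(y_std_1d, y_std_2d[:, 1] / 2)
```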
Hi everyone. Thanks for the suggestion. The covariance was calculated but not used for any assertion, so I added it. I also added a test for normalize_y=True to enhance robustness as suggested. Additionally, I incorporated a test inspired by the code snippet provided in the issue, even though I believe it could be a bit redundant with the new version of test_y_multioutput. All the tests in the test suite passed, and I verified that test_y_multioutput is the only test that checks standard deviation with n_features > 1 and normalize_y=False. I also ran all the examples using GaussianProcessRegressor and compared the plots before and after the fix, without noticing any significant differences. I will provide all the details in the PR I plan to submit tomorrow. Thanks again!