
[ENH] Implement anomaly detection _fit_predict override output checks

Open · notaryanramani opened this pull request 6 months ago · 4 comments

Reference Issues/PRs

Fixes #2801

What does this implement/fix? Explain your changes.

Checks whether _fit_predict is overridden by an inheriting class and, if it is, that it produces the same results as fit().predict().
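
A minimal sketch of the kind of check added here (the helper name, cloning via deepcopy, and the base-class import path are illustrative, not the exact aeon test code):

from copy import deepcopy

import numpy as np

# Import path may differ between aeon versions.
from aeon.anomaly_detection.base import BaseAnomalyDetector


def _check_fit_predict_consistency(estimator, X, y=None):
    # Only compare when the inheriting class actually overrides _fit_predict.
    if type(estimator)._fit_predict is BaseAnomalyDetector._fit_predict:
        return
    est1, est2 = deepcopy(estimator), deepcopy(estimator)
    est1.fit(X, y)
    y_pred = est1.predict(X)
    y_pred2 = est2.fit_predict(X, y)
    assert np.allclose(y_pred, y_pred2)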

Does your contribution introduce a new dependency? If yes, which one?

No

Any other comments?

PR checklist

For all contributions
  • [ ] I've added myself to the list of contributors. Alternatively, you can use the @all-contributors bot to do this for you after the PR has been merged.
  • [x] The PR title starts with either [ENH], [MNT], [DOC], [BUG], [REF], [DEP] or [GOV] indicating whether the PR topic is related to enhancement, maintenance, documentation, bugs, refactoring, deprecation or governance.
For new estimators and functions
  • [ ] I've added the estimator/function to the online API documentation.
  • [ ] (OPTIONAL) I've added myself as a __maintainer__ at the top of relevant files and want to be contacted regarding its maintenance. Unmaintained files may be removed. This is for the full file, and you should not add yourself if you are just making minor changes or do not want to help maintain its contents.
For developers with write access
  • [ ] (OPTIONAL) I've updated aeon's CODEOWNERS to receive notifications about future changes to these files.

notaryanramani · May 18 '25 18:05

Thank you for contributing to aeon

I have added the following labels to this PR based on the title: [ $\color{#FEF1BE}{\textsf{enhancement}}$ ]. I have added the following labels to this PR based on the changes made: [ $\color{#2C2F20}{\textsf{testing}}$ ]. Feel free to change these if they do not properly represent the PR.

The Checks tab will show the status of our automated tests. You can click on individual test runs in the tab or "Details" in the panel below to see more information if there is a failure.

If our pre-commit code quality check fails, any trivial fixes will automatically be pushed to your PR unless it is a draft.

Don't hesitate to ask questions on the aeon Slack channel if you have any.

PR CI actions

These checkboxes will add labels to enable/disable CI functionality for this PR. This may not take effect immediately, and a new commit may be required to run the new configuration.

  • [ ] Run pre-commit checks for all files
  • [ ] Run mypy typecheck tests
  • [ ] Run all pytest tests and configurations
  • [ ] Run all notebook example tests
  • [ ] Run numba-disabled codecov tests
  • [ ] Stop automatic pre-commit fixes (always disabled for drafts)
  • [ ] Disable numba cache loading
  • [ ] Push an empty commit to re-run CI checks

aeon-actions-bot[bot] · May 18 '25 18:05

@MatthewMiddlehurst

LOF does not seem to produce the same output for fit_predict() as for fit().predict(). I have experimented locally as well; the predicted anomaly scores are not the same.

# Imports for the snippet (module paths assumed; they may differ between aeon versions):
import numpy as np

from aeon.anomaly_detection import LOF
from aeon.testing.testing_data import FULL_TEST_DATA_DICT
from aeon.testing.utils.estimator_checks import _clone_estimator

est = LOF(leaf_size=10, n_neighbors=5, stride=2)
est1 = _clone_estimator(est, random_state=42)
est2 = _clone_estimator(est, random_state=42)

datatype = 'UnivariateSeries-None'
X = FULL_TEST_DATA_DICT[datatype]['train'][0]
y = FULL_TEST_DATA_DICT[datatype]['train'][1]

est1.fit(X, y)
y_pred = est1.predict(X)
y_pred

>>> array([0.99658101, 0.99658101, 0.98995043, 0.98995043, 0.99216063,
       0.99216063, 0.98995043, 0.98995043, 0.99127655, 0.99127655,
       0.98862432, 0.98862432, 0.98995043, 0.98995043, 0.98774024,
       0.98774024, 0.98995043, 0.98995043, 0.98331986, 0.98331986])

y_pred2 = est2.fit_predict(X, y)
y_pred2

>>> array([1.03501792, 1.03501792, 1.02085908, 1.02085908, 1.01276639,
       1.01276639, 1.00540476, 1.00540476, 1.00364001, 1.00364001,
       0.99330039, 0.99330039, 0.98995043, 0.98995043, 0.98774024,
       0.98774024, 0.98995043, 0.98995043, 0.98331986, 0.98331986])

np.allclose(y_pred, y_pred2)

>>> False

notaryanramani · May 28 '25 12:05

Thanks. I do not see anything wrong with the test, so I assume this is a legitimate failure. It seems to affect both the PyODAdapter (using LOF?) and LOF.

You can either skip the test for now and create an issue, or figure out why this is happening and fix it.

May be of interest @SebastianSchmidl.
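
A rough sketch of the first option (the exclusion list and helper are hypothetical, not aeon's existing test machinery):

import pytest

# Hypothetical list of detectors whose fit_predict is known to differ from
# fit().predict(); each entry should be backed by a tracking issue.
_SKIP_FIT_PREDICT_CONSISTENCY = {"LOF", "PyODAdapter"}


def _maybe_skip_fit_predict_check(estimator):
    # Skip the consistency comparison for known-divergent estimators.
    if type(estimator).__name__ in _SKIP_FIT_PREDICT_CONSISTENCY:
        pytest.skip("fit_predict intentionally differs from fit().predict()")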

MatthewMiddlehurst · May 31 '25 18:05

The non-equal output for LOF et al. is expected because we assume that fit_predict(X) corresponds to an unsupervised usage scenario and fit(X_train).predict(X_test) to a semi-supervised usage scenario. LOF uses two different ways of computing the anomaly factor for in-training and out-of-training data (novelty prediction). See this discussion in the PR: https://github.com/aeon-toolkit/aeon/pull/2209#discussion_r1812167504

If fit(X_train).predict(X_train) should produce the same output as fit_predict(X), we would need to check whether the data provided to predict is actually the training data, and thus also store the training data (again).
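
For illustration, a minimal standalone sketch of this difference using scikit-learn's LocalOutlierFactor directly (illustrative only, not aeon's implementation):

import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(42)
X = rng.normal(size=(50, 2))

# Unsupervised scenario: score the training data itself
# (this is what fit_predict(X) corresponds to).
lof_unsup = LocalOutlierFactor(n_neighbors=5)
lof_unsup.fit(X)
in_sample = -lof_unsup.negative_outlier_factor_

# Semi-supervised / novelty scenario: the model scores data as if it were unseen
# (this is what fit(X_train).predict(X_test) corresponds to), even when the
# "test" data is identical to the training data.
lof_novelty = LocalOutlierFactor(n_neighbors=5, novelty=True)
lof_novelty.fit(X)
out_of_sample = -lof_novelty.score_samples(X)

print(np.allclose(in_sample, out_of_sample))  # expected: False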

SebastianSchmidl · Jun 03 '25 16:06