
[ENH] Add DL anomaly detection algorithm: LSTM-AD

Open · itsdivya1309 opened this pull request 1 year ago • 1 comment

Reference Issues/PRs

Issue: #1637

What does this implement/fix? Explain your changes.

Implemented the LSTM-AD algorithm for time series anomaly detection.
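For context, the core idea (in a simplified one-step form) is: train an LSTM to predict the next value, fit a Gaussian to its prediction errors, and score each point by how unlikely its error is. The snippet below is only a rough sketch of that idea with placeholder window and layer sizes, not the exact code in this PR:

```python
# Rough sketch of the one-step LSTM-AD idea; names and sizes are placeholders.
import numpy as np
import tensorflow as tf
from scipy.stats import multivariate_normal


def make_windows(series, window_size):
    """Slice a 1-D numpy series into (window, next value) training pairs."""
    X = np.stack([series[i:i + window_size]
                  for i in range(len(series) - window_size)])
    y = series[window_size:]
    return X[..., np.newaxis], y  # shapes: (n, window, 1) and (n,)


def fit_lstm_ad(train_series, window_size=30, epochs=10):
    """Train a one-step-ahead LSTM and fit a Gaussian on its residuals."""
    X, y = make_windows(train_series, window_size)
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(window_size, 1)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    model.fit(X, y, epochs=epochs, verbose=0)
    errors = y - model.predict(X, verbose=0).ravel()
    dist = multivariate_normal(mean=errors.mean(), cov=errors.var() + 1e-6)
    return model, dist


def anomaly_scores(model, dist, series, window_size=30):
    """Score points from index window_size onward; higher = more anomalous."""
    X, y = make_windows(series, window_size)
    errors = y - model.predict(X, verbose=0).ravel()
    return -dist.logpdf(errors)  # negative log-likelihood of the residual
```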

Does your contribution introduce a new dependency? If yes, which one?

No; it uses the existing tensorflow and scipy dependencies.

itsdivya1309 avatar Oct 01 '24 11:10 itsdivya1309

Thank you for contributing to aeon

I have added the following labels to this PR based on the title: [ $\color{#FEF1BE}{\textsf{enhancement}}$ ]. I have added the following labels to this PR based on the changes made: [ $\color{#6F6E8D}{\textsf{anomaly detection}}$ ]. Feel free to change these if they do not properly represent the PR.

The Checks tab will show the status of our automated tests. You can click on individual test runs in the tab or "Details" in the panel below to see more information if there is a failure.

If our pre-commit code quality check fails, any trivial fixes will automatically be pushed to your PR unless it is a draft.

Don't hesitate to ask questions on the aeon Slack channel if you have any.

PR CI actions

These checkboxes will add labels to enable/disable CI functionality for this PR. This may not take effect immediately, and a new commit may be required to run the new configuration.

  • [ ] Run pre-commit checks for all files
  • [ ] Run mypy typecheck tests
  • [ ] Run all pytest tests and configurations
  • [ ] Run all notebook example tests
  • [ ] Run numba-disabled codecov tests
  • [ ] Stop automatic pre-commit fixes (always disabled for drafts)
  • [ ] Disable numba cache loading
  • [ ] Push an empty commit to re-run CI checks

aeon-actions-bot[bot] avatar Oct 01 '24 11:10 aeon-actions-bot[bot]

I agree with @hadifawaz1999 and I do not see any reason to rush this in. So, if @itsdivya1309 agrees, we can first draft the base deep learning interfaces for anomaly detection before we implement the individual algorithms.

The anomaly_detection module is experimental, so there is no issue with deprecations right now.

SebastianSchmidl avatar Oct 14 '24 17:10 SebastianSchmidl

I am in for the deep learning sub-module and would like to take up the task.

itsdivya1309 avatar Oct 15 '24 04:10 itsdivya1309

@itsdivya1309 Sure, happy to have somebody taking care of this! Do you want to create a draft first, and then meet with @hadifawaz1999 to discuss this draft in a dev meeting?

SebastianSchmidl avatar Oct 15 '24 06:10 SebastianSchmidl

Yeah, sure, I am making the changes suggested by @hadifawaz1999 in this branch itself.

itsdivya1309 avatar Oct 15 '24 07:10 itsdivya1309

Hi @CodeLionX @hadifawaz1999, I’ve noticed a couple of tests are failing, and I’m unable to fix them. Could you please provide insights on these test failures and how to resolve them?

itsdivya1309 avatar Nov 07 '24 16:11 itsdivya1309

So basically I think the best thing to do is to keep in mind that if a deep anomaly detection model is forecasting based, like the one this PR implements (if I understand correctly), then at some point it would be better to implement it as deep learning in the forecasting module (once the new forecasting module is ready) and wrap it in anomaly_detection/deep_learning as an anomaly detector with the whole config needed.
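To make the wrapping idea concrete, it would look roughly like the sketch below, where the forecaster lives in the forecasting module and the detector only turns its prediction errors into scores. All names here are placeholders, not the actual base classes or final API:

```python
# Placeholder sketch only: real base classes and method names in aeon will differ.
import numpy as np


class ForecastingBasedAnomalyDetector:
    """Wrap a one-step-ahead forecaster and score points by prediction error."""

    def __init__(self, forecaster, window_size=30):
        self.forecaster = forecaster    # deep forecaster from the forecasting module
        self.window_size = window_size

    def fit(self, series):
        # All training is delegated to the wrapped forecaster.
        self.forecaster.fit(series)
        return self

    def predict_scores(self, series):
        w = self.window_size
        scores = np.zeros(len(series))
        for t in range(w, len(series)):
            pred = self.forecaster.predict(series[t - w:t])  # forecast next value
            scores[t] = abs(series[t] - pred)                # error as anomaly score
        return scores
```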

A lot of deep anomaly detection models are self-supervised, mostly representation learning, so given that we are planning to start a self-supervised module (soon), these models will be implemented in the self-supervised module and then wrapped into the anomaly detection deep learning submodule with the whole config needed.

Will take a look soon at the PR to see the network and all and get back to you.

hadifawaz1999 avatar Nov 08 '24 13:11 hadifawaz1999

Thanks for the feedback! That sounds like a solid approach. I’ll also keep the self-supervised module in mind for models focused on representation learning.

Looking forward to your review and any specific feedback on the network design. Thanks again!

itsdivya1309 avatar Nov 08 '24 13:11 itsdivya1309