[ENH] Add DL anomaly detection algorithm: LSTM-AD
Reference Issues/PRs
Issue: #1637
What does this implement/fix? Explain your changes.
Implemented LSTM-AD algorithm for time series anomaly detection.
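For context, LSTM-AD (Malhotra et al., 2015) is forecasting-based: a stacked LSTM predicts upcoming values, a Gaussian is fit to the prediction errors on anomaly-free data, and the error likelihood of new points gives the anomaly score. A minimal sketch of that scoring step, using a naive persistence forecast as a stand-in for the LSTM (all names here are illustrative, not this PR's API):

```python
import numpy as np

def lstm_ad_scores(errors_train, errors_test):
    """Fit a Gaussian to forecast errors on anomaly-free data, then score new
    errors by squared Mahalanobis distance (monotone in the negative Gaussian
    log-likelihood used by LSTM-AD)."""
    mu = errors_train.mean(axis=0)
    cov = np.cov(errors_train, rowvar=False) + 1e-6 * np.eye(errors_train.shape[1])
    cov_inv = np.linalg.inv(cov)
    diff = errors_test - mu
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

# Stand-in for the LSTM forecaster: a naive persistence forecast.
rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 20, 500)) + 0.05 * rng.normal(size=500)
series[400:410] += 1.5                   # injected anomaly
pred = series[:-1]                       # persistence "forecast" of series[1:]
errors = (series[1:] - pred).reshape(-1, 1)
scores = lstm_ad_scores(errors[:300], errors[300:])
print(scores[95:110].round(2))           # scores spike around the anomaly
```

The real algorithm replaces the persistence forecast with a stacked LSTM trained on normal data; the Gaussian error model is unchanged.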
Does your contribution introduce a new dependency? If yes, which one?
No. It uses the existing tensorflow and scipy dependencies.
Thank you for contributing to aeon
I have added the following labels to this PR based on the title: [ enhancement ]. I have added the following labels to this PR based on the changes made: [ anomaly detection ]. Feel free to change these if they do not properly represent the PR.
The Checks tab will show the status of our automated tests. You can click on individual test runs in the tab or "Details" in the panel below to see more information if there is a failure.
If our pre-commit code quality check fails, any trivial fixes will automatically be pushed to your PR unless it is a draft.
Don't hesitate to ask questions on the aeon Slack channel if you have any.
PR CI actions
These checkboxes will add labels to enable/disable CI functionality for this PR. This may not take effect immediately, and a new commit may be required to run the new configuration.
- [ ] Run `pre-commit` checks for all files
- [ ] Run `mypy` typecheck tests
- [ ] Run all `pytest` tests and configurations
- [ ] Run all notebook example tests
- [ ] Run numba-disabled `codecov` tests
- [ ] Stop automatic `pre-commit` fixes (always disabled for drafts)
- [ ] Disable numba cache loading
- [ ] Push an empty commit to re-run CI checks
I agree with @hadifawaz1999 and I do not see any reason to rush this in. So, if @itsdivya1309 agrees, we can first draft the base deep learning interfaces for anomaly detection before we implement the individual algorithms.
The `anomaly_detection` module is experimental, so there is no issue with deprecations right now.
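One possible shape for such a base interface (purely illustrative; the class and method names below are my assumptions, not an existing or proposed aeon API) is a base class that owns windowing and score alignment, leaving only the network-specific scoring to subclasses:

```python
from abc import ABC, abstractmethod
import numpy as np

class BaseDeepAnomalyDetector(ABC):
    """Hypothetical sketch of a shared base for deep-learning anomaly
    detectors; names and signatures are illustrative only."""

    def __init__(self, window_size: int = 10):
        self.window_size = window_size

    @abstractmethod
    def _score_windows(self, windows: np.ndarray) -> np.ndarray:
        """Return one anomaly score per window; subclasses plug in the network."""

    def _to_windows(self, X: np.ndarray) -> np.ndarray:
        # Sliding windows over a univariate series.
        n = len(X) - self.window_size + 1
        return np.stack([X[i : i + self.window_size] for i in range(n)])

    def predict(self, X: np.ndarray) -> np.ndarray:
        scores = self._score_windows(self._to_windows(X))
        # Assign each window's score to its last point; pad the warm-up region.
        return np.concatenate([np.zeros(self.window_size - 1), scores])

class RangeDetector(BaseDeepAnomalyDetector):
    """Toy subclass standing in for a real network-backed detector."""

    def _score_windows(self, windows):
        return windows.max(axis=1) - windows.min(axis=1)

detector = RangeDetector(window_size=4)
scores = detector.predict(np.array([1.0, 1.0, 1.0, 1.0, 9.0, 1.0, 1.0, 1.0]))
```

Keeping windowing and score alignment in the base class would let LSTM-AD and later detectors differ only in `_score_windows`.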
I am in for the deep learning sub-module and would like to take up the task.
@itsdivya1309 Sure, happy to have somebody taking care of this! Do you want to create a draft first, and then meet with @hadifawaz1999 to discuss this draft in a dev meeting?
Yeah, sure, I am making the changes suggested by @hadifawaz1999 in this branch itself.
Hi @CodeLionX @hadifawaz1999, I’ve noticed a couple of tests are failing, and I’m unable to fix them. Could you please provide insights on these test failures and how to resolve them?
So basically, the thing to keep in mind is: if a deep anomaly detection model is forecasting-based, like the one this PR implements (if I understand correctly), then at some point it would be better to implement it as a deep learning forecaster in the forecasting module (once the new forecasting module is ready) and wrap it in `anomaly_detection/deep_learning` as an anomaly detector with the whole configuration needed.
Many deep anomaly detection models are self-supervised, mostly representation learning, so given that we are planning to start a self-supervised module soon, those models would be implemented in the self-supervised module and then wrapped into the anomaly detection deep learning submodule with the whole configuration needed.
I will take a look at the PR soon to see the network and everything, and get back to you.
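The wrapping idea described above can be sketched in a few lines: any one-step forecaster becomes an anomaly detector by scoring each point with its forecast error. The class and parameter names below are illustrative, not aeon's API; an LSTM forecaster would replace the window-mean stand-in:

```python
import numpy as np

class ForecastingAnomalyDetector:
    """Illustrative wrapper (hypothetical, not aeon API): turn any one-step
    forecaster into an anomaly detector via absolute forecast error."""

    def __init__(self, forecaster, window_size: int = 5):
        self.forecaster = forecaster  # callable: window -> next-value prediction
        self.window_size = window_size

    def predict(self, X: np.ndarray) -> np.ndarray:
        w = self.window_size
        scores = np.zeros_like(X, dtype=float)
        for t in range(w, len(X)):
            scores[t] = abs(X[t] - self.forecaster(X[t - w : t]))
        return scores

# Stand-in forecaster: predict the window mean (an LSTM would slot in here).
detector = ForecastingAnomalyDetector(forecaster=np.mean, window_size=5)
x = np.ones(50)
x[30] = 10.0  # point anomaly
scores = detector.predict(x)
```

With this split, the forecaster lives in the forecasting module and the wrapper carries only the anomaly-scoring configuration.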
Thanks for the feedback! That sounds like a solid approach. I’ll also keep the self-supervised module in mind for models focused on representation learning.
Looking forward to your review and any specific feedback on the network design. Thanks again!