TranAD
[VLDB'22] Anomaly Detection using Transformers, self-conditioning and adversarial training.
Hi, thanks for the interesting paper and the well-organized code. Could you please let me know if it is possible to run the code for the case of more than 1...
Hi @shreshthtuli, great work, and thanks for making the benchmarks publicly available. I have a couple of doubts -- 1. On [line number 32 of main.py](https://github.com/imperial-qore/TranAD/blob/main/main.py#L32), it appears to me...
Hello, thanks for sharing the code. I have a question about the focus score. According to the paper, the shape of the focus score should be the same as W....
Hello, you mention in multiple GitHub issues (#10, #18, #7, #11) that you intend to release another branch which includes the final models, loss functions, masks, and tuned hyperparameters....
https://github.com/imperial-qore/TranAD/blob/c7d8befefeb609ba823f5522b53a8262d00c2948/preprocess.py#L147 Actually, the additional normalization of the SMAP dataset is unnecessary, as the data are already normalized (as mentioned here: https://github.com/khundman/telemanom#raw-experiment-data). Moreover, the results are better when your normalization is turned...
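The point above can be illustrated with a minimal sketch. The `min_max_normalize` helper below is hypothetical and not the repository's actual preprocessing code; it only shows that when each channel already spans [0, 1] exactly, re-applying min-max scaling is (numerically) a no-op, so the extra pass adds nothing in that case:

```python
import numpy as np

def min_max_normalize(a, eps=1e-8):
    """Scale each column of `a` into [0, 1] (hypothetical helper)."""
    a_min = a.min(axis=0)
    a_max = a.max(axis=0)
    return (a - a_min) / (a_max - a_min + eps)

# Toy data already in [0, 1], with each channel spanning the full range.
x = np.array([[0.0, 0.5],
              [0.5, 1.0],
              [1.0, 0.0]])

# Re-normalizing changes nothing (up to the eps guard).
y = min_max_normalize(x)
```

Note that if a channel does *not* span the full [0, 1] range (e.g. a test split normalized with training statistics), a second pass would stretch it and distort the data, which may explain the differing results reported here.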
Thanks a lot for sharing the great paper and the code. While I read "The above operations generate attention weights using the input time-series windows and the complete sequence...
It seems the results are very sensitive to the random seed. If you set [this line](https://github.com/imperial-qore/TranAD/blob/34c6d532ff3215189a7840f08fb74027b6a108ee/src/models.py#L12) to `torch.manual_seed(11)` and run the TranAD model on the UCR dataset, you will actually get...
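A common way to quantify the seed sensitivity raised above is to evaluate over several seeds and report a mean and standard deviation rather than a single run. A minimal sketch follows; the `evaluate` function is a hypothetical stand-in (in the real setting it would call `torch.manual_seed(seed)` and run the full TranAD train/test pipeline, returning e.g. an F1 score):

```python
import random
import statistics

def evaluate(seed):
    """Hypothetical stand-in for training and scoring a model under `seed`.
    Returns a deterministic toy score in [0.9, 0.95) for illustration."""
    rng = random.Random(seed)  # per-instance RNG: reproducible per seed
    return 0.9 + 0.05 * rng.random()

seeds = [1, 11, 21, 31, 41]
scores = [evaluate(s) for s in seeds]
mean = statistics.mean(scores)
std = statistics.stdev(scores)
print(f"score = {mean:.3f} +/- {std:.3f} over {len(seeds)} seeds")
```

Reporting results this way makes seed-driven variance visible instead of hiding it behind one lucky (or unlucky) seed.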
Thanks for releasing the code! However, when I reproduce the results, such as the TranAD method on the SMAP dataset, the result is {'FN': 0, 'FP': 182, 'Hit@100%': 1.0, 'Hit@150%': 1.0, 'NDCG@100%':...
First of all, thank you for sharing your work. I have some questions regarding your implementations of the baseline models, e.g., OmniAnomaly, DAGMM, and GDN. It seems that your implementations do...
Hello, could you point out where the meta-learning part of your paper is reflected in the code?