prometheus-anomaly-detector
The value predicted by Prophet did not change much
Here are my variable settings:
FLT_PROM_URL = http://xx:xx:xx:xx:30090
FLT_PROM_ACCESS_TOKEN = xxxxx
FLT_METRICS_LIST = node_load1{instance='192.29.3.11:9100'}
FLT_RETRAINING_INTERVAL_MINUTES = 15
FLT_ROLLING_TRAINING_WINDOW_SIZE = 20d
FLT_PARALLELISM = 2
I have deployed the anomaly detection service on k8s, and my Prometheus can discover the anomaly detection target.
However, the yhat output by the anomaly detection service stays essentially flat at around 0.6 and does not track the original data (see the sanity-check sketch after the figure legend).
The figure below shows the predicted metrics versus the original metrics in Grafana:
Green bar: node_load1{instance='192.29.3.11:9100'} (original metric)
Yellow bar: yhat
Orange bar: yhat_upper
Sky-blue bar: yhat_lower
Red bar: anomaly
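One way to narrow this down is to fit Prophet on the same data outside the service. Below is a minimal sanity-check sketch; the `node_load1.csv` export and its `timestamp`/`value` columns are assumptions (the metric could be exported via the Prometheus HTTP API or a Grafana CSV download). If yhat comes out flat here as well, the flatness originates in Prophet and the training data rather than in the anomaly detection service.

```python
import pandas as pd
from prophet import Prophet

# Hypothetical CSV export of the raw metric with "timestamp" (unix seconds)
# and "value" columns.
raw = pd.read_csv("node_load1.csv")

# Prophet expects a dataframe with columns "ds" (datetime) and "y" (float).
df = pd.DataFrame({
    "ds": pd.to_datetime(raw["timestamp"], unit="s"),
    "y": raw["value"].astype(float),
})

m = Prophet()  # default settings, as a baseline
m.fit(df)

# Forecast one retraining interval (15 minutes) ahead at 1-minute resolution.
future = m.make_future_dataframe(periods=15, freq="min")
forecast = m.predict(future)

# Compare the forecast band against the raw data.
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail(15))
```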
How should I adjust the Prophet parameters, or is there another way to make Prophet predict the value accurately? Please help.
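If the offline fit above is also flat, the usual Prophet knobs for a more responsive forecast are the trend-flexibility parameters. Here is a sketch with illustrative values (assumptions, not recommendations; the right settings depend on the data):

```python
from prophet import Prophet

# Illustrative values only; tune against a held-out slice of the metric.
m = Prophet(
    changepoint_prior_scale=0.5,  # default 0.05; larger lets the trend bend more
    changepoint_range=0.95,       # default 0.8; allow changepoints near the end of history
    daily_seasonality=True,       # node_load1 often shows a daily cycle
    weekly_seasonality=True,
    interval_width=0.8,           # width of the yhat_lower/yhat_upper band
)
```

Shortening FLT_ROLLING_TRAINING_WINDOW_SIZE from 20d may also help, since a long rolling window can average away short-term variation in node_load1.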
Please, can we have a call to discuss the setup of the environment?
Are you saying that I have the above problem because of a wrong environment configuration?
Not really, I just want to ask about something in the environment configuration, if you have a little bit of time.
You can ask here and I will try to help if I can.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@sesheta: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.