kr521
It might be because my Prometheus is using an NFS filesystem.
I switched the storage to a local disk, but Coroot still reports a 422 error when checking Prometheus, so there is still an issue.
Prometheus logs this error:
2024/06/20 08:32:00 Unsolicited response received on idle HTTP channel starting with "\x00\x00\x06\x04\x00\x00\x00\x00\x00\x00\x05\x00\x00@\x00"; err=
2024/06/20 08:35:55 Unsolicited response received on idle HTTP channel starting with "\x00\x00\x06\x04\x00\x00\x00\x00\x00\x00\x05\x00\x00@\x00"; err=...
The issue might be in the `getstatus` function in db.go. The logic there should return the status of the most recent check, but because of the `last_error != ''` filter it returns the most recent error instead...
Perhaps changing it to the following statement would fix it: `SELECT last_error FROM prometheus_query_state WHERE project_id = $1 ORDER BY last_ts DESC LIMIT 1;`
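To make that concrete, here is a minimal sketch of the idea in Go. This is not Coroot's actual code; the function name `getPrometheusStatus` and the `ErrNoData` sentinel are just placeholders:

```go
package db

import (
	"database/sql"
	"errors"
)

// ErrNoData is a placeholder sentinel for "no check recorded yet".
var ErrNoData = errors.New("no prometheus query state recorded yet")

// getPrometheusStatus returns the last_error of the most recent check for a
// project; an empty string means the latest check succeeded. Without the
// `last_error != ''` filter, an old failure can no longer shadow a newer
// successful check.
func getPrometheusStatus(db *sql.DB, projectID string) (string, error) {
	var lastError string
	err := db.QueryRow(
		`SELECT last_error FROM prometheus_query_state WHERE project_id = $1 ORDER BY last_ts DESC LIMIT 1`,
		projectID,
	).Scan(&lastError)
	if errors.Is(err, sql.ErrNoRows) {
		return "", ErrNoData
	}
	if err != nil {
		return "", err
	}
	return lastError, nil
}
```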
An example row from prometheus_query_state: `4o6ygm14|rate(container_dns_requests_total[$RANGE])|1719558780|422 Unprocessable Entity`
I suspect the issue might still be related to the large amount of data in Prometheus. I added some logging to the code but didn't find any errors with the...
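For reference, this is roughly the kind of logging I added around the query path; the helper type and function names here are placeholders rather than Coroot's real API:

```go
package promdebug

import (
	"context"
	"log"
	"time"
)

// rangeQueryFunc stands in for whatever function actually issues the range
// query against Prometheus; the real signature in Coroot differs.
type rangeQueryFunc func(ctx context.Context, query string, from, to time.Time, step time.Duration) (samples int, err error)

// loggedQuery wraps a query call with timing and error logging so that 422
// responses (and which PromQL query triggered them) show up in the logs.
func loggedQuery(ctx context.Context, run rangeQueryFunc, query string, from, to time.Time, step time.Duration) error {
	start := time.Now()
	samples, err := run(ctx, query, from, to, step)
	if err != nil {
		log.Printf("prometheus query failed: query=%q duration=%s err=%v", query, time.Since(start), err)
		return err
	}
	log.Printf("prometheus query ok: query=%q duration=%s samples=%d", query, time.Since(start), samples)
	return nil
}
```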
> @kr521, which version of coroot-node-agent and coroot are you using?

coroot-node-agent: 1.20.2, coroot: 1.2.1
I thought it was due to the large amount of data from a certain Prometheus metric. I disabled some metrics, but the problem persists.
I thought the issue was caused by all the machine-IDs generated by KVM being the same, so I regenerated the machine-IDs for all machines, but the problem still exists.