Results 28 comments of kr521

It might be because my Prometheus stores its data on an NFS file system.

I changed the storage method to a local disk, but Coroot still reports a 422 error when detecting Prometheus, indicating that there is still an issue. ![image](https://github.com/coroot/coroot/assets/3262751/dfba5070-ef7f-4dac-b730-ded0ce1d9d0f) ![image](https://github.com/coroot/coroot/assets/3262751/93eef1a8-3863-4002-bb84-270c1d31abcf)

Prometheus logs this error:
`2024/06/20 08:32:00 Unsolicited response received on idle HTTP channel starting with "\x00\x00\x06\x04\x00\x00\x00\x00\x00\x00\x05\x00\x00@\x00"; err=`
`2024/06/20 08:35:55 Unsolicited response received on idle HTTP channel starting with "\x00\x00\x06\x04\x00\x00\x00\x00\x00\x00\x05\x00\x00@\x00"; err=`
...

The issue might be in the `getstatus` function in `db.go`. The logic there should retrieve the status of the most recent check, but with the `last_error != ''` condition added, it retrieves the most recent error instead...

Perhaps changing it to the following statement would fix the issue: `SELECT last_error FROM prometheus_query_state WHERE project_id = $1 ORDER BY last_ts DESC LIMIT 1;`
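For illustration only, here is a minimal sketch of how that statement could be wired up with `database/sql`. The table and column names come from the comment above; the function name, driver choice, and connection string are assumptions, not Coroot's actual code:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/lib/pq" // assumed Postgres-backed config DB; driver choice is illustrative
)

// lastQueryError is a hypothetical stand-in for the getstatus logic discussed above:
// it returns last_error from the most recent row only, so an old error cannot
// shadow a later successful check.
func lastQueryError(db *sql.DB, projectID string) (string, error) {
	var lastError string
	err := db.QueryRow(
		`SELECT last_error FROM prometheus_query_state
		 WHERE project_id = $1
		 ORDER BY last_ts DESC
		 LIMIT 1`, projectID).Scan(&lastError)
	if err == sql.ErrNoRows {
		return "", nil // no query state recorded yet
	}
	return lastError, err
}

func main() {
	// Placeholder connection string.
	db, err := sql.Open("postgres", "postgres://localhost/coroot?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	e, err := lastQueryError(db, "4o6ygm14")
	fmt.Printf("last_error=%q err=%v\n", e, err)
}
```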

`4o6ygm14|rate(container_dns_requests_total[$RANGE])|1719558780|422 Unprocessable Entity`
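To check whether the 422 comes from Prometheus itself rather than from Coroot, one could run the same query directly against Prometheus's `/api/v1/query_range` endpoint. A rough sketch, assuming a Prometheus URL, time window, and step (and substituting a concrete range for Coroot's `$RANGE` variable):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
	"time"
)

func main() {
	// Assumed Prometheus address; replace with the endpoint configured in Coroot.
	base := "http://prometheus:9090/api/v1/query_range"

	end := time.Now()
	start := end.Add(-30 * time.Minute)

	params := url.Values{}
	// $RANGE is Coroot's template variable; use a concrete window here.
	params.Set("query", `rate(container_dns_requests_total[5m])`)
	params.Set("start", fmt.Sprintf("%d", start.Unix()))
	params.Set("end", fmt.Sprintf("%d", end.Unix()))
	params.Set("step", "15")

	resp, err := http.Get(base + "?" + params.Encode())
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A 422 here would mean Prometheus rejects the query; a 200 would point back at Coroot.
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```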

I suspect the issue might still be related to the large amount of data in Prometheus. I added some logging in the code and didn't find any errors with the...

> @kr521, which version of coroot-node-agent and coroot are you using?

`coroot-node-agent:1.20.2`, `coroot:1.2.1`

![image](https://github.com/coroot/coroot/assets/3262751/e9772768-b91f-440d-b99d-cc53c47356e1) ![image](https://github.com/coroot/coroot/assets/3262751/2687c7c1-2551-4cb1-ae96-b9a81f8e00aa) I thought it was due to the large amount of data from a certain Prometheus metric. I disabled some metrics, but the problem persists.

I thought the issue was caused by all the machine-IDs generated by KVM being the same, so I regenerated the machine-IDs for all machines, but the problem still exists.