uptime-kuma
Selection of dependent monitors
Description
Added the ability to choose which monitors the current monitor depends on. For example, if we monitor a website whose functionality depends on a database server, then when the database server fails, the website's status is automatically set to "DEGRADED".
TODO:
- [x] The slave monitor's status is detected automatically, according to the status of the master monitor
- [x] Optionally suppress notifications when the master monitor is PENDING, DOWN or DEGRADED and the slave monitor is also DOWN (and likewise when it was DOWN or DEGRADED and is now UP); both behaviors are sketched below
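Roughly, the intended logic of these two items looks like this. This is only a sketch with assumed status constants and helper names, not the PR's actual code:

```js
const UP = 1;
const DOWN = 0;
const PENDING = 2;
const DEGRADED = 3; // illustrative value for the new status

function isImpaired(status) {
    return status === PENDING || status === DOWN || status === DEGRADED;
}

// An otherwise-UP slave is displayed as DEGRADED when any master is impaired.
function resolveSlaveStatus(ownStatus, masterStatuses) {
    if (ownStatus === UP && masterStatuses.some(isImpaired)) {
        return DEGRADED;
    }
    return ownStatus;
}

// Optionally skip the slave's DOWN notification while a master is impaired;
// the master's own alert already covers the outage.
function shouldNotify(ownStatus, masterStatuses, suppressWhenMasterImpaired) {
    return !(suppressWhenMasterImpaired && ownStatus === DOWN && masterStatuses.some(isImpaired));
}
```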
Type of change
- User interface (UI)
- New feature (non-breaking change which adds functionality)
Checklist
- [x] My code follows the style guidelines of this project
- [x] I ran ESLint and other linters for modified files
- [x] I have performed a self-review of my own code and tested it
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] My changes generate no new warnings
- [ ] My code needed automated testing, and I have added it (this is an optional task)
Screenshots (if any)
When will this feature be added? It's amazing and I definitely must have it 👍
@louislam
Hey,
I just tried using this feature, but it seems not to work.
I installed and started it with `git clone -b dependent-monitors https://github.com/karelkryda/uptime-kuma && cd uptime-kuma && npm ci && npm run build && node server/server.js`
And I got this error after I added a test monitor:
I just wanted to set up a blank monitor without any dependency
@mathiskir can you please try it now?
EDIT: please do a clean install (make sure there is no data from a previous try) and try to simulate the same situation
Works now.
@louislam pleaseee :p
Guys I need it! :)
@louislam Please 🙂
@louislam Thoughts on this? It's been open for 3 months.
@louislam, should I prepare this PR for merging (resolve conflicts), or do you not plan to merge it for now?
Hi @louislam, I'm sorry to ping you again, but I'm trying to keep these 2 PRs (#1236 and #1213) up-to-date with the master branch and I'd like to ask whether you have any estimate of when you'll want these PRs merged?
Thank you in advance
I didn't test it, but is it possible to have two monitors that depend on each other?
For example, monitors x and y: x has its dependent monitor set to y, and y has its dependent monitor set to x.
IMO this should not be possible
> I didn't test it, but is it possible to have two monitors that depend on each other? For example, monitors x and y: x has its dependent monitor set to y, and y has its dependent monitor set to x. IMO this should not be possible.
In fact, I think this is possible. I wonder if this is a problem or not, or how to solve it.
Just simply check in `addDependentMonitors` (maybe rename it to `checkAndAddDependentMonitors`?) whether any requested dependent monitor (`monitor_id: monitorID, depends_on: monitor.id`) already has its own `depends_on` set to `monitorID`? Roughly like the sketch below.
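For illustration, a minimal sketch of that check, assuming a hypothetical `monitor_group` table with (`monitor_id`, `depends_on`) rows and a plain sqlite-style `db` helper; this is not the PR's actual schema or data layer:

```js
async function checkAndAddDependentMonitors(monitorID, dependsOnIDs, db) {
    // First pass: reject direct cycles, where a requested master already
    // lists this monitor as its own master.
    for (const dependsOn of dependsOnIDs) {
        const reverse = await db.get(
            "SELECT 1 FROM monitor_group WHERE monitor_id = ? AND depends_on = ?",
            [ dependsOn, monitorID ]
        );
        if (reverse) {
            throw new Error(`Monitor ${dependsOn} already depends on monitor ${monitorID}`);
        }
    }
    // Second pass: all requested dependencies are safe, so store them.
    for (const dependsOn of dependsOnIDs) {
        await db.run(
            "INSERT INTO monitor_group (monitor_id, depends_on) VALUES (?, ?)",
            [ monitorID, dependsOn ]
        );
    }
}
```

Note this only catches direct two-monitor cycles; a longer chain (x depends on y, y on z, z on x) would need a transitive walk over `depends_on`.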
@louislam Do you think this is okay for milestone 1.16.0 or 1.17.0?
@karelkryda, this feature looks amazing! May I make a few suggestions?
- I find your use of the word "master" confusing. Can I suggest replacing this wording: "Select the monitor(s) on which this monitor depends. If the master monitor(s) fails, this monitor will be affected too." with "Select the monitor(s) on which this monitor depends - if any dependency monitor fails, this monitor will be marked DEGRADED."? Likewise, "Monitor is degraded, because at least one master monitor is pending, down or degraded" could become "Monitor is degraded, because at least one dependency monitor is pending, down or degraded".
- Can you create a new "Virtual" (or maybe even Dummy or Master) check type that doesn't do anything itself, but just effectively reports on the status of its dependencies? In other words, UP means all dependency monitors are UP, DEGRADED means one or more dependency monitors are DOWN, and DOWN means all dependency monitors are DOWN (see the sketch after this list). Even just pinging localhost wouldn't accomplish this, since it would only show DEGRADED even if all the dependency monitors are DOWN.
This would be super helpful for a couple of scenarios:
- Say a host "webserver" quits answering pings, but is still responding to http requests. I want a "virtual" monitor that says that this host is degraded.
- Say I'm monitoring 3 physical locations (A, B, and C) with 3 hosts in each (A1, A2, A3, B1, B2, B3, C1, C2, C3). If location A is completely offline, end users won't understand or care about A1, A2, and A3 being offline; they just want to see a check that says "Location A" is DOWN. No single host check can tell whether A itself is down, and a user doesn't want a barrage of messages about various hosts being up or down; they want to see the status of that location.
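For what it's worth, the aggregation rule described above could look roughly like this. A sketch with illustrative status values, not uptime-kuma's actual constants:

```js
const UP = 1;
const DOWN = 0;
const DEGRADED = 3; // illustrative value

// A "virtual" monitor runs no check of its own; its status is purely an
// aggregate of its dependencies' statuses.
function aggregateVirtualStatus(dependencyStatuses) {
    if (dependencyStatuses.length === 0) {
        return UP; // nothing to aggregate
    }
    const downCount = dependencyStatuses.filter((status) => status === DOWN).length;
    if (downCount === 0) {
        return UP; // all dependencies are UP
    }
    if (downCount === dependencyStatuses.length) {
        return DOWN; // every dependency is DOWN
    }
    return DEGRADED; // some, but not all, dependencies are DOWN
}
```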
@jeremyrnelson
- Some time ago I renamed it from "dependent" to "master", because we select the monitors that will affect this monitor, not the monitors that this one will affect. English is not my primary language and sometimes I'm not sure of the exact meaning.
- I don't know if we fully understand each other, but I plan to create the ability to combine multiple monitors into one. For example: we have 2 Docker servers with Portainer and want to show the 2 Portainer monitors as one. It will therefore be possible to make a group of several monitors that acts as a single monitor.
Did this ever get added?
I have not tried this, but would the following be possible: A - network, B - service, C - service.
Let's say A goes down. Currently I would get notifications that A, B, and C are all down. It would be nice if, with this, I could instead get one message saying that A is down and B and C are affected.
The best example I have found:
I have an UPTIME-KUMA instance hosted at home.
From this UPTIME-KUMA I monitor my internet connection using a ping to 8.8.8.8; we will call this probe MAISON-ALIVE.
From the same UPTIME-KUMA I monitor my websites hosted in a data center in PARIS; we will call these probes sites-monitored, and they will be children of MAISON-ALIVE.
Everyone agrees with me that if MAISON-ALIVE is DOWN, then:
- sites-monitored is DOWN in UPTIME-KUMA, but this is not 100% certain
- the notification must be sent only for MAISON-ALIVE
- the charts and events of sites-monitored must still record it
@louislam Can this functionality be integrated?
@louislam Is this feature soon to be integrated?
Interesting too.
This MR https://github.com/louislam/uptime-kuma/pull/2693 does not concretely meet the parent-child (hierarchy) need
@louislam Can you give us some visibility for the parent-child concept?
PR #2693 is awesome but doesn't accomplish this; in fact it has almost the opposite effect.
- PR #2693: If ANY service in the group is down, the group monitor alerts as down.
- PR #1236: If the group monitor is down (the gateway/public IP of a site, a server running multiple services, etc.), only alert that the group monitor is down, instead of the 1, 2, 5, 10+ monitors that might exist underneath it. (The sketch below contrasts the two.)
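To make the contrast concrete, illustrative pseudologic only, not either PR's actual code:

```js
// PR #2693 (monitor group): the group alerts when ANY child is down.
function groupShouldAlert(childStatuses) {
    return childStatuses.some((status) => status === "DOWN");
}

// PR #1236 (dependent monitors): a child alerts only while its parent is
// still up; once the parent is down, its single alert covers the children.
function childShouldAlert(childStatus, parentStatus) {
    return childStatus === "DOWN" && parentStatus === "UP";
}
```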
@karelkryda Could you add the following to the description (allowing these issues to show up as having a PR ready to resolve them ^^): Resolves #1089 Resolves #2261 Resolves #2348 Resolves #1887 Resolves #1534 Resolves #3238 Resolves #2335
(there are likely more that can be linked to the description, but these were the ones I found at a cursory glance)
Resolves #2487 Resolves #3548
Any news on this? It's a really interesting functionality and I'd love to be using it already 😇
Any update on this at all? This would be a great feature. Currently, I have a few services behind my reverse proxy and therefore get alerts for all of them when the reverse proxy goes down. It would be ideal if I could make all services a dependency of the reverse proxy such that when the proxy goes down, I only get an alert for the proxy itself rather than everything else behind it as well.
This PR is quite a way behind master and at this point is incompatible => it would need a heavy rebase to be reviewable, and it would need to address https://github.com/louislam/uptime-kuma/pull/2693#issuecomment-1413794719