Dynamic tags applied like health checks.
In issue #867 I suggested an idea to make tags that depend on the result of scripts, just like health checks.
I run mongo in the cloud with multiple machines all spun up from the same image. On boot they will query for mongodb.service.consul and join the cluster. That all works flawlessly. Being a good Ops person, I have a cron job that kills random machines in my infrastructure at random times. It will eventually hit the mongodb master, the system will hiccup, and a slave will be promoted automatically. Life is fantastic.
In comes Legacy Software that must connect directly to the master mongodb instance. I would like to have master.mongodb.service.consul resolve to the one IP of the master in the cluster.
Current solution (runs via cron on all machines; a rough sketch follows the list):
- Get my service definition through API
- Check the status of the cluster. This determines if we should or should not have a tag.
- Determine if the service definition's tag list needs to be updated.
- If an update is required, POST data back to the API.
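A minimal sketch of that cron job, assuming a local Consul agent on 127.0.0.1:8500, jq, and the mongo shell; the field handling and the master check are illustrative, not the exact job described above:

#!/usr/bin/env bash
# Hypothetical sketch of the cron-driven workaround; assumes a local Consul
# agent on 127.0.0.1:8500, jq, and the mongo shell on each machine.
set -euo pipefail

SERVICE_ID="mongodb"
TAG="master"
AGENT="http://127.0.0.1:8500"

# Get my service definition through the API.
svc=$(curl -s "${AGENT}/v1/agent/services" | jq -c ".\"${SERVICE_ID}\"")

# Check the status of the cluster: is this node the replica-set master?
if [ "$(mongo --quiet --eval 'db.isMaster().ismaster')" = "true" ]; then
  new_tags=$(echo "$svc" | jq -c "(.Tags // []) + [\"${TAG}\"] | unique")
else
  new_tags=$(echo "$svc" | jq -c "(.Tags // []) - [\"${TAG}\"]")
fi

# Determine if the service definition's tag list needs to be updated.
if [ "$(echo "$svc" | jq -c '(.Tags // []) | sort')" = "$(echo "$new_tags" | jq -c 'sort')" ]; then
  exit 0  # nothing to do
fi

# If an update is required, push the definition back through the API.
# (A real script would also carry over any check definitions.)
echo "$svc" \
  | jq -c "{Name: .Service, ID: .ID, Address: .Address, Port: .Port, Tags: ${new_tags}}" \
  | curl -s -X PUT --data-binary @- "${AGENT}/v1/agent/service/register"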
Ideal solution:
- Set up my service definition with dynamic tags.
- Write a script that returns the status of the cluster, with an exit code of 0 meaning to apply the tag (see the sketch after the sample JSON below).
- Let consul update itself automatically.
Sample JSON (one static tag, one dynamic tag):
{
  "service": {
    "name": "mongodb",
    "tags": [
      "fault-tolerant",
      {
        "name": "master",
        "script": "/usr/local/bin/mongo-is-master.sh",
        "interval": "10s"
      }
    ],
    "address": "127.0.0.1",
    "port": 8000,
    "checks": [
      {
        "script": "/usr/local/bin/mongo-health-check.sh",
        "interval": "10s"
      }
    ]
  }
}
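The /usr/local/bin/mongo-is-master.sh referenced above is not shown in the issue; a minimal sketch of what such a script could look like, assuming the mongo shell is installed locally, where exit code 0 means "apply the tag":

#!/usr/bin/env bash
# Hypothetical mongo-is-master.sh: exit 0 when this node is the replica-set
# primary (tag should be applied), non-zero otherwise (tag should be removed).
state=$(mongo --quiet --eval 'db.isMaster().ismaster' 2>/dev/null) || exit 2
[ "$state" = "true" ] && exit 0
exit 1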
This sort of solution could apply to issue #155 and #867, and possibly others.
Interesting idea. I think the work-around you mentioned is a decent way of doing this, but I'm going to leave this open as a thought ticket for now. Thanks!
@fidian with respect to your statement: "On boot they will query for mongodb.service.consul and join the cluster."
Can you describe this a bit more, since I want to set up something similar for a redis cluster? Do you use some handcrafted script (e.g., via consul-template or the REST API) to query mongodb.service.consul and get all registered nodes for that service, or are you relying on the DNS mechanism for that? At least one problem with relying solely on the DNS mechanism is that if a node registers itself (e.g., with registrator) in the consul cluster before it does the DNS lookup for mongodb.service.consul, it might get back its own IP address, which would not be helpful for joining the cluster... :-)
This would be useful for services like zookeeper, which dynamically elect a leader node among themselves every time a node joins or leaves the cluster, and where the leader can be configured so that it no longer accepts client connections. Having dynamic tags like this via a check would make it so I could query consul for the non-leader nodes and not have a client trying to connect to the leader at all.
@Kosta-Github asked how I manage to auto cluster my mongo instances.
- Consul is hooked up through dnsmasq.
- Consul is started before mongo.
- The health check fails unless mongo reports success and mongo is part of a cluster. This second part is vital - the health check fails until mongo is in a cluster.
- The init script for mongo queries DNS for other members in the cluster. This will only report mongo instances that are already in a replica set.
- If IPs are found, become a slave and connect to the IP that we found.
- With no IPs, configure as a master and enable the replica set, which then makes the health check pass.
The only snag is that I must start one instance of mongo initially so it will bootstrap the replica set. Once it is running I am able to add and remove instances to my replica set.
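A rough sketch of that boot-time join logic, with illustrative commands and port; this is an assumption of how it could be wired up, not the actual init script:

#!/usr/bin/env bash
# Illustrative boot-time join logic; assumes mongod is already running locally
# with --replSet configured and that Consul DNS is reachable through dnsmasq.
MY_IP=$(hostname -I | awk '{print $1}')

# Ask DNS for healthy mongo instances. Because the health check only passes
# once a node is in the replica set, only clustered nodes are returned here.
peers=$(dig +short mongodb.service.consul | grep -v "^${MY_IP}$" || true)

if [ -n "$peers" ]; then
  # IPs found: locate the current primary through any member, then have the
  # primary add this node as a new (secondary) member.
  peer=$(echo "$peers" | head -n1)
  primary=$(mongo --host "$peer" --quiet --eval 'db.isMaster().primary')
  mongo --host "$primary" --quiet --eval "rs.add(\"${MY_IP}:27017\")"
else
  # No IPs: bootstrap the replica set ourselves; once initiated, the health
  # check passes and this node becomes discoverable to the next machine.
  mongo --quiet --eval "rs.initiate()"
fi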
@fidian thanks for the explanation; just one more question: what does your dnsmasq config look like? :-)
@Kosta-Github it looks like the following. I'm also happy to answer questions off this issue; feel free to email me directly at [email protected] so we don't continue to pollute this thread.
server=/consul./127.0.0.1#8600
+1 for this feature request
+1
+1
+1
+1
+1
This would be very, very nice. There are all kinds of things for which clients need to connect to the master explicitly. A dynamic tag would be so elegant, and so much better than a bunch of extra scripts to tweak tags.
+1
+1
+1 A tag plus script would be very useful to implement custom DNS response logic.
Currently I have to run two 'services' for a similar situation: a "redis" service which includes all nodes in the cluster, and then a "redis-master" service.
This has the unfortunate side effect that most of the redis nodes are always 'failing' the health check because they're not the master.
Would definitely appreciate this feature as a way around this.
Consul 0.6 added a "tag override" feature that's useful for implementing schemes like this, though the logic is run outside of Consul, not from Consul itself as suggested here. Here's the issue that brought it in https://github.com/hashicorp/consul/issues/1102.
Here's a bit of the documentation, from https://www.consul.io/docs/agent/services.html:
The enableTagOverride can optionally be specified to disable the anti-entropy feature for this service. If enableTagOverride is set to TRUE then external agents can update this service in the catalog and modify the tags. Subsequent local sync operations by this agent will ignore the updated tags.
This would let an external agent, like a script working with redis-sentinel, apply the tags to the current master via Consul's catalog API.
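As a rough sketch of that workaround (the redis service, node name, address, and file path are illustrative assumptions, not something from this thread):

# 1. Local service definition with anti-entropy for tags disabled,
#    e.g. /etc/consul.d/redis.json:
cat <<'EOF' > /etc/consul.d/redis.json
{
  "service": {
    "name": "redis",
    "port": 6379,
    "enableTagOverride": true
  }
}
EOF

# 2. An external script (for example one watching redis-sentinel) re-registers
#    the current master with the tag through the catalog API:
curl -s -X PUT http://127.0.0.1:8500/v1/catalog/register --data-binary @- <<'EOF'
{
  "Node": "redis-node-1",
  "Address": "10.0.0.5",
  "Service": {
    "ID": "redis",
    "Service": "redis",
    "Tags": ["master"],
    "Port": 6379
  }
}
EOF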
+1 Would love to see this instead of the workaround with tag overriding.
+1
+1
This is a brilliant idea :) I would also want this for a redis cluster!
+1 This would give us the ability to determine which application version should receive LB traffic in marathon.
+1
+1
Hi,
I've added support for dynamic tags in my dynamic-tags branch.
If you are interested in this feature, please build and test it; any critique is appreciated. If everything is ok, I'll make a PR.
The syntax for service registration is the following:
{
  "service": {
    "name": "mongodb",
    "tags": ["tag1"],
    "dynamictags": [
      {
        "name": "master",
        "script": "/usr/local/bin/mongo-is-master.sh",
        "interval": "10s"
      }
    ],
    "address": "127.0.0.1",
    "port": 8000,
    "checks": [
      {
        "script": "/usr/local/bin/mongo-health-check.sh",
        "interval": "10s"
      }
    ]
  }
}
Was there ever a pull request for this topic? It still looks like something that is needed.
+1 This is much better than the current enableTagOverride or multiple-service workarounds imho. Please pull this!
I've merged the master branch from hashicorp/consul into my dynamic-tags branch. If you are interested in this feature, please build and test it.
We've tested it in our environment and it worked. However, I'd like to receive more feedback before I make a PR. Error reports will be highly appreciated.
A colleague tried to build it and add it to our internal debian repo, but was apparently stuck in dependency hell and gave up.