Adds support for IPv6 containers
Uses SERVICE_IPV6PORTS=port1/proto,port2/proto as metadata (because IPv6 ports aren't mapped and can't be found by inspecting the container).
The specified ports are mapped as ExposedPorts for that container. The service.ID is suffixed with ":ipv6" so as not to clash with IPv4 listening on the same port.
What about something like SERVICE_<port>_IPV6=<tcp/udp>?
That's an option, although it wouldn't be consistent with the way SERVICE_<port>_X behaves, as it only modifies ports that are exposed or explicitly published (which is not the case for IPv6).
But if you want to have it implemented this way, I can modify it.
Depends on how you look at it. It's just extra meta data on port X.
Jeff Lindsay http://progrium.com
Modified it to use SERVICE_<port>_IPV6=<tcp/udp>
If no protocol is specified, the port is registered for both tcp and udp (e.g. for DNS).
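A minimal compose-style sketch of that metadata (service and image names here are placeholders, not taken from this PR):

```yaml
# Sketch only: hypothetical services showing SERVICE_<port>_IPV6 usage.
dns:
  image: example/dns          # placeholder image
  environment:
    - SERVICE_53_IPV6=        # no protocol given: registered for both tcp and udp
web:
  image: example/web          # placeholder image
  environment:
    - SERVICE_8080_IPV6=tcp   # only 8080/tcp is registered over IPv6
```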
assign @andyshinn
Rebased the patch; besides SERVICE_<port>_IPV6=<tcp/udp>, it now also adds support for the metadata SERVICE_NAME_IPV6=name and SERVICE_NAME_IPV4=name.
When you specify SERVICE_NAME_IPV6 it will override SERVICE_NAME for IPv6 services. (SERVICE_NAME_IPV4 does the same for IPv4 services).
Use case: consul and elasticsearch with IPv4 and IPv6 nodes. The IPv4 nodes get port 19200 mapped (because of the NAT) and the IPv6 nodes (no NAT) are directly reachable on the default port 9200. If you have a single SERVICE_NAME=elastic, you'll get IPv6 and IPv4 addresses, but the ports differ for each stack.
Specifying SERVICE_NAME=elastic and SERVICE_NAME_IPV4=elastic-legacy now gives you elastic.service.consul containing only IPv6 addresses and elastic-legacy.service.consul containing the IPv4 services.
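For that use case, the two node types could be configured along these lines (a sketch; the image name and the port mapping are assumptions based on the description above):

```yaml
elastic-v4:                            # NATed IPv4 node: host port 19200 -> 9200
  image: elasticsearch                 # placeholder image
  ports:
    - "19200:9200"
  environment:
    - SERVICE_NAME=elastic
    - SERVICE_NAME_IPV4=elastic-legacy # IPv4 registrations become elastic-legacy.service.consul

elastic-v6:                            # IPv6 node: no NAT, reachable on 9200 directly
  image: elasticsearch                 # placeholder image
  environment:
    - SERVICE_9200_IPV6=tcp            # register 9200/tcp over IPv6
    - SERVICE_NAME=elastic             # stays elastic.service.consul for IPv6
```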
It doesn't look like this was rebased properly against our master. The branch is missing commits from about 20 days ago. Can you check again?
This seems to work as advertised. Here is a docker-compose.yml I used for testing:
consul:
  image: gliderlabs/consul-server:0.5
  net: "host"
  ports:
    - "8500:8500"
  command: "-bootstrap-expect 1"
registrator:
  build: ../..
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  net: "host"
  command: consul://localhost:8500
redis:
  image: redis
  ports:
    - "6379"
  environment:
    - SERVICE_6379_IPV6=tcp
    - SERVICE_NAME_IPV6=redis6
SRV and AAAA records seem to be fine:
Andys-MBP:docker-redis andy$ dig +noall +answer +additional @192.168.99.101 -p 8600 redis6.service.consul SRV
redis6.service.consul. 0 IN SRV 1 1 6379 dev.node.dc1.consul.
dev.node.dc1.consul. 0 IN AAAA fe80::242:ac11:22
Andys-MBP:docker-redis andy$ dig +noall +answer +additional @192.168.99.101 -p 8600 redis6.service.consul AAAA
redis6.service.consul. 0 IN AAAA fe80::242:ac11:22
Andys-MBP:docker-redis andy$ dig +noall +answer +additional @192.168.99.101 -p 8600 redis6.service.consul A
HTTP API as well:
Andys-MBP:docker-redis andy$ curl http://192.168.99.101:8500/v1/catalog/service/redis6
[{"Node":"dev","Address":"10.0.2.15","ServiceID":"dev:consul_redis_1:6379:ipv6","ServiceName":"redis6","ServiceTags":null,"ServiceAddress":"fe80::242:ac11:22","ServicePort":6379}]
LGTM after proper rebase.
Also, it looks like you have a bunch of IPv6 stuff in for Consul in the last 15 days: https://github.com/hashicorp/consul/search?utf8=%E2%9C%93&q=author%3A42wim&type=Issues. Is it best to wait for a Consul release before releasing this in Registrator?
@andyshinn people not using IPv6 will not see any difference; people using it and waiting for the consul changes will appreciate that registrator already works with it :-)
Andy, I've rebased against master. I agree with @jovandeginste , the issues I fixed are issues you can mostly encounter when you go for IPv6-only services.
A little bit disappointing implementation, treating IPv6 as worse than IPv4:
- you can discover exposed ports
- just register them with the IPv6 address
- done
That should be the default behaviour, not something enabled by cryptic env vars.
Handling bad containers should be done by fixing them, or at worst with cryptic env vars.
Well, you can only discover them if they are specified in the Dockerfile, which is not something you can assume for every image. Also, the behaviour should be exactly the same for IPv4 and IPv6; can you give an example where IPv6 is treated worse?
@42wim I think he means you may have to explicitly add _IPV6=..., although we tried to make sure you can have it with the least amount of 'extra' env vars.
@seeder the main concern here is to not break existing (v4) things. Perhaps you have an exposed management port which is not published (mapped over v4) and you want it to remain like that over v6? Also consider that exposing ports suddenly becomes more documentation than necessity: any port in your container may be reachable over v6, depending on your ip6tables rules...
It might be useful to add a startup parameter to registrator, called 'ipv6-first' or similar, which does more or less what you suggest: register any exposed (!= published) port over IPv6 unless some env var says not to.
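In compose terms that might look like this (the '-ipv6-first' flag is purely hypothetical; it does not exist in this PR):

```yaml
registrator:
  image: gliderlabs/registrator                  # placeholder image tag
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock
  net: "host"
  # '-ipv6-first' is a hypothetical flag: register all exposed
  # (not necessarily published) ports over IPv6 by default.
  command: -ipv6-first consul://localhost:8500
```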
@jovandeginste , Yes, that's what I mean.
If you are concerned about people breaking stuff, be aware that it's a conscious decision for them to switch IPv6 on in docker, so they know what they are doing and what can happen.
Having every exposed port published when using the container-internal address is exactly what people want. This is consistent with current IPv4 behaviour in Docker. Which ports to expose is not registrator's domain, it's Docker's, and Docker already controls it. If you find the IPv6 implementation in Docker lacking features, address that in Docker, not in registrator.
Ideally, a simple flag on the registrator command line to enable or disable the use of IPv6/IPv4 addresses when using internal addressing should be the maximum extent of mandatory regulation here.
Environment vars like the ones proposed above should be optional, not a limitation of any kind.
I would imagine you don't use just IPv4 or IPv6 across containers and so a simple switch might not be good enough. Definitely interested in simpler implementations though...
You don't need "OR" there, it can be "AND"; it's just a matter of treating them both as first-class citizens. On the other hand, nothing stops people from spawning two registrators, one for each addressing scheme and with different rules. After all, we are in the world of containers.
I've got to say, I was a bit surprised to find that my IPv6-enabled containers weren't having their exposed ports registered with the IPv6 address of the container. It's certainly the way I expected registrator to work.