dns_cluster
Add support for overriding the host of the node names
With #10 (#11) this would not be needed anymore, right? You could just omit the cluster domain in the hostname.
Not sure, does #11 result in a different host name for the node? I'm currently using #{ip}-#{namespace}.pod.cluster.local as the value for the RELEASE_NODE env variable (as seen in the example included in this PR). I assume that value would have to change as well then?
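For context, a minimal sketch of how such a RELEASE_NODE value could be assembled in a release env script (rel/env.sh.eex); the MY_POD_IP and MY_POD_NAMESPACE variables are assumptions, injected via the Kubernetes Downward API, and pod DNS records expect the pod IP with its dots rewritten as dashes:

#!/bin/sh
# rel/env.sh.eex -- a hypothetical sketch, not taken from this PR.
# Rewrite the pod IP's dots to dashes to match the pod DNS record format.
POD_DNS_HOST="$(echo "$MY_POD_IP" | tr . -)-${MY_POD_NAMESPACE}.pod.cluster.local"
export RELEASE_DISTRIBUTION=name
export RELEASE_NODE="myapp@${POD_DNS_HOST}"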
Oh right, I misunderstood your PR. Just out of curiosity: why do you want to use such long node names? Does the following not fit the bill for you?
env:
  - name: MY_POD_IP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: RELEASE_DISTRIBUTION
    value: name
  - name: RELEASE_NODE
    value: myapp@$(MY_POD_IP)
  - name: RELEASE_COOKIE
    value: foo-cookie
edit: obviously you'd also have to pass the namespace in order to build your query. If the service is in the same namespace, you wouldn't have to do that IF #11 was merged ;) The query would then just be my-app-headless-service-name, i.e.:
{DNSCluster, query: "my-app-headless-service-name"}
But that currently doesn't work.
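For reference, the fully-qualified query does work today. A minimal sketch of the corresponding supervision-tree setup, with placeholder service and namespace names:

children = [
  # DNSCluster periodically resolves the headless service and connects
  # to the peer nodes it discovers; "my-namespace" is a placeholder.
  {DNSCluster, query: "my-app-headless-service-name.my-namespace.svc.cluster.local"}
  # ... other children
]

Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)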
To be honest, I forgot why exactly 😅
I think it had to do with the fact that I needed this to be able to connect remotely to the BEAM instance (I'd hand-edit /etc/hosts). But there is probably a better way :)
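One such better way (a sketch, assuming a Mix release named myapp running in a pod called myapp-0) is to open a remote IEx shell from inside the pod, which needs no /etc/hosts editing:

# pod name and release install path are placeholders
kubectl exec -it myapp-0 -- /app/bin/myapp remote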
Shall we close this one if there are other ways? Sorry for the delay. But if you have @mruoss here, you are in good hands :)
Sure, no problem :)!