Custom TTL not working for Route53 records
What happened: I believe this was already reported in #2271, but it got no traction and the bot marked it as resolved.
The issue still occurs in my environment: in the Ingress I set the TTL to 60, but in Route 53 it remains at the default value of 300.
What you expected to happen:
When I set external-dns.alpha.kubernetes.io/ttl: "60", the value in Route 53 should change to 60.
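For context, the TTL annotation accepts either plain seconds or a Go-style duration string, so `"60"` and `"1m"` should be equivalent. A minimal Python sketch of that parsing behaviour, purely illustrative and not external-dns's actual code:

```python
import re

def parse_ttl(value: str) -> int:
    """Parse a TTL annotation value: either plain seconds ("60")
    or a Go-style duration string ("1m", "3m0s").
    Illustrative sketch only; external-dns itself is written in Go."""
    try:
        return int(value)  # plain seconds
    except ValueError:
        pass
    # Minimal duration parser for the units commonly used in TTLs.
    units = {"h": 3600, "m": 60, "s": 1}
    total, pos = 0, 0
    for match in re.finditer(r"(\d+)([hms])", value):
        if match.start() != pos:
            raise ValueError(f"invalid duration: {value!r}")
        total += int(match.group(1)) * units[match.group(2)]
        pos = match.end()
    if pos != len(value) or total == 0:
        raise ValueError(f"invalid duration: {value!r}")
    return total

print(parse_ttl("60"))   # 60
print(parse_ttl("1m"))   # 60
print(parse_ttl("180"))  # 180
```

Both forms should therefore resolve to the same number of seconds before the record is written to the provider.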
How to reproduce it (as minimally and precisely as possible):
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: echoserver
  namespace: nm-001
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    external-dns.alpha.kubernetes.io/aws-weight: "0"
    external-dns.alpha.kubernetes.io/set-identifier: green-echoserver
    external-dns.alpha.kubernetes.io/ttl: "1m"
spec:
  ingressClassName: alb
  rules:
    - host: "*.mydomain.com"
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: echoserver
                port:
                  number: 8080
```
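To confirm what Route 53 actually applied, the record sets returned by `aws route53 list-resource-record-sets --hosted-zone-id <zone-id>` can be inspected. A hedged Python sketch over a hypothetical sample response (record names and values are made up); note also that Route 53 alias records, which are often used for ALB targets, carry no TTL of their own, so a TTL annotation cannot affect them:

```python
# Hypothetical sample of the JSON returned by
#   aws route53 list-resource-record-sets --hosted-zone-id <zone-id>
sample_response = {
    "ResourceRecordSets": [
        {"Name": "echo.mydomain.com.", "Type": "A", "TTL": 300},
        {"Name": "echo.mydomain.com.", "Type": "TXT", "TTL": 300},
    ]
}

def record_ttls(response: dict) -> dict:
    """Map (name, type) to the applied TTL, skipping alias records,
    which have no TTL field of their own."""
    return {
        (rrset["Name"], rrset["Type"]): rrset["TTL"]
        for rrset in response["ResourceRecordSets"]
        if "TTL" in rrset
    }

print(record_ttls(sample_response))
# {('echo.mydomain.com.', 'A'): 300, ('echo.mydomain.com.', 'TXT'): 300}
```

Comparing these values against the annotation makes it easy to see whether the custom TTL was ever applied.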
Environment:
- External-DNS version: 0.14.0
- DNS provider: AWS Route53
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Met the same issue with external-dns.alpha.kubernetes.io/ttl: "180"; it does not update.
Route 53 is still using the default 300 seconds. 😔
```yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hm-airbyte-ingress
  namespace: production-hm-airbyte
  annotations:
    kubernetes.io/ingress.class: traefik
    # https://doc.traefik.io/traefik/routing/providers/kubernetes-ingress/#on-ingress
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
    traefik.ingress.kubernetes.io/router.tls: "true"
    # https://kubernetes-sigs.github.io/external-dns/latest/annotations/annotations
    external-dns.alpha.kubernetes.io/hostname: hm-airbyte.hongbomiao.com
    external-dns.alpha.kubernetes.io/ttl: "180"
    # https://cert-manager.io/docs/usage/ingress/#supported-annotations
    cert-manager.io/cluster-issuer: production-lets-encrypt-cluster-issuer
    # https://argo-cd.readthedocs.io/en/stable/user-guide/resource_hooks
    argocd.argoproj.io/hook: PostSync
  labels:
    app.kubernetes.io/name: hm-airbyte-ingress
    app.kubernetes.io/part-of: production-hm-airbyte
spec:
  rules:
    - host: hm-airbyte.hongbomiao.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hm-airbyte-airbyte-webapp-svc
                port:
                  number: 80
  tls:
    - hosts:
        - hm-airbyte.hongbomiao.com
      secretName: production-hm-airbyte-certificate
```
Any update on this? Custom TTL is not working for Route 53; it always keeps the default value of 300.
I can confirm that I've just observed this behaviour with external-dns v0.15.0.
I am experiencing the same issue. Any update?
/lifecycle frozen
/good-first-issue
@ivankatliarchuk: This request has been marked as suitable for new contributors.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.
In response to this:
/good-first-issue
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I can take a look at this
/assign
/unassign
/assign
I ran some tests; not for Ingress yet, only for Services at the moment.
/assign
Here are my fixtures:
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: extdns
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-v1
  namespace: extdns
  annotations:
    dns.company.com/label: controllertest-v1
    dns.issue/type: issue-4292
    external-dns.alpha.kubernetes.io/hostname: nginx-v1.a1.ex.com
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - port: 80
      name: http
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-v2
  namespace: extdns
  annotations:
    dns.company.com/label: controllertest-v2
    dns.issue/type: issue-4292
    external-dns.alpha.kubernetes.io/ttl: "1m"
    external-dns.alpha.kubernetes.io/hostname: nginx-v2.a1.ex.com
spec:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: true
  ports:
    - port: 80
      name: http
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-v3
  namespace: extdns
  annotations:
    dns.company.com/label: controllertest-v2
    dns.issue/type: issue-4292
    external-dns.alpha.kubernetes.io/ttl: "180"
    external-dns.alpha.kubernetes.io/hostname: nginx-v3.a1.ex.com
spec:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: true
  ports:
    - port: 80
      name: http
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: extdns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx
          name: nginx
          ports:
            - containerPort: 80
              name: http
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  namespace: extdns
  annotations:
    external-dns.alpha.kubernetes.io/hostname: nginx-ingress.a1.ex.com
    dns.issue/type: issue-4292
spec:
  # ingressClassName: nginx
  # rules:
  #   - host: hello-world.example
  #     http:
  #       paths:
  #         - path: /
  #           pathType: Prefix
  #           backend:
  #             service:
  #               name: nginx-v3
  #               port:
  #                 number: 80
  defaultBackend:
    service:
      name: for-ingress-backend
      port:
        number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: for-ingress-backend
  namespace: extdns
  annotations:
    description: required-for-ingress-backend
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx-with-ingress
  namespace: extdns
  labels:
    name: ingress-nginx-with-ingress
spec:
  rules:
    - host: nginx-ingress-v2.a1.ex.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx-v3
                port:
                  number: 80
```
The records updated successfully.
I still need to validate for Ingress; will do when/if I have time.
I just tested Ingress, and custom TTL works.
Closing, as it is confirmed that custom TTL works.
/close
@ivankatliarchuk: Closing this issue.
In response to this:
Closing, as it is confirmed that custom TTL works.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.