Pending on Ingress
Hey folks. I am trying to deploy my standard test app with Acorn and I am failing to see the Ingress stabilize (err... work at all).
I am using an EKS cluster (1.21) with the AWS LB Controller.
I know the controller works because if I deploy my K8s YAML file it works just fine. This is the (pretty verbose) YAML I am using (also available here in the repo):
apiVersion: v1
kind: Service
metadata:
  name: redis-server
  labels:
    app: redis-server
    tier: cache
spec:
  type: ClusterIP
  ports:
  - port: 6379
  selector:
    app: redis-server
    tier: cache
---
apiVersion: v1
kind: Service
metadata:
  name: yelb-db
  labels:
    app: yelb-db
    tier: backenddb
spec:
  type: ClusterIP
  ports:
  - port: 5432
  selector:
    app: yelb-db
    tier: backenddb
---
apiVersion: v1
kind: Service
metadata:
  name: yelb-appserver
  labels:
    app: yelb-appserver
    tier: middletier
spec:
  type: ClusterIP
  ports:
  - port: 4567
  selector:
    app: yelb-appserver
    tier: middletier
---
apiVersion: v1
kind: Service
metadata:
  name: yelb-ui
  labels:
    app: yelb-ui
    tier: frontend
spec:
  type: NodePort
  ports:
  - port: 80
  selector:
    app: yelb-ui
    tier: frontend
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "yelb-ui"
  annotations:
    kubernetes.io/ingress.class: alb # check this, your ingress.class may be different
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
  labels:
    app: "yelb-ui"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: "yelb-ui"
          servicePort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yelb-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yelb-ui
      tier: frontend
  template:
    metadata:
      labels:
        app: yelb-ui
        tier: frontend
    spec:
      containers:
      - name: yelb-ui
        image: mreferre/yelb-ui:0.7
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis-server
      tier: cache
  template:
    metadata:
      labels:
        app: redis-server
        tier: cache
    spec:
      containers:
      - name: redis-server
        image: redis:4.0.2
        ports:
        - containerPort: 6379
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yelb-db
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yelb-db
      tier: backenddb
  template:
    metadata:
      labels:
        app: yelb-db
        tier: backenddb
    spec:
      containers:
      - name: yelb-db
        image: mreferre/yelb-db:0.5
        ports:
        - containerPort: 5432
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: yelb-appserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: yelb-appserver
      tier: middletier
  template:
    metadata:
      labels:
        app: yelb-appserver
        tier: middletier
    spec:
      containers:
      - name: yelb-appserver
        image: mreferre/yelb-appserver:0.5
        ports:
        - containerPort: 4567
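For completeness, this is how I deploy and check the native version (the file name is just what I use locally):
$ kubectl apply -f yelb-alb-ip.yaml
$ kubectl get ingress yelb-ui -n default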
This is the Acornfile I am deriving from the YAML above:
args: {
    scale_ui: 2
    scale_appserver: 3
}
containers: {
    "yelb-ui": {
        image: "mreferre/yelb-ui:0.7"
        scale: args.scale_ui
        ports: publish: "80/http"
        dependsOn: "yelb-appserver"
    }
    "yelb-appserver": {
        image: "mreferre/yelb-appserver:0.5"
        scale: args.scale_appserver
        ports: "4567/tcp"
        dependsOn: [
            "redis-server",
            "yelb-db"
        ]
    }
    "redis-server": {
        image: "redis:4.0.2"
        ports: "6379/tcp"
    }
    "yelb-db": {
        image: "mreferre/yelb-db:0.5"
        ports: "5432/tcp"
    }
}
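I am launching it from the directory that holds the Acornfile. I believe the args block can also be overridden with matching flags (e.g. --scale_ui 3), though the defaults are fine for this test:
$ acorn run .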
The stack seems to come up fine and all containers in the Acorn seem operational, but I fail to see the Ingress being configured. This is what the two ingresses look like (one from the "native YAML" and one from the Acornfile):
$ kubectl get ingress -A
NAMESPACE                 NAME      CLASS    HOSTS                                         ADDRESS                                                                 PORTS   AGE
default                   yelb-ui   <none>   *                                             k8s-default-yelbui-9a80765730-1696724264.us-west-2.elb.amazonaws.com   80      8m7s
red-flower-1efda903-e8c   yelb-ui   <none>   yelb-ui.red-flower.pm64pd.alpha.on-acorn.io                                                                           80      104m
$
Am I doing something wrong in the Acornfile?
Can you give us the output of kubectl get ingress -A -o yaml? I want to compare the yaml for the two ingresses.
The fact that you got as far as you did means nothing is wrong with your acornfile. It created a proper ingress resource:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "yelb-ui"
  annotations:
    kubernetes.io/ingress.class: alb # check this, your ingress.class may be different
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
  labels:
    app: "yelb-ui"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: "yelb-ui"
          servicePort: 80
...but that should have a status field with a loadBalancer object. The fact that it doesn't means the ingress controller isn't picking it up.
Normally, I would suspect you need to set the ingress class via:
acorn install --ingress-class-name alb
but your functioning ingress in the default namespace doesn't have a class set either. So that's why I want to compare the two. I suspect it has some annotations set on it that are allowing your ingress controller to pick it up.
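While you gather that, kubectl describe on the Acorn-created ingress may also surface controller events (namespace taken from your earlier output); if the controller never picks the ingress up, the Events section will typically be empty:
$ kubectl describe ingress yelb-ui -n red-flower-1efda903-e8c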
Oooh, my bad, I misread your original post. That ingress YAML you posted is from your working, manually deployed application.
Everything still holds true except this statement:
The fact that you got as far as you did means nothing is wrong with your acornfile. It created a proper ingress resource:
Can you still share the yaml output of both ingresses? I'm fairly certain it is going to be the absence of these annotations:
annotations:
  kubernetes.io/ingress.class: alb # check this, your ingress.class may be different
  alb.ingress.kubernetes.io/scheme: internet-facing
  alb.ingress.kubernetes.io/target-type: ip
And you can still try to see if
acorn install --ingress-class-name alb
resolves it, but that sets a field on the ingress called ingressClassName and your ingress controller's documentation indicates it uses the older annotation kubernetes.io/ingress.class, so I don't know if it will respect the ingressClassName field or not.
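For reference, these are the two selection mechanisms side by side (a sketch, not your actual objects). The older annotation your controller documents:
metadata:
  annotations:
    kubernetes.io/ingress.class: alb
versus the newer field that acorn install --ingress-class-name alb sets:
spec:
  ingressClassName: alb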
Thanks! Here is the output of the command:
$ kubectl get ingress -A -o yaml
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/target-type: ip
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"alb.ingress.kubernetes.io/scheme":"internet-facing","alb.ingress.kubernetes.io/target-type":"ip","kubernetes.io/ingress.class":"alb"},"labels":{"app":"yelb-ui"},"name":"yelb-ui","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"yelb-ui","servicePort":80},"path":"/*"}]}}]}}
      kubernetes.io/ingress.class: alb
    creationTimestamp: "2022-08-17T18:53:41Z"
    finalizers:
    - ingress.k8s.aws/resources
    generation: 1
    labels:
      app: yelb-ui
    name: yelb-ui
    namespace: default
    resourceVersion: "74707"
    uid: 416614cd-f889-45df-a39c-7d70d8cedcc3
  spec:
    rules:
    - http:
        paths:
        - backend:
            service:
              name: yelb-ui
              port:
                number: 80
          path: /*
          pathType: ImplementationSpecific
  status:
    loadBalancer:
      ingress:
      - hostname: k8s-default-yelbui-9a80765730-1696724264.us-west-2.elb.amazonaws.com
- apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata:
    annotations:
      acorn.io/targets: '{"yelb-ui.red-flower.pm64pd.alpha.on-acorn.io":{"port":80,"service":"yelb-ui"}}'
      apply.acorn.io/applied: H4sIAAAAAAAA/4xSzW7bPBB8lQ97lpQ4+olN4Du0t6KXHnKrcliSa0sIRRLkyokh8N0LKnGd1k3RG6WZHc4sZ4GJGDUyglgArXWMPDob10/lgq1Gd8MYDsQRBCw9nMjIch6rQLrcG/dMofJT13hdofEDVs6W58EexNKDd4F7ENvboodI4ThCAei9OVU/L3DPlkJ5OD6BgNEyBYvmgh43xX9fR6v//+T9FxsZraKPNCxOBAIu5v5GjB5VZq/QR8Q4y1I5y/TCIABSASrQuqSHcaLIOHkQdjamAIOSzK+rQ+//7Ol3wpWXM2FCiwfSIIDDTO+RdZmKzvpvD3OdY8A4gID7TjZNfddp2naSdCu7u2bbtLVsd5t9e6+7VtdyV3c54pXke4uXIOWG9hp3t3VJW5XnoieV84fZUATxfYHBRb4o/UtroICB2WcZjzy8ykhUT2R1/vkWOx+vbOaurcA8SQq5dCmlYtUBATfwenw4+Tz2LdB+fIH0mNJj9s7I8/p6xqH+jCb3LIBYUko/AgAA//+uR/tTKAMAAA
      apply.acorn.io/owner-gvk: internal.acorn.io/v1, Kind=AppInstance
      apply.acorn.io/owner-name: red-flower
      apply.acorn.io/owner-namespace: acorn
      apply.acorn.io/owner-sub-context: ""
    creationTimestamp: "2022-08-17T17:09:01Z"
    generation: 1
    labels:
      acorn.io/app-name: red-flower
      acorn.io/app-namespace: acorn
      acorn.io/managed: "true"
      acorn.io/service-name: yelb-ui
      apply.acorn.io/hash: 76b44326de86bed5b6248453b591f57d65d3b936
    name: yelb-ui
    namespace: red-flower-1efda903-e8c
    resourceVersion: "50442"
    uid: 93ee9a37-7e23-4f65-900a-ede9a5f202a1
  spec:
    rules:
    - host: yelb-ui.red-flower.pm64pd.alpha.on-acorn.io
      http:
        paths:
        - backend:
            service:
              name: yelb-ui
              port:
                number: 80
          path: /
          pathType: Prefix
  status:
    loadBalancer: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Note that I am using the YAML that configures the ALB to work with native pod IPs (a requirement for EKS/Fargate, but it also works with EKS/EC2). I have another YAML (here) that does the same thing but leverages the traditional communication path through the EC2 host. I haven't tested it yet (but I can).
I assume that both the alb.ingress.kubernetes.io/target-type: ip and alb.ingress.kubernetes.io/scheme: internet-facing (the default) annotations are optional, but perhaps Acorn needs to set kubernetes.io/ingress.class: alb as a minimum?
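If it helps, I could probe this by annotating the live object directly (though I assume Acorn's apply machinery may revert it, so this would be a test rather than a fix):
$ kubectl annotate ingress yelb-ui -n red-flower-1efda903-e8c kubernetes.io/ingress.class=alb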
Yep, it's because kubernetes.io/ingress.class: alb is missing.
AWS's docs:
"The AWS Load Balancer Controller creates ALBs and the necessary supporting AWS resources whenever a Kubernetes ingress resource is created on the cluster with the kubernetes.io/ingress.class: alb annotation." (https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html)
Only chance of it working right now is if their ingress controller also happens to support the ingressClassName field that we set when you do acorn install --ingress-class-name alb. If that doesn't work, you'll need a code fix from us.
I am working on a feature to let you add arbitrary annotations to the resources created by an acorn app. But I might add a quicker/shorter-term fix to just set the kubernetes.io/ingress.class annotation when you do --ingress-class-name.
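Purely as a strawman for what the Acornfile side of that feature might look like (hypothetical syntax, nothing committed yet):
containers: {
    "yelb-ui": {
        // ...existing fields...
        annotations: {
            "kubernetes.io/ingress.class": "alb"
        }
    }
}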
Does your cluster have an ingressClass object?
kubectl get ingressclass -o yaml
It does.
$ kubectl get ingressclass -o yaml
apiVersion: v1
items:
- apiVersion: networking.k8s.io/v1
  kind: IngressClass
  metadata:
    annotations:
      meta.helm.sh/release-name: aws-load-balancer-controller
      meta.helm.sh/release-namespace: kube-system
    creationTimestamp: "2022-08-17T13:45:53Z"
    generation: 1
    labels:
      app.kubernetes.io/instance: aws-load-balancer-controller
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/name: aws-load-balancer-controller
      app.kubernetes.io/version: v2.4.3
      helm.sh/chart: aws-load-balancer-controller-1.4.4
    name: alb
    resourceVersion: "2664"
    uid: 60a8e226-b1ed-4a69-b1d0-b57f421e6948
  spec:
    controller: ingress.k8s.aws/alb
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
I will play more tomorrow with acorn install --ingress-class-name alb and see what happens. Will report on this thread on the outcome.
I might add a quicker/shorter-term fix to just set the kubernetes.io/ingress.class annotation when you do --ingress-class-name
Would that be a flag for acorn run?
Thanks!
No, I was thinking that as part of acorn install --ingress-class-name alb, I would set both the ingressClassName field and the annotation. But after a little more reading, they aren't quite equivalent, so I am now hesitant to do that.
Fortunately, it looks like acorn install --ingress-class-name alb will work since that IngressClass exists. ...or at least it will get further. Then we will discover whether the other alb.ingress.kubernetes.io annotations are required or optional :-)
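If it gets further but still stalls, the controller logs should say why it is rejecting or skipping the ingress (assuming the Helm chart defaults, i.e. the controller running as a Deployment in kube-system, which matches your IngressClass labels):
$ kubectl logs -n kube-system deployment/aws-load-balancer-controller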
Ah ok that makes sense, thanks. So a few updates.
First, I have verified that the native YAML works without the alb.ingress.kubernetes.io/target-type: ip annotation, so we know it is not required.
I ran acorn install --ingress-class-name alb, which gave me all green, BUT it did not move the needle on our problem. That is, if I deploy the app again, the ingress remains pending as before.
I also went a step further and tried to patch the ingress, adding the two other annotations:
"kubernetes.io/ingress.class": "alb",
"alb.ingress.kubernetes.io/scheme": "internet-facing",
This did not change anything either. I suspect the two ingresses (the one generated by Acorn and the one deployed with the native K8s YAML) are still different even after adding these two annotations; for example, the Acorn spec section differs from the native spec section. Below is the new Acorn output of the ingress with the two new annotations.
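For the record, I applied the annotations with a merge patch along these lines (namespace taken from the output below):
$ kubectl patch ingress yelb-ui -n misty-night-41b45831-b88 --type merge -p '{"metadata":{"annotations":{"kubernetes.io/ingress.class":"alb","alb.ingress.kubernetes.io/scheme":"internet-facing"}}}'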
{
  "apiVersion": "networking.k8s.io/v1",
  "kind": "Ingress",
  "metadata": {
    "annotations": {
      "kubernetes.io/ingress.class": "alb",
      "alb.ingress.kubernetes.io/scheme": "internet-facing",
      "acorn.io/targets": "{\"yelb-ui.misty-night.pm64pd.alpha.on-acorn.io\":{\"port\":80,\"service\":\"yelb-ui\"}}",
      "apply.acorn.io/applied": "H4sIAAAAAAAA/4xSwY7TMBD9FTTnJNuyaUkscQBOCAlx2Bvdw9iZJtY6tmVPylaR/x053bKFqmhvTubNm3lv3gwjMXbICGIGtNYxsnY2Lp/KBVtpd8cYeuIIAuYdHMnIctLVqCMfS6v7gSs/bmvfVWj8gJWz5blzB2LegXeBdyCaVbGDSOEABaD35lj9GeB+WQplf3gCAdoyBYvmtXpYF+++adt9/OT9VxsZraJbHBZHAgEXy/0PGT2qDF9Kt4BxkqVylumZQQCkAlSgxaUHPVJkHD0IOxlTgEFJ5m/v0PsbS/2LuFrmDBjRYk8dCOAw0WUlu6kVnQe8nOZayIBxAAH7tsE9yVZJrFdt26z3uO3Upq337zfbbfuh3mzk/Wq1yhqvKC9XvFBS1mtZb5r7dSmbJjdGTyo7oG0fKMYvBmP8fiJDI6GAMBmKIH7OMLjIrzPelCgoYGD2eYBHHk48EtUT2S7/fHEkP68U5BwuhWmUFHIgU0rFwgMC7uD0fDj63PYj0F4/Q3pM6TGrYuRpuaxx2H1Gk0MYQMwppd8BAAD//zYIxCdFAwAA",
      "apply.acorn.io/owner-gvk": "internal.acorn.io/v1, Kind=AppInstance",
      "apply.acorn.io/owner-name": "misty-night",
      "apply.acorn.io/owner-namespace": "acorn",
      "apply.acorn.io/owner-sub-context": ""
    },
    "creationTimestamp": "2022-08-18T09:13:03Z",
    "finalizers": [
      "ingress.k8s.aws/resources"
    ],
    "generation": 1,
    "labels": {
      "acorn.io/app-name": "misty-night",
      "acorn.io/app-namespace": "acorn",
      "acorn.io/managed": "true",
      "acorn.io/service-name": "yelb-ui",
      "apply.acorn.io/hash": "f98afeb9cba409981fa6dc594f256697455b3000"
    },
    "name": "yelb-ui",
    "namespace": "misty-night-41b45831-b88",
    "resourceVersion": "269264",
    "uid": "ca84eceb-b279-427a-b8a6-fd8425a65449"
  },
  "spec": {
    "ingressClassName": "alb",
    "rules": [
      {
        "host": "yelb-ui.misty-night.pm64pd.alpha.on-acorn.io",
        "http": {
          "paths": [
            {
              "backend": {
                "service": {
                  "name": "yelb-ui",
                  "port": {
                    "number": 80
                  }
                }
              },
              "path": "/",
              "pathType": "Prefix"
            }
          ]
        }
      }
    ]
  },
  "status": {
    "loadBalancer": {}
  }
}
I tried to patch the spec as well, but I ended up in a rat hole and gave up.
Hm. OK, maybe it's / vs /* in the path, but that's grasping at straws. It wouldn't really make sense.
I'll try to spin up an EKS cluster and see if I can repro. Any particularly useful details about the cluster I should try to replicate or is it a relatively vanilla EKS cluster?
Thanks. I deployed my cluster using this CDK Blueprint: https://github.com/aws-quickstart/cdk-eks-blueprints (simply because I needed an ingress controller, and the Blueprint installs the AWS LB Controller, along with other things we probably don't need for this).
Never got around to testing this, but I'd like to see us revisit it sometime in October.
Closing due to age. If you're still tracking this, please re-open.