kubernetes-ingress-controller
Creating both an Ingress and a Kong TCPIngress with the same service:port causes a sync rule failure.
Is there an existing issue for this?
- [X] I have searched the existing issues
Current Behavior
The ingress controller fails to sync the Ingress, with the message: "failed to sync: insert upstream into state: inserting upstream example-svc.default.8080.svc: entity already exists"
Expected Behavior
The Ingress syncs successfully, OR the admission webhook validates this case and the resource creation fails.
Steps To Reproduce
Step 1: create a TCPIngress.

```yaml
spec:
  rules:
  - backend:
      serviceName: example-svc
      servicePort: 8080
    port: 8080
```
Step 2: create an Ingress referencing the service example-svc:8080.

```yaml
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          service:
            name: example-svc
            port:
              number: 8080
```
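The collision occurs because KIC derives the Kong upstream name from the backend service, namespace, and port, so both resources above resolve to the same upstream, `example-svc.default.8080.svc`. The following is a minimal illustrative sketch, not KIC's actual code: the name format is taken from the error message in this issue, and the state store is a simplified stand-in.

```python
# Simplified stand-in for the controller's in-memory state store.
# The upstream name format (<service>.<namespace>.<port>.svc) is
# inferred from the error message above, not from the KIC source.

def upstream_name(service: str, namespace: str, port: int) -> str:
    return f"{service}.{namespace}.{port}.svc"

class State:
    def __init__(self):
        self.upstreams = {}

    def insert_upstream(self, name: str) -> None:
        if name in self.upstreams:
            raise ValueError(f"inserting upstream {name}: entity already exists")
        self.upstreams[name] = {"name": name}

state = State()
# The Ingress backend registers the upstream first...
state.insert_upstream(upstream_name("example-svc", "default", 8080))
# ...and the TCPIngress backend, pointing at the same service:port,
# produces the identical name, so the second insert fails.
try:
    state.insert_upstream(upstream_name("example-svc", "default", 8080))
except ValueError as e:
    print(e)  # inserting upstream example-svc.default.8080.svc: entity already exists
```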
Kong Ingress Controller version
v1.3.1
Kubernetes version
1.20
Anything else?
No response
Could you please provide some more information about your environment?
- What was your Kong version? Or, what image was used for the `proxy` container in your KIC pod?
- What exact image was used for the `ingress-controller` pod?
- How was the Kubernetes cluster deployed?
- Could you access the `ingress` and/or `tcpIngress` in the normal way? If not, what happened when you accessed them?
I tried to reproduce the bug with this all-in-one manifest in v1.3.1: https://github.com/Kong/kubernetes-ingress-controller/blob/1.3.1/deploy/single/all-in-one-dbless.yaml, but I did not find the error log, and the ingress could be accessed normally with `curl -H "Host: example.com" http://$PROXY_IP` using your example Ingress spec. Also, the latest version of KIC (2.3.1) on Kubernetes 1.23 handles your situation fine: the ingress is correctly synced.
My environment:
- k8s version: v1.20.13, installed by `snap install microk8s --classic --channel=1.20/stable`
- KIC version: v1.3.1, image `kong/kubernetes-ingress-controller:1.3`
- Kong version: v2.4.1, image `kong:2.4`
Yes, I tried with the DB-less manifest and no error was produced. But when I tried with the all-in-one-postgres manifest (https://github.com/Kong/kubernetes-ingress-controller/blob/1.3.1/deploy/single/all-in-one-postgres.yaml), the error was reproduced again. I use Kong Ingress with a Postgres DB, so I think the issue may only occur in DB mode.
The full Ingress YAML is here:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-svc
            port:
              number: 8080
---
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  name: eexample
  annotations:
    kubernetes.io/ingress.class: kong
spec:
  rules:
  - backend:
      serviceName: example-svc
      servicePort: 8080
    port: 8080
```
This issue does not happen in the latest version, so please upgrade your KIC to the latest (2.3.1, or the upcoming 2.4.0).
It is not easy to upgrade to the latest version immediately. Could you please tell me which version fixed the issue? Then I can estimate whether I can upgrade to that version, or cherry-pick the commit into our build.
I believe the fix in question is https://github.com/Kong/kubernetes-ingress-controller/commit/e1d68222b0e6cf669c49d51d007e9a42f48925f0, which first shipped in 2.0.0.
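For context, a common way for a controller to tolerate two resources sharing a backend is to make the upstream insert idempotent: an already-present upstream is reused rather than treated as an error. This is a hedged sketch of that general approach, not a reproduction of the linked commit:

```python
# Illustrative only: get-or-insert semantics for a shared upstream.
# This is an assumption about the general fix strategy, not KIC code.

def get_or_insert_upstream(state: dict, name: str) -> dict:
    # If the upstream already exists (e.g. both an Ingress and a
    # TCPIngress reference the same service:port), reuse it instead
    # of failing the whole sync.
    if name not in state:
        state[name] = {"name": name}
    return state[name]

state = {}
a = get_or_insert_upstream(state, "example-svc.default.8080.svc")
b = get_or_insert_upstream(state, "example-svc.default.8080.svc")
assert a is b  # both callers share the single existing upstream
```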
Going from 1.x to 2.x is the major hurdle, so once you've handled that you should likely be able to go to the latest 2.x release (do note that you will need to upgrade to 2.0 before upgrading to any later 2.x version). Does that cover the information you need, and do you have any questions about later versions' compatibility with your configuration?
Thanks, I cherry-picked it into 1.3.2 and it works. The guides are exhaustive! But since 1.3.2 is widely deployed in our production environment, is there a way to deploy both the 1.x and 2.0 versions, have version 2.0 apply rules to a Kong proxy that is not in use, and then compare their services, routes, and upstreams? If there is no diff, then it's safe to upgrade to 2.x?
I'd only recommend running two controller instances configured for the same class in a test environment. I don't think there's an obvious problem with doing so, but it's not something we design for and the controllers do make updates to the resource status information, which could result in problems if anything else (e.g. automatic DNS configuration) relies on it.
Offhand I don't think there's anything that would meaningfully impact the generated Kong configuration at this point: any bugs stemming from the architecture changes for 2.x should be worked out by now. Changes are more to the controller CLI and supporting configuration than to the generated Kong resources.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
> I'd only recommend running two controller instances configured for the same class in a test environment.
What if we use not only two controllers, but also two Kong proxies with two DBs?
I have tested that in our development k8s cluster. We created a new namespace for the new controller and new proxy. So far the only problem I have found is that the two controller versions modify the same Ingress resource's status to update the ingress IP, but that can be solved by setting update-status to false.
Can you please tell me why you do not recommend upgrading this way? What's the problem with it? Let me explain why we like the two-version approach: because we can upgrade the controller via canary testing, it's safe to roll back. Also, we could use the same approach to upgrade the Kong proxy (data plane) in the future, so it's interesting for us.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
It's simply not something that we design for or officially support. It may work fine or with minimal issues now, but we do not attempt to guarantee that it will. Should we add something that breaks it more seriously in the future, we would likely not provide a fix for it.
Got it, thank you very much.