Fake certificate error when accessing the endpoints
Hi Team,
We have a GKE cluster running in the us-east region, and we can access its endpoints without any issues.
- We have now created a new GKE cluster in the us-central region.
- We took a backup of 5 namespaces, including nginx-ingress-controller.
- We restored all 5 namespaces' services into the backup GKE cluster successfully, and everything is in Running status.
- A new load balancer was created in the us-central region with a new IP.
- To test whether we can access the service endpoints in the backup GKE cluster, we mapped the new load balancer IP (a DNS change) and tried to access the endpoints, but we get a "Kubernetes fake certificate" error.
We compared the SSL certificates of the primary and backup GKE clusters; they are identical, with no change in the crt and key values. Kindly suggest what we can do to access the endpoints without the certificate error.
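For background on this symptom: ingress-nginx serves a self-signed fallback certificate whose subject CN is "Kubernetes Ingress Controller Fake Certificate" whenever it cannot find a TLS secret matching the SNI hostname of the request. A minimal sketch for checking what the new load balancer actually serves; the IP and hostname are the ones reported later in this thread, and the helper function is only an illustration:

```shell
# Sketch, assuming openssl is available on the client machine.
# ingress-nginx falls back to a self-signed certificate with subject CN
# "Kubernetes Ingress Controller Fake Certificate" when no TLS secret
# matches the SNI hostname of the request.

# Helper: does an openssl subject line name the fallback certificate?
is_fake_cert() {
  case "$1" in
    *"Kubernetes Ingress Controller Fake Certificate"*) return 0 ;;
    *) return 1 ;;
  esac
}

# Usage against the new load balancer (IP/hostname from this thread):
#   subject=$(echo | openssl s_client -connect 10.119.157.80:443 \
#       -servername dev-abc.nprd.example.com 2>/dev/null \
#     | openssl x509 -noout -subject)
#   is_fake_cert "$subject" && echo "controller served the fallback cert"
```

If the subject names the fallback certificate, the controller in the restored cluster is not matching the restored TLS secret to that hostname, which narrows the problem to the Ingress/Secret objects rather than the load balancer itself.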
This issue is currently awaiting triage.
If the Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/remove-kind bug
/kind support
The information you have provided in the issue description is not enough for the project to analyze and relate to any code in the controller.
Hi Long Wu Yuan,
I have a GKE cluster in Project A, region us-east4, and I have created a backup GKE cluster in Project A, region us-central1.
I took a backup of 5 namespaces [dev-abc, dev-jkl...., ingress] from the GKE cluster in Project A and restored them into the backup GKE cluster. All the pods are up and running successfully in the backup cluster.
Since the pods are running, we want to check whether the endpoints are working. For example, I have abc-pod running with the ingress: https://dev-abc.nprd.example.com/v1/health
We have a load balancer running in the us-east4 region with IP 10.119.157.32 mapped to the ingress above. A new load balancer was also created in the us-central1 region during the backup, with IP 10.119.157.80.
Now when I map the us-central LB IP:
- https://dev-abc.nprd.example.com/v1/health --> IP 10.119.157.80, we get the "Kubernetes fake certificate" error.
We need to know why the SSL certificate on the LB is throwing this error. We compared abc-secret-web-tls (that is, the SSL secret) in the us-east4 GKE cluster and the backup us-central cluster, and they are the same.
So why are we still getting this error? We took a backup and mapped the new IP to the service endpoints, so why does this error occur?
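Rather than eyeballing the base64 blobs in the two secrets, it may help to compare exact digests of the stored certificate bytes. A sketch, assuming kubectl contexts named primary-us-east4 and backup-us-central1 (hypothetical names); the secret name and namespace are the ones mentioned in this thread:

```shell
# Helper: stable digest of a base64-encoded secret field. The
# .data["tls.crt"] field of a kubernetes.io/tls Secret is base64-encoded
# PEM, so identical certificates must yield identical digests.
tls_crt_digest() {
  printf '%s' "$1" | base64 -d | sha256sum | cut -d' ' -f1
}

# Usage against the two clusters (run where both kubeconfig contexts exist):
#   for ctx in primary-us-east4 backup-us-central1; do
#     crt=$(kubectl --context "$ctx" -n dev-abc get secret abc-secret-web-tls \
#             -o jsonpath='{.data.tls\.crt}')
#     echo "$ctx: $(tls_crt_digest "$crt")"
#   done
```

If the digests match but the fake certificate is still served, the next thing to check is whether the restored Ingress object in the backup cluster still references that secret under spec.tls for the dev-abc.nprd.example.com host.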
There is no data posted to analyze, so nobody can make comments based on the analysis of data such as logs or the output of kubectl or other commands.
Wait for readers here to make comments based on guesses.
{
  "textPayload": "local SSL certificate ingress/ingress-nginx-admission was not found",
  "insertId": "eej7jqucvc5kkcyx",
  "resource": {
    "type": "k8s_container",
    "labels": {
      "pod_name": "ingress-nginx-controller-6-pk",
      "container_name": "controller",
      "namespace_name": "ingress",
      "cluster_name": "cluster-dr",
      "location": "us-central1",
      "project_id": "project7"
    }
  },
  "timestamp": "2025-03-11T14:05:59.284505797Z",
  "severity": "ERROR",
  "labels": {
    "compute.googleapis.com/resource_name": "gke-d-k-a9e2a9d2-7aib",
    "k8s-pod/helm_sh/chart": "ingress-nginx-4.11.2",
    "logging.gke.io/top_level_controller_type": "Deployment",
    "k8s-pod/app_kubernetes_io/version": "1.11.2",
    "k8s-pod/app_kubernetes_io/part-of": "ingress-nginx",
    "k8s-pod/app_kubernetes_io/instance": "ingress-nginx",
    "logging.gke.io/top_level_controller_name": "ingress-nginx-controller",
    "k8s-pod/app_kubernetes_io/component": "controller",
    "k8s-pod/pod-template-hash": "6656d5bc66",
    "k8s-pod/app_kubernetes_io/name": "ingress-nginx",
    "k8s-pod/app_kubernetes_io/managed-by": "Helm"
  },
  "logName": "projects/project7/logs/stderr",
  "receiveTimestamp": "2025-03-11T14:06:00.391223763Z"
}
But the ingress-nginx-admission secret is available.
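The posted log line says the controller could not find the local secret ingress/ingress-nginx-admission. A sketch for cross-checking this from the restored cluster; the filter helper is only an illustration, and the resource names are taken from the log entry above:

```shell
# Helper: filter a log stream down to "certificate ... not found" errors,
# such as the line in the posted Cloud Logging entry.
cert_errors() {
  grep -i 'certificate' | grep -i 'not found'
}

# Usage against the restored cluster:
#   kubectl -n ingress get secret ingress-nginx-admission
#   kubectl -n ingress logs deploy/ingress-nginx-controller --since=1h | cert_errors
```

If the secret exists but the controller still logs the lookup failure, posting the full controller logs and the output of the kubectl commands above would give reviewers something concrete to analyze.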
We are experiencing the same issue, but in AKS. Certificates that were working on 4.11 stopped working when we updated to 4.12, returning the fake certificate.
There is no data posted to analyze, so nobody can make comments based on the analysis of data such as logs or the output of kubectl or other commands.
What data do you need? There are so many things we could add.
You can answer the questions asked in the new bug report template, to begin with. That template was created for gathering data.
Hi @longwuyuan, do you want me to give details in the template format? Do we have any update on this?
This fault happens on AWS with the update from 4.11 to 4.12; 4.11 has a CVE.
10 lines of text as an issue description inform readers of this issue that something happened.
Those 10 lines of text are not data that a reader can analyze to base any technical comment on.
Any new info here? I'm also running into this error when upgrading to 4.12.1.
> This fault happens on AWS with update from 4.11 to 4.12, 4.11 has CVE
You can upgrade to 4.11.5, which doesn't have the CVE.
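For anyone staying on the 4.11 line, moving to chart 4.11.5 is a standard Helm upgrade. A sketch; the release name, namespace, and values file below are hypothetical and depend on how the chart was originally installed:

```shell
# Hypothetical release name "ingress-nginx" in namespace "ingress";
# adjust both, and the values file, to match your installation.
helm repo update
helm upgrade ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress \
  --version 4.11.5 \
  -f values.yaml
```

Pinning --version explicitly avoids accidentally jumping to the 4.12 line that commenters above report as introducing the fake certificate behavior in their environments.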
@allanian @sowr94 @swyarmsh
Is the issue fixed with version 4.11.5?
This is stale, but we won't close it automatically; just bear in mind that the maintainers may be busy with other tasks and will get to your issue as soon as possible. If you have any question, or a request to prioritize this, please reach out on #ingress-nginx-dev on Kubernetes Slack.