ingress-gce
Applying backend-config annotation on existing ingress service has no effect
Issue
Applying the cloud.google.com/backend-config annotation to an existing Service that is associated with an existing Ingress makes no changes to the underlying backend service.
Use cases
- configuring Cloud Armor on the GKE-provided default backend service, i.e. `default-http-backend` in the `kube-system` namespace
- configuring Cloud Armor, IAP or other features on any existing service that is part of an Ingress but had no corresponding `BackendConfig` object
Steps to reproduce
- Create a `Service` that matches some existing deployment
- Create an `Ingress` associated with the service created above
- Wait until the ingress is created and in sync
- Create a `BackendConfig` with a Cloud Armor policy configuration (any other configuration will apply as well)
- Annotate the service with the `cloud.google.com/backend-config` annotation pointing to the `BackendConfig` created in the previous step
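The last two steps above can be sketched as follows; the resource and policy names are illustrative placeholders, not values from this report:

```yaml
# backend-config.yaml -- BackendConfig referencing a pre-created Cloud Armor policy
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backend-config          # illustrative name
spec:
  securityPolicy:
    name: my-cloud-armor-policy    # existing Cloud Armor policy in the project
---
# The Service is then annotated to point at the BackendConfig:
apiVersion: v1
kind: Service
metadata:
  name: my-service                 # illustrative name
  annotations:
    cloud.google.com/backend-config: '{"default": "my-backend-config"}'
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```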
Expected Behavior
Cloud Armor policy is configured on a corresponding backend service
Actual Behavior
Nothing happens
GKE version
1.19.10-gke.1600
@bowei any idea about this?
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
Alternatively, we should be able to disable the default backend. Very few people want this, or are even aware that ingress-gce forwards all unmatched external traffic to a pod in kube-system. This is undesirable from a security standpoint.
/remove-lifecycle rotten
The default backend can be removed once all of the products support returning a 404 response directly instead of requiring a Pod to serve the 404. However, that seems unrelated to the issue title?
We will take a look at this bug in the triage.
Does this only impact CloudArmor config or any other config in BackendConfig?
We had a problem with Cloud Armor in the version you provided, and it has since been fixed.
@mikouaj I have the same problem
GKE: 1.21.5-gke.1302
Count me in on this problem as well. We can't seem to get a BackendConfig that has a securityPolicy attached to the Ingress (docs). So instead of relying on the BackendConfig, we have to attach the policy manually.
We use GKE Autopilot, if that matters.
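For reference, the manual workaround mentioned above can be done with gcloud by updating the backend service directly; `BACKEND_SERVICE_NAME` and `POLICY_NAME` below are placeholders, not values from this thread:

```shell
# List the backend services that ingress-gce created for the Ingress
gcloud compute backend-services list --global

# Attach the Cloud Armor policy directly to the backend service
gcloud compute backend-services update BACKEND_SERVICE_NAME \
    --security-policy=POLICY_NAME \
    --global
```

Note this bypasses the BackendConfig entirely, so the controller may not be aware of the attachment.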
```json
{
  "Major": "1",
  "Minor": "20+",
  "GitVersion": "v1.20.10-gke.1600",
  "GitCommit": "ef8e9f64449d73f9824ff5838cea80e21ec6c127",
  "GitTreeState": "clean",
  "BuildDate": "2021-09-06T09:24:20Z",
  "GoVersion": "go1.15.15b5",
  "Compiler": "gc",
  "Platform": "linux/amd64"
}
```
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/assign @spencerhance
@bowei: GitHub didn't allow me to assign the following users: spencerhance.
Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide
In response to this:
/assign @spencerhance
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Ack
/kind bug
We are running into a similar issue. We're required to attach security policies to all exposed backends, including the ones created via/for the default HTTP backend.
We're currently considering various "creative" solutions, but it would be a lot easier if it was fixed on the GCE Ingress level.
Thanks for looking into it.
Hi Folks, I attempted to repro this locally by adding a security policy and backendconfig to a service after the LB was provisioned - but I was unable to. If you share your redacted YAMLs or email your cluster info to the email on my profile I can take another look.
@spencerhance thanks for your comment/verification. It does seem to work for default backends when they are attached to healthy Ingresses!
For future reference, here are the steps to attach the policies to default backends.
1. Create a security policy for the default HTTP backend in the GCP project:

   ```shell
   gcloud compute security-policies create default-http-backend
   ```

2. Attach rule(s) to your security policy (in this case we're attaching a rule to deny all access by default and return 404):

   ```shell
   gcloud compute security-policies rules update 2147483647 \
       --security-policy default-http-backend --action "deny-404"
   ```

3. Create a `BackendConfig` resource in the `kube-system` namespace:

   ```yaml
   # backend.yaml
   ---
   apiVersion: cloud.google.com/v1
   kind: BackendConfig
   metadata:
     name: default-http-backend
     labels:
       app.kubernetes.io/name: default-http-backend
   spec:
     timeoutSec: 40
     securityPolicy:
       name: default-http-backend
   ```

   ```shell
   kubectl apply -f backend.yaml -n kube-system
   ```

4. Patch the default backend service to attach the required annotation:

   ```yaml
   # annotation.yaml
   metadata:
     annotations:
       cloud.google.com/backend-config: |
         {"default":"default-http-backend"}
   ```

   ```shell
   kubectl patch service default-http-backend --patch-file annotation.yaml -n kube-system
   ```
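As a follow-up, one way to confirm the policy actually landed on the underlying backend service; `BACKEND_SERVICE_NAME` is a placeholder for the autogenerated name (typically prefixed `k8s-be-` or `k8s1-`), not a value from this thread:

```shell
# Find the backend service that ingress-gce created for default-http-backend
gcloud compute backend-services list --global

# If the annotation was picked up, this prints the URL of the attached policy
gcloud compute backend-services describe BACKEND_SERVICE_NAME --global \
    --format="value(securityPolicy)"
```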
Just FYI, this process does not work for Ingresses whose backends are in an unhealthy state.
> just FYI, this process does not work for Ingresses which backends are in an unhealthy state

@msuterski Can you elaborate? Are the backends unhealthy, or does the ingress have a configuration issue that prevents it from being fully synced?
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.