
Applying backend-config annotation on existing ingress service has no effect

Open mikouaj opened this issue 4 years ago • 19 comments

Issue

Applying the cloud.google.com/backend-config annotation to an existing Service that is associated with an existing Ingress makes no changes to the underlying backend service.

Use cases

  • configuring Cloud Armor on the GKE-provided default backend service (default-http-backend in the kube-system namespace)
  • configuring Cloud Armor, IAP, or other features on any existing Service that is part of an Ingress but has no corresponding BackendConfig object

Steps to reproduce

  1. Create a Service that matches an existing Deployment
  2. Create an Ingress associated with the Service created above
  3. Wait until the Ingress is created and in sync
  4. Create a BackendConfig with a Cloud Armor policy configuration (any other configuration applies as well)
  5. Annotate the Service with the cloud.google.com/backend-config annotation pointing to the BackendConfig created in the previous step
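For illustration, steps 4 and 5 can be sketched with minimal manifests (all names, ports, and the referenced Cloud Armor policy below are hypothetical; the policy must already exist in the GCP project):

```yaml
# Step 4: BackendConfig referencing a pre-existing Cloud Armor policy
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: my-backendconfig
spec:
  securityPolicy:
    name: my-cloud-armor-policy
---
# Step 5: the Service already referenced by the Ingress gains the annotation
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    cloud.google.com/backend-config: '{"default":"my-backendconfig"}'
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```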

Expected Behavior

The Cloud Armor policy is configured on the corresponding backend service

Actual Behavior

Nothing happens

GKE version

1.19.10-gke.1600

mikouaj avatar Jul 02 '21 12:07 mikouaj

@bowei any idea about this?

boredabdel avatar Jul 02 '21 12:07 boredabdel

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 30 '21 14:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Oct 30 '21 15:10 k8s-triage-robot

Alternatively, we should be able to disable the default backend. Very few people want, or are even aware, that ingress-gce forwards all unmatched external traffic to a pod in kube-system. This is undesirable from a security standpoint.

jsravn avatar Nov 26 '21 13:11 jsravn

/remove-lifecycle rotten

bowei avatar Nov 30 '21 21:11 bowei

The default backend can be removed once all of the products support returning a 404 response directly instead of requiring a Pod to serve the 404. However, that seems unrelated to the issue title?

bowei avatar Nov 30 '21 21:11 bowei

We will take a look at this bug in triage.

bowei avatar Nov 30 '21 21:11 bowei

Does this only impact Cloud Armor config, or other config in BackendConfig as well?

We had a problem with Cloud Armor in the version you provided, and it has since been fixed.

freehan avatar Dec 07 '21 19:12 freehan

@mikouaj I have the same problem

GKE : 1.21.5-gke.1302

raphaelauv avatar Dec 30 '21 13:12 raphaelauv

Count me in on this problem as well. We can't seem to get a BackendConfig with a securityPolicy to attach to the Ingress (docs). So instead of relying on the BackendConfig, we have to attach the policy manually.

We use GKE Autopilot if that matters.

{
  "Major": "1",
  "Minor": "20+",
  "GitVersion": "v1.20.10-gke.1600",
  "GitCommit": "ef8e9f64449d73f9824ff5838cea80e21ec6c127",
  "GitTreeState": "clean",
  "BuildDate": "2021-09-06T09:24:20Z",
  "GoVersion": "go1.15.15b5",
  "Compiler": "gc",
  "Platform": "linux/amd64"
}

edclement avatar Feb 11 '22 21:02 edclement

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 12 '22 22:05 k8s-triage-robot

/assign @spencerhance

bowei avatar May 12 '22 22:05 bowei

@bowei: GitHub didn't allow me to assign the following users: spencerhance.

Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. For more information please see the contributor guide

In response to this:

/assign @spencerhance

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar May 12 '22 22:05 k8s-ci-robot

Ack

spencerhance avatar May 12 '22 22:05 spencerhance

/kind bug

swetharepakula avatar May 20 '22 20:05 swetharepakula

We are running into a similar issue. We're required to attach security policies to all exposed backends, including the ones created via/for the default HTTP backend.

We're currently considering various "creative" solutions, but it would be a lot easier if this were fixed at the GCE Ingress level.

Thanks for looking into it.

msuterski avatar Jun 14 '22 19:06 msuterski

Hi Folks, I attempted to repro this locally by adding a security policy and BackendConfig to a service after the LB was provisioned, but I was unable to. If you share your redacted YAMLs or email your cluster info to the address on my profile, I can take another look.

spencerhance avatar Jun 14 '22 23:06 spencerhance

@spencerhance thanks for your comment/verification. It does seem to work for default backends when they are attached to healthy Ingresses!

For future reference, here are the steps to attach policies to the default backend.

  1. Create a security policy for the default http backend in the gcp project

    gcloud compute security-policies create default-http-backend

  2. Attach rule(s) to your security policy (in this case, a rule that denies all access by default and returns a 404)

    gcloud compute security-policies rules update 2147483647 --security-policy default-http-backend --action "deny-404"

  3. Create a BackendConfig resource in the kube-system namespace

    # backend.yaml
    ---
    apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: default-http-backend
      labels:
        app.kubernetes.io/name: default-http-backend
    spec:
      timeoutSec: 40
      securityPolicy:
        name: default-http-backend
    

    kubectl apply -f backend.yaml -n kube-system

  4. Patch the default backend service to attach the required annotation

    # annotation.yaml
    metadata:
      annotations:
        cloud.google.com/backend-config: |
          {"default":"default-http-backend"}
    

    kubectl patch service default-http-backend --patch-file annotation.yaml -n kube-system
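To verify that the controller reconciled the annotation, the security policy attached to the generated backend services can be inspected. Backend service names are generated by ingress-gce, so listing them with a projection is easier than guessing a name (the `table(...)` projection is a standard gcloud format; `securityPolicy` is a field on the backend service resource):

```shell
gcloud compute backend-services list --format="table(name,securityPolicy)"
```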

Just FYI, this process does not work for Ingresses whose backends are in an unhealthy state.

msuterski avatar Jul 01 '22 17:07 msuterski

Just FYI, this process does not work for Ingresses whose backends are in an unhealthy state.

@msuterski Can you elaborate: are the backends unhealthy, or does the Ingress have a configuration issue that prevents it from being fully synced?

spencerhance avatar Jul 14 '22 18:07 spencerhance

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 13 '22 19:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Sep 12 '22 20:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 12 '22 20:09 k8s-ci-robot