
Creating Network Endpoint Group (NEG) before creating workload

Open AlbertMoser opened this issue 2 years ago • 11 comments

It seems that NEGs for a service with the `exposed_ports` annotation are only created once there are pods matching the service. Before that, we see a correct `neg-status` annotation (matching the cluster's zones) but no NEGs. Only once pods matching the Service are deployed do the NEGs appear. However (and this is inconsistent), when the pods are deleted, the NEGs remain (but with 0 endpoints).

Steps to reproduce:

  1. Create a service with NEG annotation <insert proper NEG annotation here>
  2. Check neg-status annotation of service (it should be there)
  3. Check that no NEG was created and that no NEG-creation event is visible on the Service
  4. Deploy pod(s) matching the service
  5. Now NEGs appear as well as NEG creation events in Service
  6. Delete pod(s) (so service has 0/0 pods)
  7. Note that NEGs have remained but have 0 endpoints.
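For reference, a minimal Service manifest of the kind the steps above describe might look like this. The names, ports, and selector are illustrative placeholders; the `cloud.google.com/neg` annotation with `exposed_ports` is the standalone-NEG annotation the report refers to:

```yaml
# Hypothetical example Service; name, selector, and ports are placeholders.
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    # Standalone NEG annotation: ask ingress-gce to create a NEG for port 80.
    cloud.google.com/neg: '{"exposed_ports": {"80": {}}}'
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```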

Expected behavior:

  1. Create a service with NEG annotation <insert proper NEG annotation here>
  2. Zonal NEGs are created in zones matching cluster nodes (but remain empty, no endpoints)
  3. Deploy pod(s) matching the service
  4. Endpoints get added to NEGs
  5. Delete pod(s)
  6. NEGs remain but are empty
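Under the expected behavior above, the empty NEGs should be observable both via the annotation the controller writes and via gcloud. A sketch, assuming a service named `my-service` (names and zone are placeholders):

```shell
# Read the neg-status annotation that the NEG controller writes on the Service.
kubectl get service my-service \
  -o jsonpath='{.metadata.annotations.cloud\.google\.com/neg-status}'

# List NEGs in the project; the NEG named in neg-status should appear
# even before any matching pods exist.
gcloud compute network-endpoint-groups list

# Show the endpoints of one NEG; expected to be empty at this stage.
gcloud compute network-endpoint-groups list-network-endpoints NEG_NAME \
  --zone=ZONE
```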

AlbertMoser avatar Jul 21 '22 11:07 AlbertMoser

Hi @AlbertMoser,

Thanks for creating the issue. Can you tell us more about why this behavior affects your use case and how creating empty NEGs before the workloads would help?

Thanks, Swetha

swetharepakula avatar Aug 29 '22 15:08 swetharepakula

/kind bug

swetharepakula avatar Aug 29 '22 15:08 swetharepakula

Hi @swetharepakula Thanks for looking into it. For us, this is an issue because the deployment of load balancers happens before the deployment of workloads, and the two are decoupled. Since NEGs are an integral part of the load balancer, it does not work without them.

AlbertMoser avatar Sep 01 '22 13:09 AlbertMoser

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Nov 30 '22 14:11 k8s-triage-robot

/remove-lifecycle stale

AlbertMoser avatar Dec 07 '22 11:12 AlbertMoser

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 07 '23 11:03 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Apr 06 '23 12:04 k8s-triage-robot

/remove-lifecycle rotten

AlbertMoser avatar May 02 '23 05:05 AlbertMoser

Any update?

adaosantos avatar May 28 '23 22:05 adaosantos

Hi, sorry for the delay in response. I think I misunderstood the problem originally. The NEG controller will create a NEG as soon as it is specified by the service, regardless of whether workloads exist (this matches the expected behavior).

I am unable to reproduce this issue. In my test setup I create a service specifying a NEG, which triggers the NEG creation. There is an event emitted for NEG creation and I can verify through gcloud that a NEG exists with 0 endpoints.

Are you still seeing this issue? If yes, please provide the GKE version, and possibly the YAMLs you are using. Thanks!
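For anyone retesting, the verification described above can be sketched as follows (the service name, NEG name, and zone are placeholders; the NEG name and zone come from the Service's `neg-status` annotation):

```shell
# Events on the Service should include a NEG-creation event.
kubectl describe service my-service

# Confirm via gcloud that the NEG exists and currently has 0 endpoints.
gcloud compute network-endpoint-groups describe NEG_NAME \
  --zone=ZONE --format='value(size)'
```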

swetharepakula avatar Jul 10 '23 23:07 swetharepakula

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 24 '24 05:01 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Feb 23 '24 06:02 k8s-triage-robot

Closing this issue out for now. If the problem resurfaces, please re-open the issue or consider opening a GKE support ticket.

swetharepakula avatar Mar 07 '24 00:03 swetharepakula