
Poorly documented: nonMasqueradeCIDR vs podCIDR vs serviceClusterIPRange

Open shapirus opened this issue 1 year ago • 6 comments

Historically I have been setting nonMasqueradeCIDR to a value of my choice (e.g. 10.1.0.0/16) for internal cluster address allocation, to prevent addresses from a non-private network space (100.64.0.0/whatever) from being used for any internal purposes.

This, however, stopped working when I tried to create a new cluster with kops 1.28.4: while debugging CNI initialization issues, I noticed multiple references to 100.64.x.x and 100.96.x.x addresses in the logs, so some behavior must have changed.

I then searched the internet for an explanation, only to discover that this is either not documented at all or documented poorly and in fragments.

Here's what I was able to find so far:

  • nonMasqueradeCIDR: network space that is to be accessed without masquerading
  • podCIDR: network space for pod address allocation
  • serviceClusterIPRange: network space for service address allocation
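
For reference, here is how the three fields appear in a kops cluster spec. The values below are hypothetical illustrations of the layout being asked about; whether these ranges may or must not overlap is exactly what this issue asks to have documented:

```yaml
# Hypothetical example values -- not a recommendation.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example.k8s.local
spec:
  nonMasqueradeCIDR: 10.1.0.0/16       # traffic to this range is not masqueraded
  podCIDR: 10.1.0.0/17                 # pod address allocation
  serviceClusterIPRange: 10.1.128.0/17 # service ClusterIP allocation
```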

Confusing information: it was stated (somewhere) that nonMasqueradeCIDR either must not, or is not recommended to, overlap the other two. This does not make sense: why can't non-masqueraded routing be used for internal addresses? I also found that at some point nonMasqueradeCIDR stopped being used to derive the address space for pods and services. What, then, is its purpose now?

Another source of confusion is the rules for podCIDR and serviceClusterIPRange: can they be the same? Can they overlap? What is their actual purpose? If they are both used to allocate addresses for pods and services on the internal k8s network, then why are they not named consistently, e.g. "podCIDR" and "serviceCIDR", or "podClusterIPRange" and "serviceClusterIPRange", to avoid confusion?

What, generally, is the recommended approach to setting a custom subnet for internal k8s network addressing now?
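
Whatever the official rules turn out to be, candidate ranges can at least be sanity-checked for collisions before creating a cluster. A minimal sketch using Python's standard `ipaddress` module (the specific ranges are hypothetical examples, not recommended values):

```python
import ipaddress

# Hypothetical candidate ranges for a custom internal network
non_masquerade = ipaddress.ip_network("10.1.0.0/16")
pod_cidr = ipaddress.ip_network("10.1.0.0/17")
service_range = ipaddress.ip_network("10.1.128.0/17")

# Pod and service ranges must not collide with each other
print(pod_cidr.overlaps(service_range))         # False

# In this layout, both sit inside nonMasqueradeCIDR -- whether kops
# requires or forbids that is the open documentation question here
print(pod_cidr.subnet_of(non_masquerade))       # True
print(service_range.subnet_of(non_masquerade))  # True
```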

None of this is documented properly: one would have to dig into the source code to understand it. Either that, or I have failed miserably at searching for the documentation.

shapirus avatar May 21 '24 13:05 shapirus

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 19 '24 16:08 k8s-triage-robot

/remove-lifecycle stale

shapirus avatar Aug 19 '24 17:08 shapirus

/lifecycle stale

k8s-triage-robot avatar Nov 17 '24 17:11 k8s-triage-robot

/remove-lifecycle stale

shapirus avatar Nov 17 '24 17:11 shapirus

/lifecycle stale

k8s-triage-robot avatar Feb 15 '25 18:02 k8s-triage-robot

/remove-lifecycle stale

shapirus avatar Feb 15 '25 19:02 shapirus

/lifecycle stale

k8s-triage-robot avatar Jul 30 '25 17:07 k8s-triage-robot

/remove-lifecycle stale

shapirus avatar Jul 30 '25 21:07 shapirus

/lifecycle stale

k8s-triage-robot avatar Oct 28 '25 22:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Nov 27 '25 22:11 k8s-triage-robot

/remove-lifecycle rotten

shapirus avatar Nov 28 '25 07:11 shapirus