Poorly documented: nonMasqueradeCIDR vs podCIDR vs serviceClusterIPRange
Historically, I have set nonMasqueradeCIDR to a value of my choice (e.g. 10.1.0.0/16) so that internal cluster addresses are allocated from a private range and nothing from a non-private network space (100.64.0.0/whatever) is used for internal purposes.
However, this stopped working for me when I tried to create a new cluster with kops 1.28.4: while debugging CNI initialization issues, I noticed multiple references to 100.64.x.x and 100.96.x.x addresses in the logs, so some behavior must have changed.
I then searched the internet for what was going on, only to find that this is either undocumented or documented poorly and only in fragments.
Here's what I was able to find so far (a sketch of how these fields appear in the cluster spec follows the list):
- nonMasqueradeCIDR: the network range that is reached without masquerading
- podCIDR: the network range from which pod addresses are allocated
- serviceClusterIPRange: the network range from which service cluster IPs are allocated
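For reference, these fields sit at the top level of the kops Cluster spec. The snippet below is only my sketch of what the defaults appear to be (which would explain the 100.64.x.x and 100.96.x.x addresses in my logs); I have not verified the exact masks against the kops source:

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: example.k8s.local          # hypothetical cluster name
spec:
  # Presumed defaults, not confirmed against the kops source:
  nonMasqueradeCIDR: 100.64.0.0/10       # CGNAT space; matches the 100.64.x.x in my logs
  podCIDR: 100.96.0.0/11                 # matches the 100.96.x.x in my logs
  serviceClusterIPRange: 100.64.0.0/13
```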
Confusing information: it was stated (somewhere) that nonMasqueradeCIDR either must not, or is at least not recommended to, overlap the other two. That does not make sense to me: why can't non-masqueraded routing be used for internal addresses? I also found that at some point nonMasqueradeCIDR stopped being used to derive the address space for pods and services. What is its purpose now, then?
Another source of confusion is the rules for podCIDR and serviceClusterIPRange: can they be the same? Can they overlap? What is their actual purpose? If they are both used to allocate addresses for pods and services in the internal Kubernetes network, why are they not named consistently, e.g. "podCIDR" and "serviceCIDR", or "podClusterIPRange" and "serviceClusterIPRange", to avoid confusion?
What, then, is the currently recommended approach for putting the internal Kubernetes network addressing on a custom subnet?
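Concretely, this is the kind of configuration I am trying to arrive at. The split below is only a hypothetical illustration of my intent, not something I know to pass kops validation; whether podCIDR and serviceClusterIPRange may sit inside nonMasqueradeCIDR is exactly what I am unsure about:

```yaml
spec:
  # Hypothetical custom internal addressing, entirely within a private range of my choice:
  nonMasqueradeCIDR: 10.1.0.0/16
  podCIDR: 10.1.0.0/17                   # pods; deliberately inside nonMasqueradeCIDR
  serviceClusterIPRange: 10.1.128.0/20   # services; disjoint from podCIDR
```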
None of this is documented properly; one has to dig into the source code to understand it, or else I have failed miserably at finding the documentation.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten