Improvement for k8s.io/docs/reference/networking/ports-and-protocols/

Open kel-bluehalo opened this issue 1 year ago • 7 comments

Suggestion for improvement: include the outbound ports required by CoreDNS.

| Protocol | Direction | Port Range | Purpose | Used By       |
|----------|-----------|------------|---------|---------------|
| TCP      | Outbound  | 8080       | CoreDNS | Control plane |
| TCP      | Outbound  | 8181       | CoreDNS | Control plane |
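For context (this is my reading of a default kubeadm setup, not something the page currently states): 8080 and 8181 appear to be the CoreDNS `health` and `ready` plugin ports, which the kubelet probes. One way to confirm which ports your own deployment actually uses:

```sh
# Show the CoreDNS Corefile; the `health` plugin defaults to :8080 and the
# `ready` plugin to :8181 unless overridden.
kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'

# Cross-check the ports the kubelet actually probes on the CoreDNS containers.
kubectl -n kube-system get deployment coredns \
  -o jsonpath='{range .spec.template.spec.containers[*]}{.livenessProbe.httpGet.port}{" "}{.readinessProbe.httpGet.port}{"\n"}{end}'
```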

kel-bluehalo avatar Nov 05 '24 20:11 kel-bluehalo

This issue is currently awaiting triage.

SIG Docs takes a lead on issue triage for this website, but any Kubernetes member can accept issues by applying the triage/accepted label.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Nov 05 '24 20:11 k8s-ci-robot

Page with the issue is https://kubernetes.io/docs/reference/networking/ports-and-protocols

/language en

steve-hardman avatar Nov 06 '24 01:11 steve-hardman

It sounds like there's a scenario that you want to make sure works OK @kel-bluehalo.

Is this about allowing CoreDNS to make an outbound connection on TCP/8080, or about allowing the control plane to act as a client of CoreDNS? Let's clarify the ask.

/triage needs-information

sftim avatar Nov 07 '24 17:11 sftim

My scenario is running `kubeadm init` on a fresh single-node machine. I'm not sure that my "Used By" field was correct; I only have one node, acting as both the control plane and the worker node.

This machine has a restrictive firewall, where the INPUT and OUTPUT tables in iptables have a default DROP policy.

The issue is that after starting flannel and doing basic setup to work with the single node, the coredns pods never become ready and get stuck in CrashLoopBackOff. On inspection, I'm seeing packets from 10.244.0.1 to the coredns IP address 10.244.0.4, destination port 8181, being dropped by the OUTPUT firewall table. If I add rules to the firewall to ACCEPT traffic with destination ports 8080 and 8181, the coredns pods become healthy (see the sketch below).
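Roughly what that workaround looks like (a sketch only; the chains and ports reflect my single-node setup with default DROP policies, so adjust to your own environment):

```sh
# Allow the kubelet's liveness/readiness probes to reach CoreDNS
# (health on 8080, ready on 8181) despite the default DROP policies.
for port in 8080 8181; do
  iptables -I INPUT  -p tcp --dport "$port" -j ACCEPT -m comment --comment "CoreDNS probes"
  iptables -I OUTPUT -p tcp --dport "$port" -j ACCEPT -m comment --comment "CoreDNS probes"
done
```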

kel-bluehalo avatar Dec 11 '24 20:12 kel-bluehalo

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Mar 11 '25 21:03 k8s-triage-robot

/remove-lifecycle stale

kundan2707 avatar Mar 27 '25 09:03 kundan2707

Wanted to comment before this gets closed out, in case anyone else has a similar issue. I got Kubernetes running with the firewall enabled by adding the following rules.

Add rules like this:

    iptables -I INPUT -p tcp --dport 6443 -j ACCEPT -m comment --comment "Allow Kubernetes communication."
    iptables -I INPUT -p tcp --dport 30000:32767 -j ACCEPT -m comment --comment "Allow Kubernetes communication."

Add rules for the ports mentioned on https://kubernetes.io/docs/reference/networking/ports-and-protocols: 6443, 2379, 2380, 10250, 10259, 10257, 10256, 30000:32767.

For port 6443, also add rules with 6443 as the destination port (`--dport`) in the OUTPUT chain.

Also add the same rules for IPv6, using the ip6tables command.
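Putting that together, this is roughly what I ended up with (a sketch; the port list comes from the docs page, while the OUTPUT/IPv6 handling and the comment text are just how I chose to do it):

```sh
# Ports from the ports-and-protocols page (control plane + worker, single node).
PORTS="6443 2379 2380 10250 10259 10257 10256 30000:32767"

for port in $PORTS; do
  iptables  -I INPUT -p tcp --dport "$port" -j ACCEPT -m comment --comment "Allow Kubernetes communication."
  ip6tables -I INPUT -p tcp --dport "$port" -j ACCEPT -m comment --comment "Allow Kubernetes communication."
done

# Let local clients (kubectl, kubelet) reach the API server despite the
# default DROP policy on OUTPUT.
iptables  -I OUTPUT -p tcp --dport 6443 -j ACCEPT -m comment --comment "Allow Kubernetes communication."
ip6tables -I OUTPUT -p tcp --dport 6443 -j ACCEPT -m comment --comment "Allow Kubernetes communication."
```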

What is missing from that page (besides the OUTPUT and IPv6 details) is the following. Add rules to allow traffic to the following (a sketch of these rules follows the list):

  • Port 443 for your service CIDR (e.g. 10.96.0.0/16 in my case), both as source and destination
  • Port 443 for your pod subnet CIDR (e.g. 10.244.0.0/16 in my case), both as source and destination
  • Port 8080 for CoreDNS
  • Port 8181 for CoreDNS
  • INPUT and OUTPUT chain entries for internal loopback traffic ("-s 127.0.0.0/8 -d 127.0.0.0/8")
  • Port 5353 for DNS (INPUT and OUTPUT, both --dport and --sport)
  • Port 2381 for etcd health checks (run kubectl describe on your etcd pod to find this port), with loopback as the destination, i.e. "-d 127.0.0.1 --dport 2381"
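A sketch of those extra rules, using my CIDRs and ports (10.96.0.0/16, 10.244.0.0/16, etcd metrics on 2381); every value here should be swapped for whatever your cluster actually uses:

```sh
# Loopback traffic in both chains.
iptables -I INPUT  -s 127.0.0.0/8 -d 127.0.0.0/8 -j ACCEPT -m comment --comment "loopback"
iptables -I OUTPUT -s 127.0.0.0/8 -d 127.0.0.0/8 -j ACCEPT -m comment --comment "loopback"

# Port 443 to and from the service and pod CIDRs (each CIDR as source and destination).
for cidr in 10.96.0.0/16 10.244.0.0/16; do
  for chain in INPUT OUTPUT; do
    iptables -I "$chain" -p tcp -s "$cidr" --dport 443 -j ACCEPT -m comment --comment "cluster HTTPS"
    iptables -I "$chain" -p tcp -d "$cidr" --dport 443 -j ACCEPT -m comment --comment "cluster HTTPS"
  done
done

# CoreDNS health (8080) and ready (8181) probe ports.
for port in 8080 8181; do
  iptables -I INPUT  -p tcp --dport "$port" -j ACCEPT -m comment --comment "CoreDNS"
  iptables -I OUTPUT -p tcp --dport "$port" -j ACCEPT -m comment --comment "CoreDNS"
done

# DNS on 5353 in both chains, matched on either source or destination port.
for chain in INPUT OUTPUT; do
  iptables -I "$chain" -p udp --dport 5353 -j ACCEPT -m comment --comment "DNS"
  iptables -I "$chain" -p udp --sport 5353 -j ACCEPT -m comment --comment "DNS"
done

# etcd health checks against loopback (port taken from kubectl describe of the etcd pod).
iptables -I OUTPUT -p tcp -d 127.0.0.1 --dport 2381 -j ACCEPT -m comment --comment "etcd health"
```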

It would be nice to mention the above in the Kubernetes documentation.

By comparison, Kubernetes Dashboard and its ports (8443 and 8000) are well documented.

FYI - another issue I ran into was having no hardware clock on my air-gapped ARM-based debug board environment. The fake clock could go back in time by up to an hour, so if I rebooted within an hour of installing Kubernetes, the system clock went back to a time before Kubernetes was installed and the certificates were not yet valid.
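If anyone hits something similar, a quick sanity check (assuming the standard kubeadm PKI paths) is to compare the node's clock with the certificate validity window:

```sh
# Compare the node's idea of "now" with the API server certificate's validity window.
date -u
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -startdate -enddate

# kubeadm can also report certificate validity/expiration directly.
kubeadm certs check-expiration
```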

kel-bluehalo avatar Mar 27 '25 15:03 kel-bluehalo

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jun 25 '25 16:06 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Jul 25 '25 16:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Aug 24 '25 16:08 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Aug 24 '25 16:08 k8s-ci-robot