Improvement for k8s.io/docs/reference/networking/ports-and-protocols/
Suggestion for improvement: Include the outbound ports required by CoreDNS.

| Protocol | Direction | Port Range | Purpose | Used By |
|----------|-----------|------------|---------|---------|
| TCP | Outbound | 8080 | CoreDNS | Control plane |
| TCP | Outbound | 8181 | CoreDNS | Control plane |
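For context, 8080 and 8181 are the default ports of the CoreDNS health and ready plugins; the kubelet's liveness and readiness probes for the CoreDNS pods target them over HTTP. A quick manual check might look like the sketch below (the pod IP is only an example):

```sh
# Pod IP below is illustrative; substitute the address of a running CoreDNS pod.
curl -sS http://10.244.0.4:8080/health   # health plugin (livenessProbe target)
curl -sS http://10.244.0.4:8181/ready    # ready plugin (readinessProbe target)
```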
This issue is currently awaiting triage.
SIG Docs takes a lead on issue triage for this website, but any Kubernetes member can accept issues by applying the triage/accepted label.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Page with the issue is https://kubernetes.io/docs/reference/networking/ports-and-protocols
/language en
It sounds like there's a scenario that you want to make sure works OK @kel-bluehalo.
Is this about allowing CoreDNS to make an outbound connection on TCP/8080, or about allowing the control plane to act as a client of CoreDNS? Let's clarify the ask.
/triage needs-information
My scenario is running "kubeadm init" on a fresh single-node machine. I'm not sure that my "Used By" field was correct; I only have one node, acting as both the control plane and the worker node.
This machine has a restrictive firewall, where the INPUT and OUTPUT chains in iptables have a default DROP policy.
The issue is that after starting flannel and doing the basic setup for a single node, the CoreDNS pods never go Ready and get stuck in CrashLoopBackOff. On inspection, I see packets from 10.244.0.1 to the CoreDNS pod IP 10.244.0.4, destination port 8181, being dropped by the OUTPUT chain. If I add rules to the firewall to ACCEPT traffic with destination ports 8080 and 8181, the CoreDNS pods go healthy.
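A minimal sketch of the kind of rules that unblock this, assuming the default filter table and the default-DROP OUTPUT chain described above:

```sh
# Sketch only: accept the kubelet's HTTP probes towards the CoreDNS pod on the
# health (8080) and readiness (8181) ports, which the default-DROP OUTPUT
# chain was otherwise dropping.
iptables -I OUTPUT -p tcp --dport 8080 -j ACCEPT -m comment --comment "CoreDNS health probe"
iptables -I OUTPUT -p tcp --dport 8181 -j ACCEPT -m comment --comment "CoreDNS readiness probe"
```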
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Wanted to comment before this gets closed out, in case anyone else runs into a similar issue. I got Kubernetes running with the firewall enabled by adding the following rules.
Add rules like this:

    iptables -I INPUT -p tcp --dport 6443 -j ACCEPT -m comment --comment "Allow Kubernetes communication."
    iptables -I INPUT -p tcp --dport 30000:32767 -j ACCEPT -m comment --comment "Allow Kubernetes communication."
Add rules for the ports mentioned on https://kubernetes.io/docs/reference/networking/ports-and-protocols: 6443, 2379, 2380, 10250, 10259, 10257, 10256, 30000:32767.
For port 6443, also add rules with 6443 as the destination port ("--dport") in the OUTPUT chain.
Also add all of the above for IPv6, using the ip6tables command.
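As a rough sketch of the above, assuming the default filter table and the port list from the documentation page (adjust the chains and comments as needed):

```sh
#!/bin/sh
# Sketch only: open the documented Kubernetes TCP ports in both the INPUT and
# OUTPUT chains, for IPv4 and IPv6. Port list copied from
# https://kubernetes.io/docs/reference/networking/ports-and-protocols
PORTS="6443 2379 2380 10250 10259 10257 10256 30000:32767"

for cmd in iptables ip6tables; do
  for chain in INPUT OUTPUT; do
    for port in $PORTS; do
      "$cmd" -I "$chain" -p tcp --dport "$port" -j ACCEPT \
        -m comment --comment "Allow Kubernetes communication."
    done
  done
done
```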
What is missing from that page (besides the OUTPUT and IPv6 details) is the following. Add rules to allow traffic to (a sketch of these rules follows the list):
- Port 443 for your service IPs, e.g. 10.96.0.0/16, both as source and destination
- Port 443 for your pod subnet CIDR, e.g. 10.244.0.0/16, both as source and destination
- Port 8080 for CoreDNS
- Port 8181 for CoreDNS
- INPUT and OUTPUT chain entries for internal loopback ("-s 127.0.0.0/8 -d 127.0.0.0/8")
- Port 5353 for DNS (INPUT, OUTPUT, dport, and sport)
- Port 2381 for etcd health checks (run kubectl describe on your etcd pod to get this port) where loopback is the destination, i.e. "-d 127.0.0.1 --dport 2381"
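A hedged sketch of what those extra rules can look like; the values below (10.96.0.0/16, 10.244.0.0/16, port 2381) come from the single-node setup described above, so substitute your own service CIDR, pod CIDR, and etcd metrics port:

```sh
# Sketch only: adjust CIDRs and ports to match your cluster.
for chain in INPUT OUTPUT; do
  # Port 443 for the service and pod networks, with the CIDR as source and as destination
  for cidr in 10.96.0.0/16 10.244.0.0/16; do
    iptables -I "$chain" -p tcp -s "$cidr" --dport 443 -j ACCEPT
    iptables -I "$chain" -p tcp -d "$cidr" --dport 443 -j ACCEPT
  done
  # CoreDNS health (8080) and readiness (8181) probes
  iptables -I "$chain" -p tcp --dport 8080 -j ACCEPT
  iptables -I "$chain" -p tcp --dport 8181 -j ACCEPT
  # Internal loopback traffic
  iptables -I "$chain" -s 127.0.0.0/8 -d 127.0.0.0/8 -j ACCEPT
  # DNS on port 5353, both directions, dport and sport
  for proto in udp tcp; do
    iptables -I "$chain" -p "$proto" --dport 5353 -j ACCEPT
    iptables -I "$chain" -p "$proto" --sport 5353 -j ACCEPT
  done
done
# etcd health checks where loopback is the destination
iptables -I INPUT  -p tcp -d 127.0.0.1 --dport 2381 -j ACCEPT
iptables -I OUTPUT -p tcp -d 127.0.0.1 --dport 2381 -j ACCEPT
```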
It would be nice to mention the above in the Kubernetes documentation.
The Kubernetes Dashboard, by comparison, is already well documented, with its ports 8443 and 8000.
FYI - Another issue I ran into was that my air-gapped, ARM-based debug board has no real hardware clock. Its fake clock could go back in time by up to an hour, so if I rebooted within an hour, the system clock went back to a time before Kubernetes was installed and the certificates were not yet valid.
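If you hit something similar, comparing the system clock with the certificate validity window makes the problem visible (assuming a standard kubeadm layout under /etc/kubernetes/pki):

```sh
date -u                                                            # current system time (UTC)
openssl x509 -noout -dates -in /etc/kubernetes/pki/apiserver.crt   # notBefore / notAfter
```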
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.