k0sctl
Using a custom service CIDR causes a SAN-related error during install; it should not
- Create a cluster with a custom service CIDR
- Observe that the x509 certificate is generated for many IPs, but not for the ClusterIP of the `kubernetes` Service. From `kubectl describe pods -n kube-system coredns-7bf57bcbd8-vn668`:
```
Warning FailedCreatePodSandBox 11m kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to setup network for sandbox "dac76047cd6c4e0081af539dad1e60206ce4fb32e4b7ab86369e24eb54c11d55": plugin type="calico" failed (add): error getting ClusterInformation: Get "https://10.152.184.1:443/apis/crd.projectcalico.org/v1/clusterinformations/default": x509: certificate is valid for 127.0.0.1, 127.0.1.1, 10.2.0.20, fe80::6e02:e0ff:fe77:9cac, 10.2.0.19, 10.2.0.22, 10.96.0.1, not 10.152.184.1
```
- Observe that `10.96.0.1, not 10.152.184.1` in the error indicates the default service CIDR `10.96.0.0/16` was in effect when the certificates were generated.
- Observe that Calico does not accept a service CIDR setting (or anything similar); it appears to be configured correctly.
- Observe that this default CIDR is not specified anywhere in the configuration below.
- Add the IP of the default `kubernetes` Service to the SANs (under `spec.api`):

  ```yaml
  sans:
    # included from the service CIDR to resolve the calico issue
    - 10.152.184.1
  extraArgs:
    service-cluster-ip-range: 10.152.184.0/24
  ```
- Observe that installing the cluster with this change resolves the issue.
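The arithmetic behind the workaround can be sketched as follows. The ClusterIP of the `kubernetes` Service is the first usable address of the service CIDR, which is why a custom CIDR requires that address in the apiserver certificate SANs. A minimal illustration using Python's standard `ipaddress` module (not part of k0s or k0sctl; just a sketch of the derivation):

```python
import ipaddress

def kubernetes_service_ip(service_cidr: str) -> str:
    """Return the first usable address of the CIDR, i.e. the ClusterIP
    assigned to the default `kubernetes` Service."""
    net = ipaddress.ip_network(service_cidr)
    return str(net.network_address + 1)

print(kubernetes_service_ip("10.96.0.0/16"))     # default CIDR -> 10.96.0.1
print(kubernetes_service_ip("10.152.184.0/24"))  # custom CIDR  -> 10.152.184.1
```

This matches the error above: the certificate was issued for `10.96.0.1` (derived from the default CIDR) while clients were dialing `10.152.184.1` (derived from the custom CIDR).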
k0sctl.yaml

```yaml
apiVersion: k0sctl.k0sproject.io/v1beta1
kind: Cluster
metadata:
  name: k0s-cluster
spec:
  hosts:
  - ssh:
      address: 10.2.0.19
      user: administrator
      port: 22
    role: controller
  - ssh:
      address: 10.2.0.20
      user: administrator
      port: 22
    role: controller
  - ssh:
      address: 10.2.0.22
      user: administrator
      port: 22
    role: controller
  - ssh:
      address: 10.2.0.65
      user: administrator
      port: 22
    role: worker
  k0s:
    version: 1.26.2+k0s.0
    dynamicConfig: true
    config:
      apiVersion: k0s.k0sproject.io/v1beta1
      kind: Cluster
      metadata:
        name: k0s
      spec:
        api:
          k0sApiPort: 9443
          port: 6443
          tunneledNetworkingMode: false
        controllerManager:
          extraArgs:
            horizontal-pod-autoscaler-sync-period: 1s
        extensions:
          helm:
            charts: null
            repositories: null
          storage:
            create_default_storage_class: false
            type: external_storage
        images:
          calico:
            cni:
              image: docker.io/calico/cni
              version: v3.23.5
            kubecontrollers:
              image: docker.io/calico/kube-controllers
              version: v3.23.5
            node:
              image: docker.io/calico/node
              version: v3.23.5
          coredns:
            image: docker.io/coredns/coredns
            version: 1.10.1
          default_pull_policy: IfNotPresent
          konnectivity:
            image: quay.io/k0sproject/apiserver-network-proxy-agent
            version: 0.0.33-k0s
          kubeproxy:
            image: registry.k8s.io/kube-proxy
            version: v1.26.2
          kuberouter:
            cni:
              image: docker.io/cloudnativelabs/kube-router
              version: v1.5.1
            cniInstaller:
              image: quay.io/k0sproject/cni-node
              version: 1.1.1-k0s.0
          metricsserver:
            image: registry.k8s.io/metrics-server/metrics-server
            version: v0.6.2
          pushgateway:
            image: quay.io/k0sproject/pushgateway-ttl
            version: edge@sha256:7031f6bf6c957e2fdb496161fe3bea0a5bde3de800deeba7b2155187196ecbd9
        installConfig:
          users:
            etcdUser: etcd
            kineUser: kube-apiserver
            konnectivityUser: konnectivity-server
            kubeAPIserverUser: kube-apiserver
            kubeSchedulerUser: kube-scheduler
        konnectivity:
          adminPort: 8133
          agentPort: 8132
        network:
          calico:
            mode: bird
            overlay: Never
          clusterDomain: cluster.local
          dualStack: { }
          kubeProxy:
            iptables:
              masqueradeAll: true
              minSyncPeriod: 0s
              syncPeriod: 0s
            ipvs:
              minSyncPeriod: 0s
              syncPeriod: 0s
              tcpFinTimeout: 0s
              tcpTimeout: 0s
              udpTimeout: 0s
            metricsBindAddress: 0.0.0.0:10249
            mode: iptables
          kuberouter: null
          nodeLocalLoadBalancing:
            envoyProxy:
              apiServerBindPort: 7443
              image:
                image: docker.io/envoyproxy/envoy-distroless
                version: v1.24.1
              konnectivityServerBindPort: 7132
            type: EnvoyProxy
          podCIDR: 10.3.0.0/16
          provider: calico
          serviceCIDR: 10.152.184.0/24
        scheduler: { }
        storage:
          type: etcd
        telemetry:
          enabled: true
```

(extra args related to OIDC were removed)