
[BUG] k3s refuses to start with rootless Podman

[Open] Cubxity opened this issue 2 years ago • 6 comments

What did you do

  • Set up Rootless Podman and Podman Docker compatibility
  • Create a Fedora Toolbox container
  • Install k3d inside the container
  • Run k3d cluster create (a command-level sketch of these steps follows below)
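
For reference, a minimal sketch of these steps as shell commands, assuming Fedora defaults and k3d's documented install script (exact socket path and package setup may differ per machine; the path below matches the endpoint reported further down):

  # Enable the rootless Podman API socket and point Docker-compatible clients at it
  systemctl --user enable --now podman.socket
  export DOCKER_HOST=unix:///run/user/$(id -u)/podman/podman.sock

  # Create and enter a Fedora Toolbox container
  toolbox create k3d
  toolbox enter k3d

  # Inside the toolbox: install k3d, then create the cluster
  curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
  k3d cluster create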

What did you expect to happen

The cluster should be created successfully.

Screenshots or terminal output

The k3s container kept boot-looping; every start ended with the kubelet error "Failed to start ContainerManager" shown at the end of the log below.

time="2022-08-07T10:48:55Z" level=info msg="Starting k3s v1.23.8+k3s1 (53f2d4e7)"
time="2022-08-07T10:48:55Z" level=info msg="Configuring sqlite3 database connection pooling: maxIdleConns=2, maxOpenConns=0, connMaxLifetime=0s"
time="2022-08-07T10:48:55Z" level=info msg="Configuring database table schema and indexes, this may take a moment..."
time="2022-08-07T10:48:55Z" level=info msg="Database tables and indexes are up to date"
time="2022-08-07T10:48:55Z" level=info msg="Kine available at unix://kine.sock"
time="2022-08-07T10:48:55Z" level=info msg="Reconciling bootstrap data between datastore and disk"
time="2022-08-07T10:48:55Z" level=info msg="Tunnel server egress proxy mode: agent"
time="2022-08-07T10:48:55Z" level=info msg="Tunnel server egress proxy waiting for runtime core to become available"
time="2022-08-07T10:48:55Z" level=info msg="Running kube-apiserver --advertise-port=6443 --allow-privileged=true --anonymous-auth=false --api-audiences=https://kubernetes.default.svc.cluster.local,k3s --authorization-mode=Node,RBAC --bind-address=127.0.0.1 --cert-dir=/var/lib/rancher/k3s/server/tls/temporary-certs --client-ca-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --egress-selector-config-file=/var/lib/rancher/k3s/server/etc/egress-selector-config.yaml --enable-admission-plugins=NodeRestriction --enable-aggregator-routing=true --etcd-servers=unix://kine.sock --feature-gates=JobTrackingWithFinalizers=true --insecure-port=0 --kubelet-certificate-authority=/var/lib/rancher/k3s/server/tls/server-ca.crt --kubelet-client-certificate=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.crt --kubelet-client-key=/var/lib/rancher/k3s/server/tls/client-kube-apiserver.key --profiling=false --proxy-client-cert-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt --proxy-client-key-file=/var/lib/rancher/k3s/server/tls/client-auth-proxy.key --requestheader-allowed-names=system:auth-proxy --requestheader-client-ca-file=/var/lib/rancher/k3s/server/tls/request-header-ca.crt --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --secure-port=6444 --service-account-issuer=https://kubernetes.default.svc.cluster.local --service-account-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-account-signing-key-file=/var/lib/rancher/k3s/server/tls/service.key --service-cluster-ip-range=10.43.0.0/16 --service-node-port-range=30000-32767 --storage-backend=etcd3 --tls-cert-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt --tls-private-key-file=/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
time="2022-08-07T10:48:55Z" level=info msg="Running kube-scheduler --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --bind-address=127.0.0.1 --kubeconfig=/var/lib/rancher/k3s/server/cred/scheduler.kubeconfig --leader-elect=false --profiling=false --secure-port=10259"
time="2022-08-07T10:48:55Z" level=info msg="Waiting for API server to become available"
Flag --insecure-port has been deprecated, This flag has no effect now and will be removed in v1.24.
I0807 10:48:55.923496       2 server.go:593] external host was not specified, using 10.89.2.2
time="2022-08-07T10:48:55Z" level=info msg="Running kube-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --bind-address=127.0.0.1 --cluster-cidr=10.42.0.0/16 --cluster-signing-kube-apiserver-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kube-apiserver-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-client-cert-file=/var/lib/rancher/k3s/server/tls/client-ca.crt --cluster-signing-kubelet-client-key-file=/var/lib/rancher/k3s/server/tls/client-ca.key --cluster-signing-kubelet-serving-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-kubelet-serving-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --cluster-signing-legacy-unknown-cert-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --cluster-signing-legacy-unknown-key-file=/var/lib/rancher/k3s/server/tls/server-ca.key --configure-cloud-routes=false --controllers=*,-service,-route,-cloud-node-lifecycle --feature-gates=JobTrackingWithFinalizers=true --kubeconfig=/var/lib/rancher/k3s/server/cred/controller.kubeconfig --leader-elect=false --profiling=false --root-ca-file=/var/lib/rancher/k3s/server/tls/server-ca.crt --secure-port=10257 --service-account-private-key-file=/var/lib/rancher/k3s/server/tls/service.key --use-service-account-credentials=true"
I0807 10:48:55.924472       2 server.go:186] Version: v1.23.8+k3s1
time="2022-08-07T10:48:55Z" level=info msg="Running cloud-controller-manager --allocate-node-cidrs=true --authentication-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --authorization-kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --bind-address=127.0.0.1 --cloud-provider=k3s --cluster-cidr=10.42.0.0/16 --configure-cloud-routes=false --kubeconfig=/var/lib/rancher/k3s/server/cred/cloud-controller.kubeconfig --leader-elect=false --node-status-update-frequency=1m0s --port=0 --profiling=false"
time="2022-08-07T10:48:55Z" level=info msg="Node token is available at /var/lib/rancher/k3s/server/token"
time="2022-08-07T10:48:55Z" level=info msg="To join node to cluster: k3s agent -s https://10.89.2.2:6443 -t ${NODE_TOKEN}"
time="2022-08-07T10:48:55Z" level=info msg="Wrote kubeconfig /output/kubeconfig.yaml"
time="2022-08-07T10:48:55Z" level=info msg="Run: k3s kubectl"
I0807 10:48:55.927993       2 shared_informer.go:240] Waiting for caches to sync for node_authorizer
I0807 10:48:55.928630       2 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0807 10:48:55.928641       2 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0807 10:48:55.929298       2 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0807 10:48:55.929308       2 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0807 10:48:55.943886       2 genericapiserver.go:538] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
time="2022-08-07T10:48:55Z" level=info msg="certificate CN=k3d-k3s-default-server-0 signed by CN=k3s-server-ca@1659869264: notBefore=2022-08-07 10:47:44 +0000 UTC notAfter=2023-08-07 10:48:55 +0000 UTC"
I0807 10:48:55.944439       2 instance.go:274] Using reconciler: lease
time="2022-08-07T10:48:55Z" level=info msg="certificate CN=system:node:k3d-k3s-default-server-0,O=system:nodes signed by CN=k3s-client-ca@1659869264: notBefore=2022-08-07 10:47:44 +0000 UTC notAfter=2023-08-07 10:48:55 +0000 UTC"
time="2022-08-07T10:48:55Z" level=info msg="Module overlay was already loaded"
time="2022-08-07T10:48:55Z" level=info msg="Module nf_conntrack was already loaded"
time="2022-08-07T10:48:55Z" level=warning msg="Failed to load kernel module br_netfilter with modprobe"
time="2022-08-07T10:48:55Z" level=warning msg="Failed to load kernel module iptable_nat with modprobe"
time="2022-08-07T10:48:55Z" level=info msg="Set sysctl 'net/bridge/bridge-nf-call-iptables' to 1"
time="2022-08-07T10:48:55Z" level=error msg="Failed to set sysctl: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory"
time="2022-08-07T10:48:55Z" level=info msg="Set sysctl 'net/netfilter/nf_conntrack_max' to 196608"
time="2022-08-07T10:48:55Z" level=error msg="Failed to set sysctl: open /proc/sys/net/netfilter/nf_conntrack_max: permission denied"
time="2022-08-07T10:48:56Z" level=info msg="Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log"
time="2022-08-07T10:48:56Z" level=info msg="Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd"
W0807 10:48:56.161886       2 genericapiserver.go:538] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
W0807 10:48:56.163349       2 genericapiserver.go:538] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
W0807 10:48:56.176585       2 genericapiserver.go:538] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
W0807 10:48:56.178112       2 genericapiserver.go:538] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
W0807 10:48:56.184425       2 genericapiserver.go:538] Skipping API networking.k8s.io/v1beta1 because it has no resources.
W0807 10:48:56.187309       2 genericapiserver.go:538] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0807 10:48:56.193397       2 genericapiserver.go:538] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
W0807 10:48:56.193410       2 genericapiserver.go:538] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0807 10:48:56.194862       2 genericapiserver.go:538] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
W0807 10:48:56.194872       2 genericapiserver.go:538] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0807 10:48:56.198761       2 genericapiserver.go:538] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0807 10:48:56.202805       2 genericapiserver.go:538] Skipping API flowcontrol.apiserver.k8s.io/v1alpha1 because it has no resources.
W0807 10:48:56.206777       2 genericapiserver.go:538] Skipping API apps/v1beta2 because it has no resources.
W0807 10:48:56.206788       2 genericapiserver.go:538] Skipping API apps/v1beta1 because it has no resources.
W0807 10:48:56.208622       2 genericapiserver.go:538] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
I0807 10:48:56.212146       2 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0807 10:48:56.212157       2 plugins.go:161] Loaded 11 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
W0807 10:48:56.226287       2 genericapiserver.go:538] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
I0807 10:48:56.914544       2 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
I0807 10:48:56.914572       2 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0807 10:48:56.914724       2 dynamic_serving_content.go:131] "Starting controller" name="serving-cert::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.crt::/var/lib/rancher/k3s/server/tls/serving-kube-apiserver.key"
I0807 10:48:56.914737       2 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0807 10:48:56.914726       2 secure_serving.go:266] Serving securely on 127.0.0.1:6444
I0807 10:48:56.914772       2 dynamic_serving_content.go:131] "Starting controller" name="aggregator-proxy-cert::/var/lib/rancher/k3s/server/tls/client-auth-proxy.crt::/var/lib/rancher/k3s/server/tls/client-auth-proxy.key"
I0807 10:48:56.914784       2 apiservice_controller.go:97] Starting APIServiceRegistrationController
I0807 10:48:56.914789       2 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0807 10:48:56.914873       2 available_controller.go:491] Starting AvailableConditionController
I0807 10:48:56.914883       2 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0807 10:48:56.914908       2 customresource_discovery_controller.go:209] Starting DiscoveryController
I0807 10:48:56.915027       2 autoregister_controller.go:141] Starting autoregister controller
I0807 10:48:56.915035       2 cache.go:32] Waiting for caches to sync for autoregister controller
I0807 10:48:56.915047       2 controller.go:83] Starting OpenAPI AggregationController
I0807 10:48:56.915082       2 crdregistration_controller.go:111] Starting crd-autoregister controller
I0807 10:48:56.915086       2 shared_informer.go:240] Waiting for caches to sync for crd-autoregister
I0807 10:48:56.915188       2 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0807 10:48:56.915195       2 shared_informer.go:240] Waiting for caches to sync for cluster_authentication_trust_controller
I0807 10:48:56.915424       2 apf_controller.go:317] Starting API Priority and Fairness config controller
I0807 10:48:56.915549       2 controller.go:85] Starting OpenAPI controller
I0807 10:48:56.915564       2 naming_controller.go:291] Starting NamingConditionController
I0807 10:48:56.915578       2 establishing_controller.go:76] Starting EstablishingController
I0807 10:48:56.915586       2 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
I0807 10:48:56.915594       2 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0807 10:48:56.915602       2 crd_finalizer.go:266] Starting CRDFinalizer
I0807 10:48:56.915621       2 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/server/tls/client-ca.crt"
I0807 10:48:56.918598       2 dynamic_cafile_content.go:156] "Starting controller" name="request-header::/var/lib/rancher/k3s/server/tls/request-header-ca.crt"
E0807 10:48:56.962381       2 controller.go:161] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
time="2022-08-07T10:48:57Z" level=info msg="Containerd is now running"
time="2022-08-07T10:48:57Z" level=info msg="Connecting to proxy" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2022-08-07T10:48:57Z" level=info msg="Running kubelet --address=0.0.0.0 --anonymous-auth=false --authentication-token-webhook=true --authorization-mode=Webhook --cgroup-driver=cgroupfs --client-ca-file=/var/lib/rancher/k3s/agent/client-ca.crt --cloud-provider=external --cluster-dns=10.43.0.10 --cluster-domain=cluster.local --cni-bin-dir=/bin --cni-conf-dir=/var/lib/rancher/k3s/agent/etc/cni/net.d --container-runtime=remote --container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock --containerd=/run/k3s/containerd/containerd.sock --eviction-hard=imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim=imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --feature-gates=DevicePlugins=false --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubelet.kubeconfig --kubelet-cgroups=/k3s --node-labels= --pod-manifest-path=/var/lib/rancher/k3s/agent/pod-manifests --read-only-port=0 --resolv-conf=/etc/resolv.conf --runtime-cgroups=/k3s --serialize-image-pulls=false --tls-cert-file=/var/lib/rancher/k3s/agent/serving-kubelet.crt --tls-private-key-file=/var/lib/rancher/k3s/agent/serving-kubelet.key"
I0807 10:48:57.014937       2 cache.go:39] Caches are synced for AvailableConditionController controller
I0807 10:48:57.015013       2 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0807 10:48:57.015187       2 shared_informer.go:247] Caches are synced for crd-autoregister 
I0807 10:48:57.015213       2 cache.go:39] Caches are synced for autoregister controller
I0807 10:48:57.015496       2 apf_controller.go:322] Running API Priority and Fairness config worker
time="2022-08-07T10:48:57Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
I0807 10:48:57.017412       2 shared_informer.go:247] Caches are synced for cluster_authentication_trust_controller 
I0807 10:48:57.028049       2 shared_informer.go:247] Caches are synced for node_authorizer 
time="2022-08-07T10:48:57Z" level=info msg="Waiting to retrieve kube-proxy configuration; server is not ready: https://127.0.0.1:6443/v1-k3s/readyz: 500 Internal Server Error"
I0807 10:48:57.915584       2 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0807 10:48:57.924933       2 storage_scheduling.go:109] all system priority classes are created successfully or already exist.
W0807 10:48:58.048919       2 handler_proxy.go:104] no RequestInfo found in the context
E0807 10:48:58.048947       2 controller.go:116] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
I0807 10:48:58.048952       2 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
time="2022-08-07T10:48:58Z" level=info msg="Waiting for cloud-controller-manager privileges to become available"
time="2022-08-07T10:48:58Z" level=info msg="Kube API server is now running"
time="2022-08-07T10:48:58Z" level=info msg="ETCD server is now running"
time="2022-08-07T10:48:58Z" level=info msg="k3s is up and running"
time="2022-08-07T10:48:58Z" level=info msg="Applying CRD addons.k3s.cattle.io"
time="2022-08-07T10:48:58Z" level=info msg="Applying CRD helmcharts.helm.cattle.io"
time="2022-08-07T10:48:59Z" level=info msg="Applying CRD helmchartconfigs.helm.cattle.io"
time="2022-08-07T10:48:59Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-10.19.300.tgz"
time="2022-08-07T10:48:59Z" level=info msg="Writing static file: /var/lib/rancher/k3s/server/static/charts/traefik-crd-10.19.300.tgz"
Flag --cloud-provider has been deprecated, will be removed in 1.24 or later, in favor of removing cloud provider code from Kubelet.
Flag --cni-bin-dir has been deprecated, will be removed along with dockershim.
Flag --cni-conf-dir has been deprecated, will be removed along with dockershim.
Flag --containerd has been deprecated, This is a cadvisor flag that was mistakenly registered with the Kubelet. Due to legacy concerns, it will follow the standard CLI deprecation timeline before being removed.
I0807 10:48:59.040752       2 server.go:442] "Kubelet version" kubeletVersion="v1.23.8+k3s1"
time="2022-08-07T10:48:59Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml"
time="2022-08-07T10:48:59Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml"
time="2022-08-07T10:48:59Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/ccm.yaml"
time="2022-08-07T10:48:59Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/local-storage.yaml"
time="2022-08-07T10:48:59Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml"
time="2022-08-07T10:48:59Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml"
time="2022-08-07T10:48:59Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml"
time="2022-08-07T10:48:59Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/coredns.yaml"
time="2022-08-07T10:48:59Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml"
W0807 10:48:59.042206       2 manager.go:159] Cannot detect current cgroup on cgroup v2
time="2022-08-07T10:48:59Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml"
I0807 10:48:59.042288       2 dynamic_cafile_content.go:156] "Starting controller" name="client-ca-bundle::/var/lib/rancher/k3s/agent/client-ca.crt"
time="2022-08-07T10:48:59Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/rolebindings.yaml"
time="2022-08-07T10:48:59Z" level=info msg="Writing manifest: /var/lib/rancher/k3s/server/manifests/traefik.yaml"
time="2022-08-07T10:48:59Z" level=info msg="Stopped tunnel to 127.0.0.1:6443"
time="2022-08-07T10:48:59Z" level=info msg="Connecting to proxy" url="wss://10.89.2.2:6443/v1-k3s/connect"
time="2022-08-07T10:48:59Z" level=info msg="Proxy done" err="context canceled" url="wss://127.0.0.1:6443/v1-k3s/connect"
time="2022-08-07T10:48:59Z" level=info msg="error in remotedialer server [400]: websocket: close 1006 (abnormal closure): unexpected EOF"
time="2022-08-07T10:48:59Z" level=info msg="Handling backend connection request [k3d-k3s-default-server-0]"
E0807 10:48:59.148030       2 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0807 10:48:59.154146       2 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2022-08-07T10:48:59Z" level=info msg="Starting k3s.cattle.io/v1, Kind=Addon controller"
time="2022-08-07T10:48:59Z" level=info msg="Creating deploy event broadcaster"
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"a243d5cf-e420-409f-828a-aee7ff73d885\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"270\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
E0807 10:48:59.162473       2 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0807 10:48:59.163306       2 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0807 10:48:59.169264       2 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0807 10:48:59.170900       2 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2022-08-07T10:48:59Z" level=info msg="Creating svccontroller event broadcaster"
time="2022-08-07T10:48:59Z" level=info msg="Cluster dns configmap already exists"
time="2022-08-07T10:48:59Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
I0807 10:48:59.185407       2 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"ccm\", UID:\"a243d5cf-e420-409f-828a-aee7ff73d885\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"270\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/ccm.yaml\""
I0807 10:48:59.188099       2 controller.go:611] quota admission added evaluator for: addons.k3s.cattle.io
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"804e3d1a-9594-4011-a9ec-7b9d075e1cc9\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"283\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
I0807 10:48:59.214688       2 controller.go:611] quota admission added evaluator for: deployments.apps
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"coredns\", UID:\"804e3d1a-9594-4011-a9ec-7b9d075e1cc9\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"283\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/coredns.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"local-storage\", UID:\"41f69f6b-7842-45f9-9f96-deb018f1a983\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"295\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"local-storage\", UID:\"41f69f6b-7842-45f9-9f96-deb018f1a983\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"295\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/local-storage.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"38016400-30ac-4082-ad3b-457d827354f1\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"300\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"aggregated-metrics-reader\", UID:\"38016400-30ac-4082-ad3b-457d827354f1\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"300\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/aggregated-metrics-reader.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"14628294-ba78-496c-ab14-4d6abeafafd0\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"305\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-delegator\", UID:\"14628294-ba78-496c-ab14-4d6abeafafd0\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"305\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-delegator.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-reader\", UID:\"1b5f6164-7c5a-4624-979a-05f2cc177b63\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"309\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"auth-reader\", UID:\"1b5f6164-7c5a-4624-979a-05f2cc177b63\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"309\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/auth-reader.yaml\""
I0807 10:48:59.262379       2 serving.go:355] Generated self-signed cert in-memory
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-apiservice\", UID:\"d27a9c45-0652-4f7f-b37c-c363526854d4\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"315\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-apiservice\", UID:\"d27a9c45-0652-4f7f-b37c-c363526854d4\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"315\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-apiservice.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-deployment\", UID:\"8b5f40ed-5720-4dfc-b194-a07ca2cc688c\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"323\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-deployment\", UID:\"8b5f40ed-5720-4dfc-b194-a07ca2cc688c\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"323\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-deployment.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChart controller"
time="2022-08-07T10:48:59Z" level=info msg="Starting helm.cattle.io/v1, Kind=HelmChartConfig controller"
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"fd94ab9e-9442-425c-9867-20f0f0cfc141\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"366\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"67347a62-424e-4e73-a21c-f20f3821e039\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"364\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-service\", UID:\"db683711-5422-4d5d-b106-9dbbe8a1a062\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"330\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml\""
E0807 10:48:59.287734       2 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"metrics-server-service\", UID:\"db683711-5422-4d5d-b106-9dbbe8a1a062\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"330\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/metrics-server-service.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"resource-reader\", UID:\"edc4644c-630a-451c-b6b7-7554a8a17af7\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"336\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml\""
E0807 10:48:59.308455       2 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2022-08-07T10:48:59Z" level=info msg="Starting apps/v1, Kind=Deployment controller"
time="2022-08-07T10:48:59Z" level=info msg="Starting apps/v1, Kind=DaemonSet controller"
E0807 10:48:59.314220       2 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0807 10:48:59.316710       2 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2022-08-07T10:48:59Z" level=info msg="Starting rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding controller"
I0807 10:48:59.317733       2 controller.go:611] quota admission added evaluator for: helmcharts.helm.cattle.io
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"resource-reader\", UID:\"edc4644c-630a-451c-b6b7-7554a8a17af7\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"336\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/metrics-server/resource-reader.yaml\""
E0807 10:48:59.323017       2 memcache.go:196] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
E0807 10:48:59.324126       2 memcache.go:101] couldn't get resource list for metrics.k8s.io/v1beta1: the server is currently unable to handle the request
time="2022-08-07T10:48:59Z" level=info msg="Starting batch/v1, Kind=Job controller"
time="2022-08-07T10:48:59Z" level=info msg="Starting /v1, Kind=Node controller"
time="2022-08-07T10:48:59Z" level=info msg="Starting /v1, Kind=ConfigMap controller"
time="2022-08-07T10:48:59Z" level=info msg="Starting /v1, Kind=Pod controller"
time="2022-08-07T10:48:59Z" level=info msg="Starting /v1, Kind=Service controller"
time="2022-08-07T10:48:59Z" level=info msg="Starting /v1, Kind=ServiceAccount controller"
time="2022-08-07T10:48:59Z" level=info msg="Starting /v1, Kind=Endpoints controller"
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"67347a62-424e-4e73-a21c-f20f3821e039\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"364\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik"
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"HelmChart\", Namespace:\"kube-system\", Name:\"traefik-crd\", UID:\"fd94ab9e-9442-425c-9867-20f0f0cfc141\", APIVersion:\"helm.cattle.io/v1\", ResourceVersion:\"366\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyJob' Applying HelmChart using Job kube-system/helm-install-traefik-crd"
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rolebindings\", UID:\"bd4632f0-46ba-4d9f-8987-4b950b4c06e7\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"343\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"rolebindings\", UID:\"bd4632f0-46ba-4d9f-8987-4b950b4c06e7\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"343\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/rolebindings.yaml\""
I0807 10:48:59.489767       2 serving.go:355] Generated self-signed cert in-memory
I0807 10:48:59.635667       2 controllermanager.go:144] Version: v1.23.8+k3s1
I0807 10:48:59.637655       2 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0807 10:48:59.637667       2 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0807 10:48:59.637684       2 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0807 10:48:59.637688       2 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0807 10:48:59.637691       2 secure_serving.go:200] Serving securely on 127.0.0.1:10258
I0807 10:48:59.637728       2 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0807 10:48:59.637767       2 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0807 10:48:59.637772       2 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"4d83762e-3d15-41ef-a589-ab3cd8c2b96a\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"382\", FieldPath:\"\"}): type: 'Normal' reason: 'ApplyingManifest' Applying manifest at \"/var/lib/rancher/k3s/server/manifests/traefik.yaml\""
time="2022-08-07T10:48:59Z" level=info msg="Event(v1.ObjectReference{Kind:\"Addon\", Namespace:\"kube-system\", Name:\"traefik\", UID:\"4d83762e-3d15-41ef-a589-ab3cd8c2b96a\", APIVersion:\"k3s.cattle.io/v1\", ResourceVersion:\"382\", FieldPath:\"\"}): type: 'Normal' reason: 'AppliedManifest' Applied manifest at \"/var/lib/rancher/k3s/server/manifests/traefik.yaml\""
I0807 10:48:59.738360       2 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0807 10:48:59.738392       2 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
I0807 10:48:59.738409       2 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I0807 10:48:59.740643       2 controllermanager.go:196] Version: v1.23.8+k3s1
I0807 10:48:59.742562       2 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
I0807 10:48:59.742574       2 shared_informer.go:240] Waiting for caches to sync for RequestHeaderAuthRequestController
I0807 10:48:59.742591       2 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
I0807 10:48:59.742595       2 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0807 10:48:59.742629       2 configmap_cafile_content.go:201] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
I0807 10:48:59.742636       2 shared_informer.go:240] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
I0807 10:48:59.742639       2 tlsconfig.go:240] "Starting DynamicServingCertificateController"
I0807 10:48:59.742630       2 secure_serving.go:200] Serving securely on 127.0.0.1:10257
I0807 10:48:59.842841       2 shared_informer.go:247] Caches are synced for RequestHeaderAuthRequestController 
I0807 10:48:59.842903       2 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file 
I0807 10:48:59.843520       2 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
time="2022-08-07T10:49:00Z" level=info msg="Starting /v1, Kind=Secret controller"
time="2022-08-07T10:49:00Z" level=info msg="Updating TLS secret for kube-system/k3s-serving (count: 11): map[listener.cattle.io/cn-0.0.0.0:0.0.0.0 listener.cattle.io/cn-10.43.0.1:10.43.0.1 listener.cattle.io/cn-10.89.2.2:10.89.2.2 listener.cattle.io/cn-127.0.0.1:127.0.0.1 listener.cattle.io/cn-__1-f16284:::1 listener.cattle.io/cn-k3d-k3s-default-server-0:k3d-k3s-default-server-0 listener.cattle.io/cn-kubernetes:kubernetes listener.cattle.io/cn-kubernetes.default:kubernetes.default listener.cattle.io/cn-kubernetes.default.svc:kubernetes.default.svc listener.cattle.io/cn-kubernetes.default.svc.cluster.local:kubernetes.default.svc.cluster.local listener.cattle.io/cn-localhost:localhost listener.cattle.io/fingerprint:SHA1=E3587ECCC2E512405CD24661DF97A5DE0A4537BB]"
time="2022-08-07T10:49:00Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
I0807 10:49:01.042011       2 request.go:665] Waited for 1.049556483s due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:6444/apis/node.k8s.io/v1beta1
time="2022-08-07T10:49:01Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
E0807 10:49:01.645321       2 controllermanager.go:470] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0807 10:49:01.646281       2 node_controller.go:116] Sending events to api server.
I0807 10:49:01.646449       2 controllermanager.go:298] Started "cloud-node"
I0807 10:49:01.646535       2 node_controller.go:155] Waiting for informer caches to sync
I0807 10:49:01.647169       2 node_lifecycle_controller.go:77] Sending events to api server
I0807 10:49:01.647264       2 controllermanager.go:298] Started "cloud-node-lifecycle"
E0807 10:49:01.749508       2 controllermanager.go:479] unable to get all supported resources from server: unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
I0807 10:49:01.751003       2 shared_informer.go:240] Waiting for caches to sync for tokens
I0807 10:49:01.765579       2 controllermanager.go:605] Started "deployment"
I0807 10:49:01.765731       2 deployment_controller.go:153] "Starting controller" controller="deployment"
I0807 10:49:01.765761       2 shared_informer.go:240] Waiting for caches to sync for deployment
I0807 10:49:01.772752       2 controllermanager.go:605] Started "replicaset"
I0807 10:49:01.772798       2 replica_set.go:186] Starting replicaset controller
I0807 10:49:01.773399       2 shared_informer.go:240] Waiting for caches to sync for ReplicaSet
I0807 10:49:01.780222       2 controllermanager.go:605] Started "cronjob"
I0807 10:49:01.780371       2 cronjob_controllerv2.go:132] "Starting cronjob controller v2"
I0807 10:49:01.780404       2 shared_informer.go:240] Waiting for caches to sync for cronjob
I0807 10:49:01.787239       2 controllermanager.go:605] Started "persistentvolume-expander"
I0807 10:49:01.787391       2 expand_controller.go:342] Starting expand controller
I0807 10:49:01.787422       2 shared_informer.go:240] Waiting for caches to sync for expand
I0807 10:49:01.794739       2 controllermanager.go:605] Started "root-ca-cert-publisher"
I0807 10:49:01.794867       2 publisher.go:107] Starting root CA certificate configmap publisher
I0807 10:49:01.794899       2 shared_informer.go:240] Waiting for caches to sync for crt configmap
I0807 10:49:01.820818       2 controllermanager.go:605] Started "horizontalpodautoscaling"
I0807 10:49:01.820938       2 horizontal.go:168] Starting HPA controller
I0807 10:49:01.821421       2 shared_informer.go:240] Waiting for caches to sync for HPA
I0807 10:49:01.836055       2 controllermanager.go:605] Started "disruption"
I0807 10:49:01.836153       2 disruption.go:363] Starting disruption controller
I0807 10:49:01.836184       2 shared_informer.go:240] Waiting for caches to sync for disruption
I0807 10:49:01.843009       2 node_lifecycle_controller.go:377] Sending events to api server.
I0807 10:49:01.843378       2 taint_manager.go:163] "Sending events to api server"
I0807 10:49:01.843869       2 node_lifecycle_controller.go:505] Controller will reconcile labels.
I0807 10:49:01.843957       2 controllermanager.go:605] Started "nodelifecycle"
I0807 10:49:01.844151       2 node_lifecycle_controller.go:539] Starting node controller
I0807 10:49:01.844183       2 shared_informer.go:240] Waiting for caches to sync for taint
I0807 10:49:01.851942       2 shared_informer.go:247] Caches are synced for tokens 
I0807 10:49:01.852062       2 controllermanager.go:605] Started "persistentvolume-binder"
I0807 10:49:01.852432       2 pv_controller_base.go:310] Starting persistent volume controller
I0807 10:49:01.852453       2 shared_informer.go:240] Waiting for caches to sync for persistent volume
I0807 10:49:01.859549       2 controllermanager.go:605] Started "endpoint"
I0807 10:49:01.859758       2 endpoints_controller.go:193] Starting endpoint controller
I0807 10:49:01.860320       2 shared_informer.go:240] Waiting for caches to sync for endpoint
I0807 10:49:01.867200       2 controllermanager.go:605] Started "endpointslicemirroring"
I0807 10:49:01.867411       2 endpointslicemirroring_controller.go:212] Starting EndpointSliceMirroring controller
I0807 10:49:01.867442       2 shared_informer.go:240] Waiting for caches to sync for endpoint_slice_mirroring
I0807 10:49:01.874592       2 controllermanager.go:605] Started "serviceaccount"
I0807 10:49:01.874814       2 serviceaccounts_controller.go:117] Starting service account controller
I0807 10:49:01.875294       2 shared_informer.go:240] Waiting for caches to sync for service account
I0807 10:49:01.880309       2 controllermanager.go:605] Started "daemonset"
I0807 10:49:01.880353       2 daemon_controller.go:284] Starting daemon sets controller
I0807 10:49:01.880373       2 shared_informer.go:240] Waiting for caches to sync for daemon sets
I0807 10:49:01.885367       2 controllermanager.go:605] Started "endpointslice"
I0807 10:49:01.885410       2 endpointslice_controller.go:257] Starting endpoint slice controller
I0807 10:49:01.885917       2 shared_informer.go:240] Waiting for caches to sync for endpoint_slice
I0807 10:49:01.899772       2 garbagecollector.go:146] Starting garbage collector controller
I0807 10:49:01.899811       2 shared_informer.go:240] Waiting for caches to sync for garbage collector
I0807 10:49:01.899846       2 graph_builder.go:289] GraphBuilder running
I0807 10:49:01.900039       2 controllermanager.go:605] Started "garbagecollector"
I0807 10:49:01.906064       2 controllermanager.go:605] Started "ttl"
W0807 10:49:01.906100       2 controllermanager.go:570] "service" is disabled
I0807 10:49:01.906167       2 ttl_controller.go:121] Starting TTL controller
I0807 10:49:01.906188       2 shared_informer.go:240] Waiting for caches to sync for TTL
I0807 10:49:01.918331       2 controllermanager.go:605] Started "attachdetach"
I0807 10:49:01.918448       2 attach_detach_controller.go:328] Starting attach detach controller
I0807 10:49:01.918469       2 shared_informer.go:240] Waiting for caches to sync for attach detach
I0807 10:49:01.922959       2 controllermanager.go:605] Started "statefulset"
I0807 10:49:01.923009       2 stateful_set.go:147] Starting stateful set controller
I0807 10:49:01.923026       2 shared_informer.go:240] Waiting for caches to sync for stateful set
I0807 10:49:01.926429       2 node_ipam_controller.go:91] Sending events to api server.
W0807 10:49:01.940027       2 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
time="2022-08-07T10:49:02Z" level=info msg="Running kube-proxy --cluster-cidr=10.42.0.0/16 --conntrack-max-per-core=0 --conntrack-tcp-timeout-close-wait=0s --conntrack-tcp-timeout-established=0s --healthz-bind-address=127.0.0.1 --hostname-override=k3d-k3s-default-server-0 --kubeconfig=/var/lib/rancher/k3s/agent/kubeproxy.kubeconfig --proxy-mode=iptables"
I0807 10:49:02.039049       2 server.go:225] "Warning, all flags other than --config, --write-config-to, and --cleanup are deprecated, please begin using a config file ASAP"
E0807 10:49:02.040994       2 proxier.go:643] "Failed to read builtin modules file, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" err="open /lib/modules/5.18.15-200.fc36.x86_64/modules.builtin: no such file or directory" filePath="/lib/modules/5.18.15-200.fc36.x86_64/modules.builtin"
I0807 10:49:02.043114       2 proxier.go:653] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs"
I0807 10:49:02.044787       2 proxier.go:653] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_rr"
I0807 10:49:02.046224       2 proxier.go:653] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_wrr"
I0807 10:49:02.047999       2 proxier.go:653] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="ip_vs_sh"
I0807 10:49:02.049487       2 proxier.go:653] "Failed to load kernel module with modprobe, you can ignore this message when kube-proxy is running inside container without mounting /lib/modules" moduleName="nf_conntrack"
E0807 10:49:02.071366       2 node.go:152] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" not found
time="2022-08-07T10:49:02Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
E0807 10:49:03.161167       2 node.go:152] Failed to retrieve node info: nodes "k3d-k3s-default-server-0" not found
time="2022-08-07T10:49:03Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
W0807 10:49:04.053932       2 fs.go:220] stat failed on /dev/mapper/luks-39e04ece-691f-4ee4-b6b0-ac105dd22b60 with error: no such file or directory
W0807 10:49:04.053955       2 fs.go:220] stat failed on /dev/mapper/luks-a7943b37-a763-4d5e-98ab-6b56394dd80b with error: no such file or directory
E0807 10:49:04.075470       2 info.go:114] Failed to get system UUID: open /etc/machine-id: no such file or directory
W0807 10:49:04.076152       2 info.go:53] Couldn't collect info from any of the files in "/etc/machine-id,/var/lib/dbus/machine-id"
I0807 10:49:04.077379       2 server.go:693] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I0807 10:49:04.077801       2 container_manager_linux.go:281] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0807 10:49:04.077880       2 container_manager_linux.go:286] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/k3s SystemCgroupsName: KubeletCgroupsName:/k3s ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I0807 10:49:04.077916       2 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0807 10:49:04.077930       2 container_manager_linux.go:321] "Creating device plugin manager" devicePluginEnabled=false
I0807 10:49:04.077952       2 state_mem.go:36] "Initialized new in-memory state store"
time="2022-08-07T10:49:04Z" level=info msg="Waiting for control-plane node k3d-k3s-default-server-0 startup: nodes \"k3d-k3s-default-server-0\" not found"
I0807 10:49:04.279681       2 server.go:799] "Failed to ApplyOOMScoreAdj" err="write /proc/self/oom_score_adj: permission denied"
I0807 10:49:04.284811       2 kubelet.go:416] "Attempting to sync node with API server"
I0807 10:49:04.284866       2 kubelet.go:278] "Adding static pod path" path="/var/lib/rancher/k3s/agent/pod-manifests"
I0807 10:49:04.284931       2 kubelet.go:289] "Adding apiserver pod source"
I0807 10:49:04.284978       2 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
I0807 10:49:04.286160       2 kuberuntime_manager.go:249] "Container runtime initialized" containerRuntime="containerd" version="v1.5.13-k3s1" apiVersion="v1alpha2"
E0807 10:49:04.286978       2 server.go:1233] "Failed to set rlimit on max file handles" err="operation not permitted"
I0807 10:49:04.287027       2 server.go:1244] "Started kubelet"
W0807 10:49:04.291417       2 fs.go:599] Unable to get btrfs mountpoint IDs: stat failed on /dev/mapper/luks-39e04ece-691f-4ee4-b6b0-ac105dd22b60 with error: no such file or directory
E0807 10:49:04.291486       2 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs"
E0807 10:49:04.291524       2 kubelet.go:1351] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
I0807 10:49:04.292015       2 server.go:150] "Starting to listen" address="0.0.0.0" port=10250
I0807 10:49:04.292750       2 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I0807 10:49:04.294387       2 server.go:410] "Adding debug handlers to kubelet server"
I0807 10:49:04.300153       2 volume_manager.go:291] "Starting Kubelet Volume Manager"
I0807 10:49:04.304382       2 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
I0807 10:49:04.316740       2 cpu_manager.go:213] "Starting CPU manager" policy="none"
I0807 10:49:04.316767       2 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s"
I0807 10:49:04.316794       2 state_mem.go:36] "Initialized new in-memory state store"
I0807 10:49:04.317001       2 state_mem.go:88] "Updated default CPUSet" cpuSet=""
I0807 10:49:04.317018       2 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
I0807 10:49:04.317027       2 policy_none.go:49] "None policy: Start"
I0807 10:49:04.317643       2 memory_manager.go:168] "Starting memorymanager" policy="None"
I0807 10:49:04.317683       2 state_mem.go:35] "Initializing new in-memory state store"
I0807 10:49:04.317941       2 state_mem.go:75] "Updated machine memory state"
W0807 10:49:04.317992       2 fs.go:599] Unable to get btrfs mountpoint IDs: stat failed on /dev/mapper/luks-39e04ece-691f-4ee4-b6b0-ac105dd22b60 with error: no such file or directory
E0807 10:49:04.319455       2 container_manager_linux.go:457] "Updating kernel flag failed (Hint: enable KubeletInUserNamespace feature flag to ignore the error)" err="open /proc/sys/vm/overcommit_memory: permission denied" flag="vm/overcommit_memory"
E0807 10:49:04.319606       2 container_manager_linux.go:457] "Updating kernel flag failed (Hint: enable KubeletInUserNamespace feature flag to ignore the error)" err="open /proc/sys/kernel/panic: permission denied" flag="kernel/panic"
E0807 10:49:04.319705       2 container_manager_linux.go:457] "Updating kernel flag failed (Hint: enable KubeletInUserNamespace feature flag to ignore the error)" err="open /proc/sys/kernel/panic_on_oops: permission denied" flag="kernel/panic_on_oops"
E0807 10:49:04.319856       2 kubelet.go:1431] "Failed to start ContainerManager" err="[open /proc/sys/vm/overcommit_memory: permission denied, open /proc/sys/kernel/panic: permission denied, open /proc/sys/kernel/panic_on_oops: permission denied]"

-- k3s container crashed --

Which OS & Architecture

arch: amd64
cgroupdriver: systemd
cgroupversion: "2"
endpoint: /run/user/1000/podman/podman.sock
filesystem: btrfs
name: docker
os: fedora
ostype: linux
version: 4.1.1

Which version of k3d

k3d version v5.4.4
k3s version v1.23.8-k3s1 (default)

Which version of docker

Client:       Podman Engine
Version:      4.1.1
API Version:  4.1.1
Go Version:   go1.18.4
Built:        Fri Jul 22 21:05:59 2022
OS/Arch:      linux/amd64
host:
  arch: amd64
  buildahVersion: 1.26.1
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.0-2.fc36.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.0, commit: '
  cpuUtilization:
    idlePercent: 70.86
    systemPercent: 6.09
    userPercent: 23.06
  cpus: 6
  distribution:
    distribution: fedora
    variant: workstation
    version: "36"
  eventLogger: journald
  hostname: fedora
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.18.15-200.fc36.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 537370624
  memTotal: 16704339968
  networkBackend: cni
  ociRuntime:
    name: crun
    package: crun-1.5-1.fc36.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.5
      commit: 54ebb8ca8bf7e6ddae2eb919f5b82d1d96863dea
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-0.2.beta.0.fc36.x86_64
    version: |-
      slirp4netns version 1.2.0-beta.0
      commit: 477db14a24ff1a3de3a705e51ca2c4c1fe3dda64
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.3
  swapFree: 6415708160
  swapTotal: 8589930496
  uptime: 2h 48m 58.32s (Approximately 0.08 days)
plugins:
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/user/.config/containers/storage.conf
  containerStore:
    number: 7
    paused: 0
    running: 1
    stopped: 6
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/user/.local/share/containers/storage
  graphRootAllocated: 676453154816
  graphRootUsed: 276889808896
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 27
  runRoot: /run/user/1000/containers
  volumePath: /home/user/.local/share/containers/storage/volumes
version:
  APIVersion: 4.1.1
  Built: 1658516759
  BuiltTime: Fri Jul 22 21:05:59 2022
  GitCommit: ""
  GoVersion: go1.18.4
  Os: linux
  OsArch: linux/amd64
  Version: 4.1.1

Cubxity avatar Aug 07 '22 10:08 Cubxity

This solution helped me here:

  • https://github.com/k3d-io/k3d/issues/1082#issuecomment-1156832664

almereyda avatar Nov 28 '22 00:11 almereyda

Hello!

I'm facing the same issue on Oracle Linux 8 with cgroup v2 and rootless Podman.

Since I'm already on cgroup v2, no additional changes were required there.

But the cluster was still unable to spin up. The command from the comment above, in particular the --k3s-arg '--kubelet-arg=feature-gates=KubeletInUserNamespace=true@server:*' parameter, did the trick.
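
For reference, a minimal invocation using just that parameter would look like this (a sketch; the linked comment may include additional flags relevant to your setup):

k3d cluster create --k3s-arg '--kubelet-arg=feature-gates=KubeletInUserNamespace=true@server:*'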

Unfortunately, that alone is not sufficient. The cluster starts up but is unable to launch any pods. The log is full of entries like this:

time="2023-05-24T06:20:19.651896955+02:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-59b4f5bbd5-l5bc8,Uid:2a286e4b-49bd-408d-9a7a-b78d8a172793,Namespace:kube-system,Attempt:0,} failed, error" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"/var/lib/rancher/k3s/agent/containerd/io.containerd.grpc.v1.cri/sandboxes/741f6346648e0eebfcd8abb7d088909dfe6090425a9ca72f74c83451f3369032/resolv.conf\" to rootfs at \"/etc/resolv.conf\": mount /var/lib/rancher/k3s/agent/containerd/io.containerd.grpc.v1.cri/sandboxes/741f6346648e0eebfcd8abb7d088909dfe6090425a9ca72f74c83451f3369032/resolv.conf:/etc/resolv.conf (via /proc/self/fd/6), flags: 0x5021: operation not permitted: unknown"

I have been facing the same issue with KinD (https://github.com/kubernetes-sigs/kind/issues/3234), and @aojea and @AkihiroSuda traced it back [1], [2] to runc: https://github.com/opencontainers/runc/pull/3805.

Hope this helps.

hadrabap avatar May 24 '23 06:05 hadrabap

The upstream runc PR

  • https://github.com/opencontainers/runc/pull/3805#event-9954301187

has just been merged. Once it is available in a release, it could help here.

Now that the PR is merged, it may also work out of the box without KubeletInUserNamespace? Or would that still be needed, and should it then be applied automatically (similar to the suggestion in https://github.com/k3d-io/k3d/issues/1082#issuecomment-1560509865)?

almereyda avatar Jul 29 '23 21:07 almereyda

The upstream runc PR has just been labeled for 1.2.0, so we can hopefully expect a release in the coming weeks (or months?).

Once that's out, what's a reasonable expectation for how long it will take to get a fix into k3s?

Thanks

testinfected avatar Aug 22 '23 14:08 testinfected

it may also work out of the box without KubeletInUserNamespace

Still needed for non-filesystem stuff such as sysctls: https://github.com/kubernetes/kubernetes/blob/aa8cb97f65fe1d24e96eda129337d86109615570/pkg/kubelet/cm/container_manager_linux.go#L435
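
For illustration, the kubelet tries to enforce roughly these kernel settings at startup (the defaults, to the best of my knowledge; exactly the three flags failing in the crash log above):

sysctl -w vm.overcommit_memory=1
sysctl -w kernel.panic=10
sysctl -w kernel.panic_on_oops=1

The KubeletInUserNamespace feature gate downgrades those failures to warnings instead of aborting ContainerManager startup.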

runc 1.2.0

Likely to take at least a month

AkihiroSuda avatar Aug 22 '23 15:08 AkihiroSuda

I can verify that the KubeletInUserNamespace flag is still needed for rootless Podman, but with it everything otherwise works :) Adding it to the guides could be enough until some other resolution lands; a possible config snippet is sketched below the version list.

podman version 4.3.1
k3d version v5.6.0
k3s version v1.27.4-k3s1 (default)
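
For the guides, a config-file equivalent of the CLI flag might look like this (a sketch, assuming the v1alpha5 config schema of recent k3d releases):

apiVersion: k3d.io/v1alpha5
kind: Simple
options:
  k3s:
    extraArgs:
      - arg: "--kubelet-arg=feature-gates=KubeletInUserNamespace=true"
        nodeFilters:
          - server:*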

R-Rudolf avatar Mar 29 '24 16:03 R-Rudolf