sealer
When containerd is used as the container runtime and the systemd cgroup is enabled in the containerd configuration inside the cluster image rootfs (SystemdCgroup = true), an erroneous extra line (SystemdCgroup = true) is added near the cgroupDriver line of the kubelet configuration file
What happened?
When containerd is used as the container runtime and the systemd cgroup is enabled (SystemdCgroup = true) in the containerd configuration inside the rootfs of the cluster image, an erroneous extra line (SystemdCgroup = true) is written next to the cgroupDriver line of the kubelet configuration file, causing cluster initialization to fail.
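The failure mode can be sketched with a minimal, self-contained shell example. The file below is a scratch stand-in for illustration only; on a real node the kubelet config typically lives at /var/lib/kubelet/config.yaml:

```shell
# Reproduce the symptom on a scratch file: a containerd-style TOML line
# ("SystemdCgroup = true") appended to a kubelet YAML configuration.
cat > /tmp/kubelet-config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
SystemdCgroup = true
EOF

# "SystemdCgroup = true" is TOML syntax, not a KubeletConfiguration field,
# so the kubelet rejects the file and kubeadm init fails. The count below
# confirms the stray entry is present:
grep -c "SystemdCgroup" /tmp/kubelet-config.yaml   # -> 1
```

The correct kubelet-side setting is only `cgroupDriver: systemd`; the TOML form belongs exclusively in containerd's config.toml.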
Relevant log output?
No response
What you expected to happen?
There should not be this extra, erroneous configuration line (SystemdCgroup = true) in the kubelet configuration file
How to reproduce it (as minimally and precisely as possible)?
No response
Anything else we need to know?
What is the version of Sealer you are using?
{"gitVersion":"v0.8.6","gitCommit":"884513e","buildDate":"2022-07-12 02:58:54","goVersion":"go1.16.15","compiler":"gc","platform":"linux/amd64"}
What is your OS environment?
Debian 10
What is the Kernel version?
4.19.0-21-amd64
Other environment you want to tell us?
- Cloud provider or hardware configuration:
- Install tools:
- Others:
@czhfe, could you please show your containerd config toml file under /etc/containerd?

grep "SystemdCgroup = true" /etc/containerd/config.toml
@czhfe, which cloud image did you use?
I am using a cloud image that I built myself
@kakaZhou719 You can enable the systemd cgroup in containerd (SystemdCgroup = true) to reproduce the problem
@czhfe, I ran successfully with (SystemdCgroup = true). Please show us your full config.toml and your /var/lib/sealer/data/overlay2/{$layerid}/etc/kubeadm.yml under the cloudimage rootfs; {$layerid} can be obtained from sealer inspect. Below is my config.toml:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
- /etc/containerd/dump-containerd.toml
version = 2

[plugins]
  [plugins."io.containerd.grpc.v1.cri"]
    sandbox_image = "sea.hub:5000/huis/pause:3.2"
    [plugins."io.containerd.grpc.v1.cri".containerd]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true
    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = "/etc/docker/certs.d/"
- Final Configuration
containerd --config /etc/containerd/dump-containerd.toml config dump > /etc/containerd/config.toml
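The dump step above can be verified the same way. This is a minimal sketch using scratch paths (not the real /etc/containerd files, and with a plain copy standing in for `containerd config dump`), showing that a grep over the merged output is enough to confirm the systemd cgroup setting survived:

```shell
# Scratch stand-in for the source config passed via --config.
cat > /tmp/dump-containerd.toml <<'EOF'
version = 2
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Stand-in for: containerd --config ... config dump > /etc/containerd/config.toml
cp /tmp/dump-containerd.toml /tmp/config.toml

# Confirm the runc runtime still has the systemd cgroup enabled in the
# final configuration; prints the matching line.
grep -n "SystemdCgroup = true" /tmp/config.toml
```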
@czhfe, and please show your kubeadm.yml file under: /var/lib/sealer/data/overlay2/{$layerid}/etc/kubeadm.yml
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  taints: null
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.19.16
imageRepository: sea.hub:5000/huis
apiServer:
  extraArgs:
    audit-log-format: json
    audit-log-maxage: "7"
    audit-log-maxbackup: "10"
    audit-log-maxsize: "100"
    audit-log-path: /var/log/kubernetes/audit.log
    audit-policy-file: /etc/kubernetes/audit-policy.yml
    enable-aggregator-routing: "true"
    feature-gates: TTLAfterFinished=true,EphemeralContainers=true
  extraVolumes:
  - hostPath: /etc/kubernetes
    mountPath: /etc/kubernetes
    name: audit
    pathType: DirectoryOrCreate
  - hostPath: /var/log/kubernetes
    mountPath: /var/log/kubernetes
    name: audit-log
    pathType: DirectoryOrCreate
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    pathType: File
    readOnly: true
controllerManager:
  extraArgs:
    experimental-cluster-signing-duration: 876000h
    feature-gates: TTLAfterFinished=true,EphemeralContainers=true
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    pathType: File
    readOnly: true
dns:
  type: ""
  #imageRepository: sea.hub:5000/coredns
etcd:
  local:
    dataDir: ""
    extraArgs:
      listen-metrics-urls: http://0.0.0.0:2381
networking:
  podSubnet: 10.16.0.0/12
  serviceSubnet: 10.15.0.0/16
scheduler:
  extraArgs:
    feature-gates: TTLAfterFinished=true,EphemeralContainers=true
  extraVolumes:
  - hostPath: /etc/localtime
    mountPath: /etc/localtime
    name: localtime
    pathType: File
    readOnly: true
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  timeout: 5m0s
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
controlPlane:
  localAPIEndpoint:
    bindPort: 6443
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDomain: cluster.local
configMapAndSecretChangeDetectionStrategy: Watch
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuCFSQuotaPeriod: 100ms
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
logging: {}
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeLeaseDurationSeconds: 40
nodeStatusReportFrequency: 10s
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
port: 10250
registryBurst: 10
registryPullQPS: 5
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: ""
bindAddressHardFail: false
clientConnection:
  acceptContentTypes: ""
  burst: 0
  contentType: ""
  kubeconfig: ""
  qps: 0
clusterCIDR: ""
configSyncPeriod: 0s
conntrack:
  maxPerCore: null
  min: null
  tcpCloseWaitTimeout: null
  tcpEstablishedTimeout: null
detectLocalMode: ""
enableProfiling: false
healthzBindAddress: ""
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: null
  minSyncPeriod: 0s
  syncPeriod: 0s
ipvs:
  excludeCIDRs:
  - 10.103.97.2/32
  minSyncPeriod: 0s
  scheduler: ""
  strictARP: false
  syncPeriod: 0s
  tcpFinTimeout: 0s
  tcpTimeout: 0s
  udpTimeout: 0s
metricsBindAddress: ""
mode: ipvs
nodePortAddresses: null
oomScoreAdj: null
portRange: ""
showHiddenMetricsForVersion: ""
udpIdleTimeout: 0s
winkernel:
  enableDSR: false
  networkName: ""
  sourceVip: ""