kubekey
Creating a cluster with --with-packages does not install the component versions from the artifact as expected
What is the version of KubeKey that has the issue?
3.0.7
What is your OS environment?
Kylin Linux Advanced Server V10 (Sword)
KubeKey config file
manifest.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: kylin-linux-v10
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: kylin
    version: "V10"
    osImage: Kylin Linux Advanced Server V10 (Sword)
    repository:
      iso:
        localPath: /root/KylinOS.iso
  kubernetesDistributions:
  - type: kubernetes
    version: v1.23.10
  components:
    helm:
      version: v3.11.1
    cni:
      version: v1.2.0
    etcd:
      version: v3.5.7
    containerRuntimes:
    - type: docker
      version: 20.10.23
    crictl:
      version: v1.24.0
    harbor:
      version: v2.7.1
    docker-compose:
      version: v2.16.0
  images:
  - docker.io/calico/cni:v3.23.2
  - docker.io/calico/kube-controllers:v3.23.2
  - docker.io/calico/node:v3.23.2
  - docker.io/calico/pod2daemon-flexvol:v3.23.2
  - docker.io/coredns/coredns:1.8.6
  - docker.io/kubesphere/k8s-dns-node-cache:1.15.12
  - docker.io/kubesphere/kube-apiserver:v1.23.10
  - docker.io/kubesphere/kube-controller-manager:v1.23.10
  - docker.io/kubesphere/kube-proxy:v1.23.10
  - docker.io/kubesphere/kube-scheduler:v1.23.10
  - docker.io/kubesphere/pause:3.6
  registry:
    auths: {}
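As a quick sanity check, the component versions declared in the manifest can be pulled out with plain sed, to be compared later against what kk actually installs. A minimal sketch, assuming GNU sed is available; the heredoc below stands in for the real manifest.yaml:

```shell
# Write just the components section to a temp file (stand-in for manifest.yaml).
cat > /tmp/manifest-components.yaml <<'EOF'
components:
  helm:
    version: v3.11.1
  containerRuntimes:
  - type: docker
    version: 20.10.23
EOF

# Grab the version line that follows each component key.
helm_ver=$(sed -n '/helm:/{n;s/.*version: *//p;}' /tmp/manifest-components.yaml)
docker_ver=$(sed -n '/type: docker/{n;s/.*version: *//p;}' /tmp/manifest-components.yaml)
echo "manifest declares helm=$helm_ver docker=$docker_ver"
```

Against the manifest above this prints helm=v3.11.1 and docker=20.10.23, which is what the artifact is expected to install.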
config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: kube-master-01, address: 172.17.30.150, internalAddress: 172.17.30.150, privateKeyPath: "/root/.ssh/xxxx"}
  - {name: kube-master-02, address: 172.17.30.151, internalAddress: 172.17.30.151, privateKeyPath: "/root/.ssh/xxxx"}
  - {name: kube-master-03, address: 172.17.30.152, internalAddress: 172.17.30.152, privateKeyPath: "/root/.ssh/xxxx"}
  - {name: kube-worker-01, address: 172.17.30.154, internalAddress: 172.17.30.154, privateKeyPath: "/root/.ssh/xxxx"}
  - {name: kube-worker-02, address: 172.17.30.155, internalAddress: 172.17.30.155, privateKeyPath: "/root/.ssh/xxxx"}
  - {name: kube-worker-03, address: 172.17.30.156, internalAddress: 172.17.30.156, privateKeyPath: "/root/.ssh/xxxx"}
  - {name: kube-worker-04, address: 172.17.30.157, internalAddress: 172.17.30.157, privateKeyPath: "/root/.ssh/xxxx"}
  - {name: kube-worker-05, address: 172.17.30.158, internalAddress: 172.17.30.158, privateKeyPath: "/root/.ssh/xxxx"}
  - {name: kube-worker-06, address: 172.17.30.159, internalAddress: 172.17.30.159, privateKeyPath: "/root/.ssh/xxxx"}
  roleGroups:
    etcd:
    - kube-master-01
    - kube-master-02
    - kube-master-03
    control-plane:
    - kube-master-01
    - kube-master-02
    - kube-master-03
    worker:
    - kube-worker-01
    - kube-worker-02
    - kube-worker-03
    - kube-worker-04
    - kube-worker-05
    - kube-worker-06
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: "xxxxxxxxxx"
    port: 6443
  system:
    ntpServers:
    - "xxxxxxxxx"
    timezone: "Asia/Shanghai"
  kubernetes:
    version: v1.23.10
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    privateRegistry: "xxxxxxxx"
    namespaceOverride: "kubernetes"
    registryMirrors: []
    insecureRegistries: []
    auths:
      "xxxxxx":
        username: "robot$kubernetes"
        password: xxxxxxxxxxxx
  addons:
  - name: nfs-client
    namespace: kube-system
    sources:
      chart:
        name: nfs-client-provisioner
        repo: https://xxxxxxxxxx/chartrepo/main
        values:
        - image.repository=xxxxxxxxx/kubernetes/nfs-subdir-external-provisioner
        - storageClass.name=nfs-client
        - storageClass.defaultClass=true
        - nfs.server=xxxxxxxx
        - nfs.path=/data/kubernetes
        - nfs.mountOptions={nfsvers=4,proto=tcp,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport}
        - nfs.volumeName=nfs-client
A clear and concise description of what happened.
- exec kk artifact export -m manifest.yaml to export the kubekey-artifact.tar.gz file
- exec kk create cluster -f config-sample.yaml --with-packages -a kubekey-artifact.tar.gz

The --with-packages parameter did not install components such as docker and helm with the versions specified in kubekey-artifact.tar.gz, as expected.
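One quick way to see the mismatch is to check, on each node, which versions were actually installed. A minimal, read-only sketch; it assumes nothing beyond docker and helm possibly being on PATH after the install, and only prints what it finds:

```shell
# Sketch: report the versions actually installed on a node, so they can be
# compared against the manifest (helm v3.11.1, docker 20.10.23 in this case).
check_versions() {
  for bin in docker helm; do
    if command -v "$bin" >/dev/null 2>&1; then
      # Print only the first line of version output to keep it readable.
      printf '%s -> %s\n' "$bin" "$("$bin" version 2>/dev/null | head -n1)"
    else
      printf '%s -> not found on this node\n' "$bin"
    fi
  done
}
check_versions
```

Running this on the affected nodes and pasting the output into the report would make the version discrepancy concrete.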
Relevant log output
No response
Additional information
No response
KubeKey 3.x does not support custom versions of Docker and Helm; this will be supported in 4.x.