# kubekey: Offline installation fails on CentOS 7
### What version of KubeKey has the issue?
kk version: &version.Info{Major:"3", Minor:"0", GitVersion:"v3.0.10", GitCommit:"3e381c6d5556117d132326b58c5177e0b0e839b6", GitTreeState:"clean", BuildDate:"2023-07-28T06:08:59Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}
### What is your OS environment?
CentOS 7.6
### KubeKey config file
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 192.168.12.10, internalAddress: 192.168.12.10, user: root, password: "junan@123"}
  - {name: node1, address: 192.168.12.11, internalAddress: 192.168.12.11, user: root, password: "junan@123"}
  - {name: node2, address: 192.168.12.12, internalAddress: 192.168.12.12, user: root, password: "junan@123"}
  roleGroups:
    etcd:
    - master
    control-plane:
    - master
    worker:
    - node1
    - node2
    registry:
    - master
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    # internalLoadbalancer: haproxy
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.21.5
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: docker
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    auths:
      "dockerhub.kubekey.local":
        username: admin
        password: Harbor12345
    privateRegistry: "dockerhub.kubekey.local"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: []
  addons: []
---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.0
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  namespace_override: ""
  # dev_tag: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #   resources: {}
    # controllerManager:
    #   resources: {}
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsCpuReq: 0.5
    jenkinsCpuLim: 1
    jenkinsMemoryReq: 4Gi
    jenkinsMemoryLim: 4Gi
    jenkinsVolumeSize: 16Gi
  events:
    enabled: false
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    # ruler:
    #   enabled: true
    #   replicas: 2
    #   resources: {}
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    # operator:
    #   resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
          - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:
    enabled: false
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    timeout: 600
### A clear and concise description of what happened.
I built the offline package on CentOS 7.6; here is manifest-sample.yaml:
apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Manifest
metadata:
  name: sample
spec:
  arches:
  - amd64
  operatingSystems:
  - arch: amd64
    type: linux
    id: centos
    version: "7"
    osImage: CentOS Linux 7 (Core)
    repository:
      iso:
        localPath: /root/kubesphere-offine-make/centos-7-amd64-rpms.iso
  kubernetesDistributions:
  - type: kubernetes
    version: v1.21.5
  components:
    helm:
      version: v3.9.0
    cni:
      version: v1.2.0
    etcd:
      version: v3.4.13
    calicoctl:
      version: v3.23.2
    containerRuntimes:
    - type: docker
      version: 20.10.8
    crictl:
      version: v1.24.0
    docker-registry:
      version: "2"
    harbor:
      version: v2.4.1
    docker-compose:
      version: v2.2.2
  images:
  - docker.io/calico/cni:v3.23.2
  - docker.io/calico/kube-controllers:v3.23.2
  - docker.io/calico/node:v3.23.2
  - docker.io/calico/pod2daemon-flexvol:v3.23.2
  - docker.io/coredns/coredns:1.8.0
  - docker.io/kubesphere/k8s-dns-node-cache:1.15.12
  - docker.io/kubesphere/kube-apiserver:v1.21.5
  - docker.io/kubesphere/kube-controller-manager:v1.21.5
  - docker.io/kubesphere/kube-proxy:v1.21.5
  - docker.io/kubesphere/kube-scheduler:v1.21.5
  - docker.io/kubesphere/pause:3.4.1
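
(For reference, the offline artifact is produced from this manifest with kk's `artifact export` subcommand; a minimal sketch, assuming the output name matches the file used in the deploy command below:)

```shell
# Build the offline artifact from the manifest above; run on a machine
# with internet access. kk pulls the listed images and bundles them with
# the component binaries and the ISO package repository.
export KKZONE=cn   # optional mirror hint (assumption; only needed in CN networks)
./kk artifact export -m manifest-sample.yaml -o kubesphere.tar.gz
```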
After the build finished, I deployed the cluster with the following command, and an error occurred:
./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-packages
Error message:
... .... ...
Getting image source signatures
Getting image source signatures
Getting image source signatures
Getting image source signatures
Getting image source signatures
17:38:40 CST success: [LocalHost]
17:38:40 CST [CopyImagesToRegistryModule] Push multi-arch manifest to private registry
17:38:40 CST message: [LocalHost]
get manifest list failed by module cache
17:38:40 CST failed: [LocalHost]
error: Pipeline[CreateClusterPipeline] execute failed: Module[CopyImagesToRegistryModule] exec failed:
failed: [LocalHost] [PushManifest] exec failed after 1 retries: get manifest list failed by module cache
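
(Worth noting: with `registry.type: harbor`, the KubeSphere offline-install docs deploy the self-hosted registry in a separate step before `create cluster`; the report does not say whether this step was run. A sketch of that documented step:)

```shell
# Deploy the Harbor registry on the host in the "registry" role group
# (master / 192.168.12.10 in the config above) before creating the cluster.
./kk init registry -f config-sample.yaml -a kubesphere.tar.gz
```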
### Relevant log output
Same error output as quoted in the description above.
### Additional information
_No response_
With KubeKey v3.0.7, offline Kubernetes v1.23.10, and KubeSphere v3.3.2, this problem did not occur.
I think it may be caused by one of the following two issues (a sketch addressing both follows this list):
- The project that the images are pushed to was never created in the image registry (Harbor).
- A kubekey directory already existed in the current directory before installation, so files in the kubekey/images directory conflicted with the newly extracted files. You can try removing the kubekey directory and re-running the offline installation command.
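
A minimal sketch for both fixes, with host, credentials, and project name taken from the config above (adjust to your environment; this uses Harbor's v2 project API, the same call the docs' create_project_harbor.sh script makes):

```shell
# Cause 1: kk pushes images into the project named by namespaceOverride
# ("kubesphereio"), which must already exist in Harbor. Create it via the
# Harbor v2 API, using the admin credentials from config-sample.yaml:
curl -k -u "admin:Harbor12345" -X POST \
  -H "Content-Type: application/json" \
  "https://dockerhub.kubekey.local/api/v2.0/projects" \
  -d '{"project_name": "kubesphereio", "public": true}'

# Cause 2: remove the stale ./kubekey working directory so the artifact is
# re-extracted cleanly, then re-run the offline install:
rm -rf ./kubekey
./kk create cluster -f config-sample.yaml -a kubesphere.tar.gz --with-packages
```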
Hello, did you solve this problem? I have the same problem. Can you tell me how you solved it?
With KubeKey v3.0.7, offline Kubernetes v1.23.10, and KubeSphere v3.3.2, this problem did not occur.