
Images are still pulled from docker.io after changing the image registry in the config file to docker.1ms.run

Open mupeifeiyi opened this issue 2 weeks ago • 5 comments

What version of KubeKey has the issue?

4.0.2

What is your OS environment?

ubuntu 22.04

KubeKey config file

apiVersion: kubekey.kubesphere.io/v1
kind: Config
spec:
  download:
    # if set as "cn", so that online downloads will try to use available domestic sources whenever possible.
    zone: "cn"
  kubernetes:
    kube_version: v1.33.1
    # helm binary
    helm_version: v3.18.5
  etcd:
    # etcd binary
    etcd_version: v3.5.11
  image_registry:
    # ========== image registry ==========
    # keepalived image tag. Used for load balancing when there are multiple image registry nodes.
    keepalived_version: 2.0.20
    # ========== image registry: harbor ==========
    # harbor image tag
    harbor_version: v2.10.1
    # docker-compose binary
    dockercompose_version: v2.20.3
    # ========== image registry: docker-registry ==========
    # docker-registry image tag
    docker_registry_version: 2.8.3
  cri:
    # support: containerd,docker
    container_manager: containerd
    sandbox_image:
      tag: "3.9"
    # ========== cri ==========
    # crictl binary
    crictl_version: v1.33.0
    # ========== cri: docker ==========
    # docker binary
    docker_version: 24.0.7
    # cridockerd. Required when kube_version is greater than 1.24
    cridockerd_version: v0.3.1
    # ========== cri: containerd ==========
    # containerd binary
    containerd_version: v1.7.6
    # runc binary
    runc_version: v1.1.7
  cni:
    ipv6_support: false
    multus:
      image:
        tag: v4.3.0
    # ========== cni ==========
    # cni_plugins binary (optional)
    # cni_plugins_version: v1.2.0
    # ========== cni: calico ==========
    # calicoctl binary
    calico_version: v3.28.2
    # ========== cni: cilium ==========
    # cilium helm
    cilium_version: 1.18.3
    # ========== cni: kubeovn ==========
    # kubeovn helm
    kubeovn_version: 1.13.0
    # ========== cni: hybridnet ==========
    # hybridnet helm
    hybridnet_version: 0.6.8
  storage_class:
    # ========== storageclass ==========
    # ========== storageclass: local ==========
    local:
      provisioner_image:
        tag: 4.2.0
      linux_utils_image:
        tag: 4.2.0
    # ========== storageclass: nfs ==========
    # nfs provisioner helm version
    nfs_provisioner_version: 4.3.0
  dns:
    dns_image:
      tag: v1.12.1
    dns_cache_image:
      tag: 1.24.0
  image_manifests:
    - docker.1ms.run/calico/apiserver:v3.28.2
    - docker.1ms.run/calico/cni:v3.28.2
    - docker.1ms.run/calico/ctl:v3.28.2
    - docker.1ms.run/calico/csi:v3.28.2
    - docker.1ms.run/calico/kube-controllers:v3.28.2
    - docker.1ms.run/calico/node-driver-registrar:v3.28.2
    - docker.1ms.run/calico/node:v3.28.2
    - docker.1ms.run/calico/pod2daemon-flexvol:v3.28.2
    - docker.1ms.run/calico/typha:v3.28.2
    - docker.1ms.run/kubesphere/coredns:v1.12.1
    - docker.1ms.run/kubesphere/k8s-dns-node-cache:1.24.0
    - docker.1ms.run/kubesphere/kube-apiserver:v1.33.1
    - docker.1ms.run/kubesphere/kube-controller-manager:v1.33.1
    - docker.1ms.run/kubesphere/kube-proxy:v1.33.1
    - docker.1ms.run/kubesphere/kube-scheduler:v1.33.1
    - docker.1ms.run/kubesphere/pause:3.9
    - docker.1ms.run/openebs/linux-utils:4.2.0
    - docker.1ms.run/openebs/provisioner-localpv:4.2.0
    - quay.io/tigera/operator:v1.34.5
    - docker.1ms.run/library/haproxy:2.9.6-alpine

A clear and concise description of what happened.

[screenshot attached]

Relevant log output

Dec  8 02:20:44 meshery-master systemd[1]: Finished Refresh fwupd metadata and update motd.
Dec  8 02:20:45 meshery-master systemd[1]: Reloading.
Dec  8 02:20:45 meshery-master systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Dec  8 02:20:46 meshery-master systemd[1]: Reloading.
Dec  8 02:20:46 meshery-master systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
Dec  8 02:20:46 meshery-master containerd[36641]: time="2025-12-08T10:20:46.842415836+08:00" level=info msg="PullImage \"docker.io/kubesphere/kube-apiserver:v1.33.1\""
Dec  8 02:20:47 meshery-master chronyd[37353]: Selected source 111.230.189.174 (cn.pool.ntp.org)
Dec  8 02:20:47 meshery-master chronyd[37353]: System clock TAI offset set to 37 seconds
Dec  8 02:21:12 meshery-master systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  8 02:21:16 meshery-master containerd[36641]: time="2025-12-08T10:21:16.843810885+08:00" level=info msg="trying next host" error="failed to do request: Head \"https://registry-1.docker.io/v2/kubesphere/kube-apiserver/manifests/v1.33.1\": dial tcp 185.60.216.36:443: i/o timeout" host=registry-1.docker.io

Additional information

No response

mupeifeiyi · Dec 08 '25

The image_manifests field in config.yaml is used for building offline packages. What you need to change instead is the registry setting. Option 1: add the following to config.yaml:

spec:
  image_registry:
    dockerio_registry: docker.1ms.run

Then run kk create cluster -i inventory.yaml -c config.yaml. Option 2: run kk create cluster -i inventory.yaml -c config.yaml --set image_registry.dockerio_registry=docker.1ms.run directly.
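For reference, a minimal sketch of what this setting is expected to do, inferred from the pull log above and from the containerd result reported later in this thread (the exact rewrite behaviour is an assumption, not verified against the KubeKey source):

# Effective setting (Option 1 in config.yaml and Option 2 via --set produce the same value)
spec:
  image_registry:
    dockerio_registry: docker.1ms.run   # mirror used in place of docker.io
# Assumed effect on image references that default to docker.io:
#   docker.io/kubesphere/kube-apiserver:v1.33.1 -> docker.1ms.run/kubesphere/kube-apiserver:v1.33.1
#   docker.io/kubesphere/pause:3.9              -> docker.1ms.run/kubesphere/pause:3.9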

redscholar · Dec 08 '25

(quoting the previous reply)

OK, understood. I tried both methods. Option 1 did not take effect after the change; Option 2 worked. However, after running Option 2, the generated containerd config file still had sandbox_image = "docker.io/kubesphere/pause:3.9", which I had to change manually; after that everything was fine.

mupeifeiyi · Dec 08 '25

For Option 1, the nesting level may have been set incorrectly. The full config.yaml structure should be:

apiVersion: kubekey.kubesphere.io/v1
kind: Config
spec:
  image_registry:
    dockerio_registry: docker.1ms.run

As for sandbox_image: was containerd already installed before? If so, you need to run kk delete cluster -i inventory.yaml --all first to clean up containerd.

redscholar · Dec 08 '25

(quoting the previous reply)

Following your suggestion, I deleted the cluster with --all. The full config.yaml is as follows:

apiVersion: kubekey.kubesphere.io/v1
kind: Config
spec:
  image_registry:
    dockerio_registry: docker.1ms.run
  download:
    # if set as "cn", so that online downloads will try to use available domestic sources whenever possible.
    zone: "cn"
  kubernetes:
    kube_version: v1.33.1
    # helm binary
    helm_version: v3.18.5
  etcd:
    # etcd binary
    etcd_version: v3.5.11
  image_registry:
    # ========== image registry ==========
    # keepalived image tag. Used for load balancing when there are multiple image registry nodes.
    keepalived_version: 2.0.20
    # ========== image registry: harbor ==========
    # harbor image tag
    harbor_version: v2.10.1
    # docker-compose binary
    dockercompose_version: v2.20.3
    # ========== image registry: docker-registry ==========
    # docker-registry image tag
    docker_registry_version: 2.8.3
  cri:
    # support: containerd,docker
    container_manager: containerd
    sandbox_image:
      tag: "3.9"
    # ========== cri ==========
    # crictl binary
    crictl_version: v1.33.0
    # ========== cri: docker ==========
    # docker binary
    docker_version: 24.0.7
    # cridockerd. Required when kube_version is greater than 1.24
    cridockerd_version: v0.3.1
    # ========== cri: containerd ==========
    # containerd binary
    containerd_version: v1.7.6
    # runc binary
    runc_version: v1.1.7
  cni:
    ipv6_support: false
    multus:
      image:
        tag: v4.3.0
    # ========== cni ==========
    # cni_plugins binary (optional)
    # cni_plugins_version: v1.2.0
    # ========== cni: calico ==========
    # calicoctl binary
    calico_version: v3.28.2
    # ========== cni: cilium ==========
    # cilium helm
    cilium_version: 1.18.3
    # ========== cni: kubeovn ==========
    # kubeovn helm
    kubeovn_version: 1.13.0
    # ========== cni: hybridnet ==========
    # hybridnet helm
    hybridnet_version: 0.6.8
  storage_class:
    # ========== storageclass ==========
    # ========== storageclass: local ==========
    local:
      provisioner_image:
        tag: 4.2.0
      linux_utils_image:
        tag: 4.2.0
    # ========== storageclass: nfs ==========
    # nfs provisioner helm version
    nfs_provisioner_version: 4.3.0
  dns:
    dns_image:
      tag: v1.12.1
    dns_cache_image:
      tag: 1.24.0
  image_manifests:
    - docker.1ms.run/calico/apiserver:v3.28.2
    - docker.1ms.run/calico/cni:v3.28.2
    - docker.1ms.run/calico/ctl:v3.28.2
    - docker.1ms.run/calico/csi:v3.28.2
    - docker.1ms.run/calico/kube-controllers:v3.28.2
    - docker.1ms.run/calico/node-driver-registrar:v3.28.2
    - docker.1ms.run/calico/node:v3.28.2
    - docker.1ms.run/calico/pod2daemon-flexvol:v3.28.2
    - docker.1ms.run/calico/typha:v3.28.2
    - docker.1ms.run/kubesphere/coredns:v1.12.1
    - docker.1ms.run/kubesphere/k8s-dns-node-cache:1.24.0
    - docker.1ms.run/kubesphere/kube-apiserver:v1.33.1
    - docker.1ms.run/kubesphere/kube-controller-manager:v1.33.1
    - docker.1ms.run/kubesphere/kube-proxy:v1.33.1
    - docker.1ms.run/kubesphere/kube-scheduler:v1.33.1
    - docker.1ms.run/kubesphere/pause:3.9
    - docker.1ms.run/openebs/linux-utils:4.2.0
    - docker.1ms.run/openebs/provisioner-localpv:4.2.0
    - quay.io/tigera/operator:v1.34.5
    - docker.1ms.run/library/haproxy:2.9.6-alpine

Then I ran ./kk create cluster -i inventory.yaml -c config.yaml and checked /etc/containerd/config.toml: it still pointed to docker.io. I deleted the cluster again with --all and recreated it with ./kk create cluster -i inventory.yaml -c config.yaml --set image_registry.dockerio_registry=docker.1ms.run; this time the sandbox_image in the containerd config changed to docker.1ms.run.

mupeifeiyi · Dec 09 '25

(quoting the previous comment and its full config.yaml)

I found the problem: there are two image_registry blocks under spec in your config.yaml. They need to be merged into one.
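For clarity, here is a sketch of the merged block, combining the dockerio_registry key with the version fields from the second image_registry section above (an illustration only; it assumes no other keys need to change):

spec:
  image_registry:
    # mirror used in place of docker.io
    dockerio_registry: docker.1ms.run
    # keepalived image tag. Used for load balancing when there are multiple image registry nodes.
    keepalived_version: 2.0.20
    # harbor image tag
    harbor_version: v2.10.1
    # docker-compose binary
    dockercompose_version: v2.20.3
    # docker-registry image tag
    docker_registry_version: 2.8.3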

[screenshot attached]

redscholar · Dec 09 '25

OK, I'll try it again when I have time. Thanks for your help.

mupeifeiyi · Dec 15 '25

(quoting the previous comment)

Tested it; it works now. Thanks for the answer.

mupeifeiyi · Dec 15 '25