
When I install Kubernetes with kk and an offline package

Open · Juminiy opened this issue 1 year ago · 0 comments

Describe the Bug

When KubeSphere runs the last step:

17:11:04 CST stdout: [k8s-master]
namespace/kubesphere-system unchanged
serviceaccount/ks-installer unchanged
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io unchanged
clusterrole.rbac.authorization.k8s.io/ks-installer unchanged
clusterrolebinding.rbac.authorization.k8s.io/ks-installer unchanged
deployment.apps/ks-installer unchanged
clusterconfiguration.installer.kubesphere.io/ks-installer created
17:11:04 CST success: [k8s-master]
Please wait for the installation to complete:   >>--->
17:11:05 CST command: [k8s-master]
sudo -E /bin/bash -c "/usr/local/bin/kubectl exec -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -- ls /kubesphere/playbooks/kubesphere_running"
17:11:05 CST stdout: [k8s-master]
error: unable to upgrade connection: container not found ("installer")
17:11:05 CST stderr: [k8s-master]
Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubectl exec -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -- ls /kubesphere/playbooks/kubesphere_running"

It executes the command repeatedly, and it fails every time:

sudo -E /bin/bash -c "/usr/local/bin/kubectl exec -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-installer -o jsonpath='{.items[0].metadata.name}') -- ls /kubesphere/playbooks/kubesphere_running"
17:36:29 CST stdout: [k8s-master]
ls: /kubesphere/playbooks/kubesphere_running: No such file or directory

It cannot find the file, and it seems to retry indefinitely.
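Since kk only polls for the marker file, the actual cause of "container not found" is easier to see from the installer pod itself. The following is a diagnostic sketch using standard kubectl commands; the namespace and label are taken from the kk log above, and the `ks-installer` Deployment name matches the applied manifests:

```shell
#!/bin/sh
# Diagnostic sketch: inspect the ks-installer pod instead of polling for
# /kubesphere/playbooks/kubesphere_running. Falls through harmlessly when
# no cluster is reachable from the current host.
NS="kubesphere-system"      # namespace from the kk log above
LABEL="app=ks-installer"    # label kk itself uses in its kubectl exec

if kubectl get ns "$NS" >/dev/null 2>&1; then
  # Pod phase and restart count usually explain "container not found"
  # (e.g. Pending, ImagePullBackOff, CrashLoopBackOff).
  kubectl get pod -n "$NS" -l "$LABEL" -o wide
  kubectl describe pod -n "$NS" -l "$LABEL" | tail -n 20
  # Stream the installer output directly.
  kubectl logs -n "$NS" deploy/ks-installer --tail=50
else
  echo "no reachable cluster; run this on the control-plane node"
fi
```

The `describe` events in particular show whether the image for the installer container could be pulled from the private registry.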

Versions Used

  1. KubeSphere: v3.4.1

  2. kubectl:

[root@k8s-master kub_env]# kubectl version
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.29.3

  3. kubekey:

[root@k8s-master kub_env]# ./kk version
kk version: &version.Info{Major:"3", Minor:"1", GitVersion:"v3.1.3", GitCommit:"b3bb8538ee7518282b733040b068639070c19709", GitTreeState:"clean", BuildDate:"2024-08-01T04:28:53Z", GoVersion:"go1.19.2", Compiler:"gc", Platform:"linux/amd64"}

The environment config yaml file is below:

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-master, address: 192.168.31.17, internalAddress: 192.168.31.17, user: root, password: "taveen@227"}
  - {name: k8s-node1, address: 192.168.31.18, internalAddress: 192.168.31.18, user: root, password: "taveen@227"}
  - {name: k8s-node2, address: 192.168.31.19, internalAddress: 192.168.31.19, user: root, password: "taveen@227"}
  roleGroups:
    etcd:
    - k8s-master
    - k8s-node1
    - k8s-node2
    control-plane:
    - k8s-master
    worker:
    - k8s-node1
    - k8s-node2
  controlPlaneEndpoint:
    ## Internal loadbalancer for apiservers
    internalLoadbalancer: haproxy

    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.29.3
    clusterName: cluster.local
    autoRenewCerts: true
    containerManager: containerd
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
    ## multus support. https://github.com/k8snetworkplumbingwg/multus-cni
    multusCNI:
      enabled: false
  registry:
    type: harbor
    auths:
      "192.168.31.242:8662":
        username: admin
        password: Harbor12345
        plainHTTP: true
    privateRegistry: "192.168.31.242:8662"
    namespaceOverride: "kubesphereio"
    registryMirrors: []
    insecureRegistries: ["192.168.31.242:8662"]

  addons: []



---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.4.1
spec:
  namespace_override: kubesphereio
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  local_registry: ""
  # dev_tag: ""
  etcd:
    monitoring: true
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    core:
      console:
        enableMultiLogin: true
        port: 30880
        type: NodePort
    # apiserver:
    #  resources: {}
    # controllerManager:
    #  resources: {}
    redis:
      enabled: false
      enableHA: false
      volumeSize: 2Gi
    openldap:
      enabled: false
      volumeSize: 2Gi
    minio:
      volumeSize: 20Gi
    monitoring:
      # type: external
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
      GPUMonitoring:
        enabled: false
    gpu:
      kinds:
      - resourceName: "nvidia.com/gpu"
        resourceType: "GPU"
        default: true
    es:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: false
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchHost: ""
      externalElasticsearchPort: ""
    opensearch:
      # master:
      #   volumeSize: 4Gi
      #   replicas: 1
      #   resources: {}
      # data:
      #   volumeSize: 20Gi
      #   replicas: 1
      #   resources: {}
      enabled: true
      logMaxAge: 7
      opensearchPrefix: whizard
      basicAuth:
        enabled: true
        username: "admin"
        password: "admin"
      externalOpensearchHost: ""
      externalOpensearchPort: ""
      dashboard:
        enabled: false
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: true
    # operator:
    #   resources: {}
    # webhook:
    #   resources: {}
  devops:
    enabled: false
    jenkinsCpuReq: 0.5
    jenkinsCpuLim: 1
    jenkinsMemoryReq: 4Gi
    jenkinsMemoryLim: 4Gi
    jenkinsVolumeSize: 16Gi
  events:
    enabled: true
    # operator:
    #   resources: {}
    # exporter:
    #   resources: {}
    ruler:
      enabled: true
      replicas: 2
    #   resources: {}
  logging:
    enabled: true
    logsidecar:
      enabled: true
      replicas: 2
      # resources: {}
  metrics_server:
    enabled: true
  monitoring:
    storageClass: ""
    node_exporter:
      port: 9100
      # resources: {}
    # kube_rbac_proxy:
    #   resources: {}
    # kube_state_metrics:
    #   resources: {}
    # prometheus:
    #   replicas: 1
    #   volumeSize: 20Gi
    #   resources: {}
    #   operator:
    #     resources: {}
    # alertmanager:
    #   replicas: 1
    #   resources: {}
    # notification_manager:
    #   resources: {}
    #   operator:
    #     resources: {}
    #   proxy:
    #     resources: {}
    gpu:
      nvidia_dcgm_exporter:
        enabled: false
        # resources: {}
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: true
    ippool:
      type: calico
    topology:
      type: weave-scope
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
    istio:
      components:
        ingressGateways:
        - name: istio-ingressgateway
          enabled: false
        cni:
          enabled: false
  edgeruntime:
    enabled: false
    kubeedge:
      enabled: false
      cloudCore:
        cloudHub:
          advertiseAddress:
            - ""
        service:
          cloudhubNodePort: "30000"
          cloudhubQuicNodePort: "30001"
          cloudhubHttpsNodePort: "30002"
          cloudstreamNodePort: "30003"
          tunnelNodePort: "30004"
        # resources: {}
        # hostNetWork: false
      iptables-manager:
        enabled: true
        mode: "external"
        # resources: {}
      # edgeService:
      #   resources: {}
  gatekeeper:
    enabled: false
    # controller_manager:
    #   resources: {}
    # audit:
    #   resources: {}
  terminal:
    timeout: 600


  1. I have already packaged the all-in-one tar.gz.
  2. I have tried the installation on Ubuntu 22.04 and Ubuntu 20.04 (amd64 / Linux), and it succeeds.
  3. The bug occurs on the Kylin (银河麒麟) Server V10 operating system: https://product.kylinos.cn/productCase/42/25

[root@k8s-master /]# uname -a
Linux k8s-master 4.19.90-89.11.v2401.ky10.x86_64 #1 SMP Tue May 7 18:33:01 CST 2024 x86_64 x86_64 x86_64 GNU/Linux

How To Reproduce

Steps to reproduce the behavior:

  1. ksp-v3.4.1-v1.29-artifact.tar.gz is an all-in-one offline package file that has already been tested
  2. Create the config yaml file with kubekey:

./kk create config --with-kubesphere v3.4.1 --with-kubernetes v1.29.3 -f ksp-v1293-offline.yaml

  3. Install the dependencies on k8s-master, k8s-worker-1 and k8s-worker-2 (yum on Kylin, apt on Ubuntu):

sudo apt install socat ebtables conntrack ipset ipvsadm

  4. Execute the command on the master node:

./kk create cluster -f ksp-v1293-offline.yaml -a ksp-v3.4.1-v1.29-artifact.tar.gz --skip-push-images --debug
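Because `--skip-push-images` is passed, kk trusts that every image already sits in the Harbor registry from the config above. Before rerunning, it may be worth confirming that with the standard Docker Registry v2 API; the registry address, credentials and project come from the cluster config, while the `ks-installer` repository name is an assumption for illustration:

```shell
#!/bin/sh
# Sketch: verify the offline images really landed in the plain-HTTP Harbor
# registry, since --skip-push-images means kk will not push them itself.
REGISTRY="192.168.31.242:8662"   # privateRegistry from the cluster config
PROJECT="kubesphereio"           # namespaceOverride from the cluster config

if curl -s -m 3 -u admin:Harbor12345 "http://$REGISTRY/v2/_catalog"; then
  # Tags for one repository; the exact repository name is an assumption.
  curl -s -m 3 -u admin:Harbor12345 \
    "http://$REGISTRY/v2/$PROJECT/ks-installer/tags/list"
else
  echo "registry not reachable from this host"
fi
```

If the catalog is missing the expected repositories, the installer image pull will fail on every node and the "container not found" loop above is the symptom.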

Expected behavior

The installation completes: the ks-installer pod runs and creates /kubesphere/playbooks/kubesphere_running.

Juminiy · Aug 07 '24 09:08