
Kubernetes deployment

bobdivx opened this issue 10 months ago · 9 comments

Hi, I am desperately trying to deploy to my cluster; however, I keep getting this error in the logs:

I0121 00:44:15.473112 1 serving.go:342] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)

E0121 00:44:15.473519 1 run.go:74] "command failed" err="no valid serviceaccounts signing key file"
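
For context: "no valid serviceaccounts signing key file" typically means the embedded API server could not load a private key with which to sign service-account tokens. Assuming Karpor's server builds on the standard Kubernetes apiserver options, the relevant flags would look something like the sketch below; the binary name and file paths are illustrative assumptions, while the flag names are the standard kube-apiserver ones:

karpor-server \
  --service-account-key-file=/etc/karpor/sa.pub \                # public key(s) used to verify SA tokens (assumed path)
  --service-account-signing-key-file=/etc/karpor/sa.key \        # private key used to sign SA tokens (assumed path)
  --service-account-issuer=https://kubernetes.default.svc        # identifier of the token issuer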

I think I have set up all the secrets and services correctly.

Thanks in advance!

bobdivx commented on Jan 21 '25

Hey @bobdivx, would you mind sharing what commands you are using to install and what your environment looks like? And ideally, the steps to reproduce the issue?

kusionstack-bot commented on Jan 21 '25

@bobdivx May I ask if you encountered any issues during the deployment of Karpor?

elliotxx commented on Jan 21 '25

> @bobdivx May I ask if you encountered any issues during the deployment of Karpor?

What do you mean? Two of the replicas are in CrashLoopBackOff and one is Pending.

bobdivx commented on Jan 21 '25

> Hey @bobdivx, would you mind sharing what commands you are using to install and what your environment looks like? And ideally, the steps to reproduce the issue?

Deployment.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: karpor-sa
  namespace: monitoring
automountServiceAccountToken: true  # a top-level ServiceAccount field, not an annotation
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: karpor
  namespace: monitoring
  labels:
    app: karpor
spec:
  replicas: 2
  selector:
    matchLabels:
      app: karpor
  template:
    metadata:
      labels:
        app: karpor
    spec:
      serviceAccountName: karpor-sa
      containers:
        - name: karpor
          image: kusionstack/karpor:latest
          ports:
            - containerPort: 8080
          env:
            - name: KAPOR_NAMESPACE
              value: monitoring
            - name: KAPOR_CONFIG_PATH
              value: /etc/karpor/config.yaml
            - name: KUBERNETES_SERVICE_TOKEN
              valueFrom:
                secretKeyRef:
                  name: karpor-token
                  key: token
          volumeMounts:
            - name: karpor-config
              mountPath: /etc/karpor/config.yaml
              subPath: config.yaml
            - name: karpor-data
              mountPath: /var/lib/karpor
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1"
      volumes:
        - name: karpor-config
          configMap:
            name: karpor-config
        - name: karpor-data
          persistentVolumeClaim:
            claimName: karpor-data  # must match the PVC name defined below
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: karpor-data
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: karpor
  namespace: monitoring
spec:
  selector:
    app: karpor
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: karpor-config
  namespace: monitoring
data:
  config.yaml: |
    apiVersion: v1
    kind: Configuration
    metadata:
      description: "Api server"
    data:
      apiServer: "https://kubernetes.default.svc.cluster.local"

Service.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: karpor-sa
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: karpor-role
  namespace: monitoring
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: karpor-binding
  namespace: monitoring
subjects:
  - kind: ServiceAccount
    name: karpor-sa
    namespace: monitoring
roleRef:
  kind: Role
  name: karpor-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: v1
kind: Secret
metadata:
  name: karpor-token
  namespace: monitoring
  annotations:
    kubernetes.io/service-account.name: karpor-sa
type: kubernetes.io/service-account-token
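
For reference, one way to confirm that the token controller actually populated a kubernetes.io/service-account-token Secret like the one above (assuming kubectl access to the cluster):

kubectl -n monitoring describe secret karpor-token               # should list a 'token' data key
kubectl -n monitoring get secret karpor-token \
  -o jsonpath='{.data.token}' | base64 -d                        # prints the JWT if populated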

I tried forcing a token manually:

apiVersion: v1
kind: Secret
metadata:
  name: karpor-token
  namespace: monitoring
type: Opaque
data:
  token: #########
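
Note that data values in an Opaque Secret must be base64-encoded. To avoid encoding by hand, a secret like this can also be created imperatively; token.txt here is a placeholder for wherever the raw token lives:

kubectl -n monitoring create secret generic karpor-token \
  --from-literal=token="$(cat token.txt)"                        # kubectl handles the base64 encoding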

Configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: karpor-kubeconfig
  namespace: monitoring
data:
  kubeconfig: |
    apiVersion: v1
    kind: Config
    clusters:
    - name: fr1.briseteia.me
      cluster:
        server: https://127.0.0.1:7443
        certificate-authority-data: ###
    users:
    - name: admin@fr1.###.me
      user:
        client-certificate-data: ###
        client-key-data: ###
    contexts:
    - context:
        cluster: fr1.briseteia.me
        namespace: default
        user: admin@fr1.###.me
      name: admin@fr1.###.me
    current-context: admin@fr1.###.me
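
A quick way to sanity-check a kubeconfig stored in a ConfigMap like this is to extract it and let kubectl parse it; any YAML error, such as a mis-indented users: or contexts: key, will surface immediately:

kubectl -n monitoring get configmap karpor-kubeconfig \
  -o jsonpath='{.data.kubeconfig}' > /tmp/karpor-kubeconfig
kubectl --kubeconfig /tmp/karpor-kubeconfig config view --minify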

bobdivx commented on Jan 21 '25

@bobdivx Would you mind sharing which install tool and command you used, e.g. helm install karpor kusionstack/karpor? There may be some Helm values that Karpor doesn't handle correctly.

And it would be best if you could provide your Kubernetes version too, since Karpor's Helm charts currently do not support very old versions of Kubernetes (older than 1.24). However, you can still install Karpor on a newer Kubernetes cluster and then use it to manage those older clusters.

Finally, thank you for trying Karpor and providing feedback.

ruquanzhao commented on Jan 22 '25

I managed to move forward; here are the logs now:

I0122 07:54:57.519492 1 serving.go:342] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)

E0122 07:54:57.519930 1 run.go:74] "command failed" err="no valid serviceaccounts signing key file"

My deployment file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: karpor
  namespace: monitoring
  labels:
    app: karpor
spec:
  replicas: 2
  selector:
    matchLabels:
      app: karpor
  template:
    metadata:
      labels:
        app: karpor
    spec:
      serviceAccountName: karpor-sa
      volumes:
        - name: kube-api-access
          projected:
            sources:
              - serviceAccountToken:
                  expirationSeconds: 3607
                  path: token
              - configMap:
                  name: kube-root-ca.crt
                  items:
                    - key: ca.crt
                      path: ca.crt
              - downwardAPI:
                  items:
                    - path: namespace
                      fieldRef:
                        apiVersion: v1
                        fieldPath: metadata.namespace
        - name: karpor-config
          configMap:
            name: karpor-config
        - name: karpor-data
          persistentVolumeClaim:
            claimName: karpor-data
      containers:
        - name: karpor
          image: kusionstack/karpor:latest
          ports:
            - containerPort: 8080
          env:
            - name: KAPOR_NAMESPACE
              value: monitoring
            - name: KAPOR_CONFIG_PATH
              value: /etc/karpor/config.yaml
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: kube-api-access
              readOnly: true
              mountPath: /var/run/secrets/kubernetes.io/serviceaccount
            - name: karpor-config
              mountPath: /etc/karpor/config.yaml
              subPath: config.yaml
            - name: karpor-data
              mountPath: /var/lib/karpor
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /readyz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
          resources:
            requests:
              memory: "256Mi"
              cpu: "500m"
            limits:
              memory: "512Mi"
              cpu: "1"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: karpor-data
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Service
metadata:
  name: karpor
  namespace: monitoring
spec:
  selector:
    app: karpor
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  type: ClusterIP
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: karpor-config
  namespace: monitoring
data:
  config.yaml: |
    apiVersion: v1
    kind: Configuration
    metadata:
      description: "Karpor-specific configuration"
    data:
      apiServer: "https://kubernetes.default.svc.cluster.local"
      example_key: "example_value"
      another_key: "another_value"

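For what it's worth, whether the projected token volume is mounted correctly can be checked from inside a running pod (this assumes at least one replica gets past the crash loop):

kubectl -n monitoring exec deploy/karpor -- \
  ls /var/run/secrets/kubernetes.io/serviceaccount             # expect: ca.crt  namespace  token
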
bobdivx commented on Jan 22 '25

@bobdivx Hi, may I ask how you installed it with Helm? Could you provide your helm install command? The latest version of Karpor's default startup mode no longer requires RBAC, so this error is somewhat strange. :(

Also, could you share the status and logs of all pods in your namespace? That would help us troubleshoot the issue.

kubectl -n monitoring get pod

elliotxx commented on Jan 23 '25

Hi,

Excerpt from gotk-components.yaml:

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: flux-system
    app.kubernetes.io/part-of: flux
    app.kubernetes.io/version: v2.4.0
  name: cluster-reconciler-flux-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kustomize-controller
  namespace: flux-system
- kind: ServiceAccount
  name: helm-controller
  namespace: flux-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: flux-system
    app.kubernetes.io/part-of: flux
    app.kubernetes.io/version: v2.4.0
  name: crd-controller-flux-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: crd-controller-flux-system
subjects:
- kind: ServiceAccount
  name: kustomize-controller
  namespace: flux-system
- kind: ServiceAccount
  name: helm-controller
  namespace: flux-system
- kind: ServiceAccount
  name: source-controller
  namespace: flux-system
- kind: ServiceAccount
  name: notification-controller
  namespace: flux-system
- kind: ServiceAccount
  name: image-reflector-controller
  namespace: flux-system
- kind: ServiceAccount
  name: image-automation-controller
  namespace: flux-system
---

flux.yaml

apiVersion: monitoring.coreos.com/v1  # apiVersion/kind were cut off in the pasted excerpt;
kind: PodMonitor                      # podMetricsEndpoints implies a prometheus-operator PodMonitor
metadata:
  name: flux-system
  namespace: flux-system
  labels:
    app.kubernetes.io/part-of: flux
    app.kubernetes.io/component: monitoring
spec:
  namespaceSelector:
    matchNames:
      - flux-system
  selector:
    matchExpressions:
      - key: app
        operator: In
        values:
          - helm-controller
          - source-controller
          - kustomize-controller
          - notification-controller
          - image-automation-controller
          - image-reflector-controller
  podMetricsEndpoints:
    - port: http-prom
      relabelings:
        # https://github.com/prometheus-operator/prometheus-operator/issues/4816
        - sourceLabels: [__meta_kubernetes_pod_phase]
          action: keep
          regex: Running

Karpor logs:

I0123 07:23:48.502430       1 serving.go:342] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
E0123 07:23:48.502792       1 run.go:74] "command failed" err="no valid serviceaccounts signing key file"

> kubectl -n monitoring get pod

kubectl -n monitoring get pod
NAME                                                              READY   STATUS             RESTARTS          AGE
karpor-55779fd66c-svjm8                                           0/1     CrashLoopBackOff   279 (3m17s ago)   23h
karpor-5578d88f86-6t2th                                           0/1     Pending            0                 23h
karpor-847ffc8474-49vl5                                           0/1     Pending            0                 23h
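
As an aside, these three pods carry three different ReplicaSet hashes, which suggests several Deployment revisions are stuck at once; also, a ReadWriteOnce PVC shared across replicas: 2 can leave extra pods Pending when they schedule to a different node. The usual next steps for a listing like this (pod names copied from the output above):

kubectl -n monitoring describe pod karpor-5578d88f86-6t2th      # events explain why it is Pending
kubectl -n monitoring logs karpor-55779fd66c-svjm8 --previous   # logs from the last crashed run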

bobdivx commented on Jan 23 '25

> Hi, I am desperately trying to deploy to my cluster; however, I keep getting this error in the logs:
>
> I0121 00:44:15.473112 1 serving.go:342] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
>
> E0121 00:44:15.473519 1 run.go:74] "command failed" err="no valid serviceaccounts signing key file"
>
> I think I have set up all the secrets and services correctly.
>
> Thanks in advance!

I'm not sure how you deployed the kusion server. Please use our official Helm chart for deployment, and if you run into any problems, let me know. You may follow the instructions in our docs.
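
A minimal sketch of the Helm install, assuming the standard KusionStack chart repository URL:

helm repo add kusionstack https://kusionstack.github.io/charts  # chart repo URL is an assumption; check the docs
helm repo update
helm install karpor kusionstack/karpor

The chart then provisions the server certificates and service-account signing material itself, which is presumably what the hand-written manifests above are missing.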

Yangyang96 commented on Jan 23 '25