
Added probing support for multiple K8S clusters

Open · ZYao123 opened this issue 2 years ago

Issue number of the reported bug or feature request: #

Describe your changes

- `KUBECONFIG` can now be set to a directory: Goldpinger will read all the kubeconfig files in that directory and probe the network of each cluster through the exposed ports (see the sketch below)
- Update Go version to 1.19
- Format code
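
For context, here is a minimal sketch of what directory-valued `KUBECONFIG` handling could look like with client-go. This is an illustration of the idea only, not the actual PR code; the package and function names are assumptions.

```go
// Minimal sketch of the directory-based KUBECONFIG handling described above.
// Package, function, and variable names are illustrative, not goldpinger's code.
package multicluster

import (
	"os"
	"path/filepath"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// BuildClients returns one Kubernetes clientset per kubeconfig file when
// KUBECONFIG points at a directory, or a single clientset when it points
// at a regular file.
func BuildClients(kubeconfig string) ([]*kubernetes.Clientset, error) {
	info, err := os.Stat(kubeconfig)
	if err != nil {
		return nil, err
	}

	// Collect the kubeconfig paths to load.
	paths := []string{kubeconfig}
	if info.IsDir() {
		entries, err := os.ReadDir(kubeconfig)
		if err != nil {
			return nil, err
		}
		paths = paths[:0]
		for _, e := range entries {
			if !e.IsDir() {
				paths = append(paths, filepath.Join(kubeconfig, e.Name()))
			}
		}
	}

	// Build a clientset for each cluster; probing then iterates over all of them.
	clients := make([]*kubernetes.Clientset, 0, len(paths))
	for _, p := range paths {
		cfg, err := clientcmd.BuildConfigFromFlags("", p)
		if err != nil {
			return nil, err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return nil, err
		}
		clients = append(clients, cs)
	}
	return clients, nil
}
```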

Testing performed

I added environment variables:

```yaml
            - name: PORT
              value: "8080"
            - name: KUBECONFIG
              value: "/root/k8s"
            - name: CLIENT_PORT_OVERRIDE
              value: "30081"
            - name: USE_HOST_IP
              value: "true"
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: DISPLAY_NODENAME
              value: "true"

Mount the kubeconfig files under the /root/k8s directory, and Goldpinger will perform network checks across multiple K8S clusters.
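
As a rough check (assuming goldpinger's standard HTTP API), you can then query the NodePort on any node, e.g. `curl http://<node-ip>:30081/check_all`, and the response should list pods from every cluster whose kubeconfig is mounted under /root/k8s.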

Additional context
k8s file:

```yaml
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: goldpinger
  namespace: alioth
  labels:
    app: goldpinger
spec:
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: goldpinger
  template:
    metadata:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
      labels:
        app: goldpinger
    spec:
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        fsGroup: 2000
      containers:
        - name: goldpinger
          env:
            - name: PORT
              value: "8080"
            - name: KUBECONFIG
              value: "/root/k8s"
            - name: CLIENT_PORT_OVERRIDE
              value: "30081"
            - name: USE_HOST_IP
              value: "true"
            - name: HOSTNAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
            - name: DISPLAY_NODENAME
              value: "true"
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
          image: goldpinger:v3.4.0
          imagePullPolicy: Always
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
          resources:
            limits:
              memory: 4000Mi
            requests:
              cpu: 1m
              memory: 40Mi
          ports:
            - containerPort: 8080
              name: http
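          # Each kubeconfig from the ConfigMap below is mounted as its own file,
          # so /root/k8s ends up holding one file per target cluster (aaa, bbb).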
          volumeMounts:
            - mountPath: /root/k8s/aaa
              name: config
              subPath: aaa
            - mountPath: /root/k8s/bbb
              name: config
              subPath: bbb
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 20
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 20
            periodSeconds: 5
      volumes:
        - name: config
          configMap:
            defaultMode: 420
            name: goldpinger-configmap
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: goldpinger-configmap
  namespace: alioth
  labels:
    app: goldpinger
data:
  aaa: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: **************
        server: **************
      name: cluster.local
    contexts:
    - context:
        cluster: cluster.local
        user: kubernetes-admin
      name: [email protected]
    current-context: [email protected]
    kind: Config
    preferences: {}
    users:
    - name: kubernetes-admin
      user:
        client-certificate-data: **************
        client-key-data: **************
  bbb: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: **************
        server: **************
      name: cluster.local
    contexts:
    - context:
        cluster: cluster.local
        user: kubernetes-admin
      name: [email protected]
    current-context: [email protected]
    kind: Config
    preferences: {}
    users:
    - name: kubernetes-admin
      user:
        client-certificate-data: **************
        client-key-data: **************
---
apiVersion: v1
kind: Service
metadata:
  name: goldpinger
  namespace: alioth
  labels:
    app: goldpinger
spec:
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
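      # Must match CLIENT_PORT_OVERRIDE, so probes from other clusters reach
      # each pod at <node-ip>:30081 (together with USE_HOST_IP=true).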
      nodePort: 30081
      name: http
  selector:
    app: goldpinger
```

ZYao123 · Sep 28 '22

Hey @ZYao123 thanks for the commit, this is a pretty cool feature!

Before I can approve, I'll need you to force push the commit with -s, like so:

1. Ensure you have a local copy of your branch by [checking out the pull request locally via command line](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/checking-out-pull-requests-locally).
2. In your local branch, run: `git rebase HEAD~1 --signoff`
3. Force push your changes to overwrite the branch: `git push --force-with-lease origin mulit_k8s`

seeker89 · Oct 05 '22

> Hey @ZYao123 thanks for the commit, this is a pretty cool feature!
>
> Before I can approve, I'll need you to force push the commit with -s, like so:
>
> 1. Ensure you have a local copy of your branch by [checking out the pull request locally via command line](https://help.github.com/en/github/collaborating-with-issues-and-pull-requests/checking-out-pull-requests-locally).
> 2. In your local branch, run: `git rebase HEAD~1 --signoff`
> 3. Force push your changes to overwrite the branch: `git push --force-with-lease origin mulit_k8s`

Did I do the right thing?

ZYao123 · Oct 19 '22

@ZYao123 yes, the DCO is happy!

However the CI doesn't build - would you mind fixing this? Thanks!

seeker89 · Oct 24 '22

> @ZYao123 yes, the DCO is happy!
>
> However the CI doesn't build - would you mind fixing this? Thanks!

I ran `go mod tidy`, which should hopefully fix it. Can you try again? I think it's OK now.

ZYao123 · Oct 25 '22