
Create release Compliant Kubernetes Apps v0.41.0


Overview

[!note] Whenever you need to switch access from operator admin to [email protected], prefer to re-login by clearing the ~/.kube/cache/oidc-login cache instead of impersonating [email protected].
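
For reference, the cache mentioned above can be cleared with, e.g.:

    rm -rf ~/.kube/cache/oidc-login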

  • Pre-QA steps
  • Install QA steps
  • Upgrade QA steps
  • Post-QA steps
  • Release steps

# Pre-QA steps

# Install QA steps

Apps install scenario

Infrastructure provider

  • [x] Azure
  • [ ] Elastx
  • [ ] Safespring
  • [ ] UpCloud

Configuration

  • [x] Flavor - Prod

  • [x] Dex IdP - Google

  • [x] Dex Static User - Enabled and [email protected] added as an application developer

    Commands
    # configure
    yq4 -i '.grafana.user.oidc.allowedDomains += ["example.com"]' "${CK8S_CONFIG_PATH}/sc-config.yaml"
    yq4 -i 'with(.opensearch.extraRoleMappings[]; with(select(.mapping_name != "all_access"); .definition.users += ["[email protected]"]))' "${CK8S_CONFIG_PATH}/sc-config.yaml"
    yq4 -i '.user.adminUsers += ["[email protected]"]' "${CK8S_CONFIG_PATH}/wc-config.yaml"
    yq4 -i '.dex.enableStaticLogin = true' "${CK8S_CONFIG_PATH}/sc-config.yaml"
    
    # apply
    ./bin/ck8s apply sc
    ./bin/ck8s apply wc
    
  • [x] Rclone sync - Enabled and preferably configured to a different infrastructure provider.

  • [x] Set the environment variable NAMESPACE to an application developer namespace (this cannot be a subnamespace)

  • [x] Set the environment variable DOMAIN to the environment domain
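
    For example (the values below are placeholders):

    export NAMESPACE=<application-developer-namespace>   # must not be a subnamespace
    export DOMAIN=<environment-domain>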

Automated tests

[!note] As platform administrator

  • [x] Successful ./bin/ck8s test sc|wc
  • [ ] From tests/ successful make build-main
  • [ ] From tests/ successful make run-end-to-end
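
A sketch of running these checks from the apps repository root, using the same commands as in the items above:

    # automated tests
    ./bin/ck8s test sc
    ./bin/ck8s test wc

    # end-to-end tests, from the tests/ directory
    (cd tests && make build-main)
    (cd tests && make run-end-to-end)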

Kubernetes access

[!note] As platform administrator

  • [x] Can login as platform administrator via Dex with IdP

[!note] As application developer [email protected]

  • [x] Can login as application developer [email protected] via Dex with static user

  • [x] Can list access

    kubectl -n "${NAMESPACE}" auth can-i --list
    
  • [x] Can delegate admin access

    $ kubectl -n "${NAMESPACE}" edit rolebinding extra-workload-admins
      # Add some subject
      subjects:
        # You can specify more than one "subject"
        - kind: User
          name: jane # "name" is case sensitive
          apiGroup: rbac.authorization.k8s.io
    
  • [x] Can delegate view access

    $ kubectl edit clusterrolebinding extra-user-view
      # Add some subject
      subjects:
        # You can specify more than one "subject"
        - kind: User
          name: jane # "name" is case sensitive
          apiGroup: rbac.authorization.k8s.io
    
  • [x] Cannot run as root by default

    kubectl apply -n "${NAMESPACE}" -f - <<EOF
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-root-nginx
    spec:
      podSelector:
        matchLabels:
          app: root-nginx
      policyTypes:
        - Ingress
        - Egress
      ingress:
        - {}
      egress:
        - {}
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        app: root-nginx
      name: root-nginx
    spec:
      restartPolicy: Never
      containers:
        - name: nginx
          image: nginx:stable
          resources:
            requests:
              memory: 64Mi
              cpu: 250m
            limits:
              memory: 128Mi
              cpu: 500m
    EOF
    

Hierarchical Namespaces

[!note] As application developer [email protected]

  • [x] Can create a subnamespace by following the application developer docs

    Commands
    kubectl apply -n "${NAMESPACE}" -f - <<EOF
    apiVersion: hnc.x-k8s.io/v1alpha2
    kind: SubnamespaceAnchor
    metadata:
      name: ${NAMESPACE}-qa-test
    EOF
    
    kubectl get ns "${NAMESPACE}-qa-test"
    
    kubectl get subns -n "${NAMESPACE}" "${NAMESPACE}-qa-test" -o yaml
    
  • [x] Ensure the default roles, rolebindings, and networkpolicies have propagated to the subnamespace

    Commands
    kubectl get role,rolebinding,netpol -n "${NAMESPACE}"
    kubectl get role,rolebinding,netpol -n "${NAMESPACE}-qa-test"
    

Harbor

[!note] As application developer [email protected]

Gatekeeper

[!note] As application developer [email protected]

  • [x] Can list OPA rules

    kubectl get constraints
    

[!note] Using the user demo helm chart

Set NAMESPACE to an application developer namespace. Set PUBLIC_DOCS_PATH to the path of the public docs repo.

  • [x] With invalid image repository, try to deploy, should warn due to constraint

    helm -n "${NAMESPACE}" upgrade --atomic --install demo "${PUBLIC_DOCS_PATH}/user-demo/deploy/ck8s-user-demo" \
        --set image.repository="${REGISTRY_PROJECT}/ck8s-user-demo" \
        --set image.tag="${TAG}" \
        --set ingress.hostname="demoapp.${DOMAIN}"
    
  • [x] With invalid image tag, try to deploy, should fail due to constraint

    helm -n "${NAMESPACE}" upgrade --atomic --install demo "${PUBLIC_DOCS_PATH}/user-demo/deploy/ck8s-user-demo" \
        --set image.repository="harbor.${DOMAIN}/${REGISTRY_PROJECT}/ck8s-user-demo" \
        --set image.tag=latest \
        --set ingress.hostname="demoapp.${DOMAIN}"
    
  • [x] With unset networkpolicies, try to deploy, should warn due to constraint

    helm -n "${NAMESPACE}" upgrade --atomic --install demo "${PUBLIC_DOCS_PATH}/user-demo/deploy/ck8s-user-demo" \
        --set image.repository="harbor.${DOMAIN}/${REGISTRY_PROJECT}/ck8s-user-demo" \
        --set image.tag="${TAG}" \
        --set ingress.hostname="demoapp.${DOMAIN}" \
        --set networkPolicy.enabled=false
    
  • [x] With unset resources, try to deploy, should fail due to constraint

    helm -n "${NAMESPACE}" upgrade --atomic --install demo "${PUBLIC_DOCS_PATH}/user-demo/deploy/ck8s-user-demo" \
        --set image.repository="harbor.${DOMAIN}/${REGISTRY_PROJECT}/ck8s-user-demo" \
        --set image.tag="${TAG}" \
        --set ingress.hostname="demoapp.${DOMAIN}" \
        --set resources.requests=null
    
  • [x] With valid values, try to deploy, should succeed

    helm -n "${NAMESPACE}" upgrade --atomic --install demo "${PUBLIC_DOCS_PATH}/user-demo/deploy/ck8s-user-demo" \
        --set image.repository="harbor.${DOMAIN}/${REGISTRY_PROJECT}/ck8s-user-demo" \
        --set image.tag="${TAG}" \
        --set ingress.hostname="demoapp.${DOMAIN}"
    

cert-manager and ingress-nginx

[!note] As platform administrator

  • [x] All certificates ready including user demo
  • [x] All ingresses ready including user demo
    • [x] Endpoints are reachable
    • [x] Status includes correct IP addresses
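
A minimal sketch of how these can be verified, assuming platform administrator access via the ops kubeconfig (the user demo Ingress hostname is demoapp.${DOMAIN}):

    ./bin/ck8s ops kubectl sc get certificates,ingress -A
    ./bin/ck8s ops kubectl wc get certificates,ingress -A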

Metrics

[!note] As platform administrator

  • [x] Can login to platform administrator Grafana via Dex with IdP
  • [x] Dashboards are available and viewable
  • [x] Metrics are available from all clusters

[!note] As application developer [email protected]

Alerts

[!note] As platform administrator

  • [ ] No alerts firing except Watchdog, CPUThrottlingHigh, and FalcoAlert
    • Can be seen in the Alerts section of the platform administrator Grafana

[!note] As application developer [email protected]

Logs

[!note] As platform administrator

  • [x] Can login to OpenSearch Dashboards via Dex with IdP
  • [x] Indices created (authlog, kubeaudit, kubernetes, other)
  • [x] Indices managed (authlog, kubeaudit, kubernetes, other)
  • [x] Logs available (authlog, kubeaudit, kubernetes, other)
  • [x] Snapshots configured

[!note] As application developer [email protected]

  • [x] Can login to OpenSearch Dashboards via Dex with static user
  • [x] Welcome dashboard presented first
  • [x] Logs available (kubeaudit, kubernetes)
  • [x] CISO dashboards available and working

Falco

[!note] As platform administrator

  • [x] Deploy the falcosecurity/event-generator to generate events in wc

    Commands
    # Install
    
    kubectl create namespace event-generator
    kubectl label namespace event-generator owner=operator
    
    helm repo add falcosecurity https://falcosecurity.github.io/charts
    helm repo update
    
    helm -n event-generator install event-generator falcosecurity/event-generator \
        --set securityContext.runAsNonRoot=true \
        --set securityContext.runAsGroup=65534 \
        --set securityContext.runAsUser=65534 \
        --set podSecurityContext.fsGroup=65534 \
        --set config.actions=""
    
    # Uninstall
    
    helm -n event-generator uninstall event-generator
    kubectl delete namespace event-generator
    
  • [x] Logs are available in OpenSearch Dashboards

  • [x] Logs are relevant

Network policies

  • [x] No dropped packets in NetworkPolicy Grafana dashboard

Take backups and snapshots

[!note] As platform administrator

Prepare items to test disaster recovery:

  • [ ] Login to Harbor and create a project and robot account:

    xdg-open "https://harbor.${DOMAIN}"
    
  • [ ] Login to Harbor with your access token:

    docker login "harbor.${DOMAIN}"
    
  • [ ] Set the environment variable REGISTRY_PROJECT to the name of the created project

  • [ ] Push the image ghcr.io/elastisys/curl-jq:1.0.0 to the created project

    docker pull "ghcr.io/elastisys/curl-jq:1.0.0"
    docker tag "ghcr.io/elastisys/curl-jq:1.0.0" "harbor.${DOMAIN}/${REGISTRY_PROJECT}/curl-jq:1.0.0"
    docker push "harbor.${DOMAIN}/${REGISTRY_PROJECT}/curl-jq:1.0.0"
    
  • [ ] Create an image pull secret following the application developer docs
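
    A minimal sketch (the application developer docs are authoritative); the robot account name and token are placeholders, and the secret name pull-secret matches the Pod manifest below:
    kubectl -n "${NAMESPACE}" create secret docker-registry pull-secret \
        --docker-server="harbor.${DOMAIN}" \
        --docker-username="<robot-account-name>" \
        --docker-password="<robot-account-token>"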

  • [ ] Deploy a Pod with a PersistentVolume on the workload cluster:

    Commands
    kubectl apply -n "${NAMESPACE}" -f - <<EOF
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-all
    spec:
      podSelector: {}
      policyTypes:
        - Ingress
        - Egress
      ingress:
        - {}
      egress:
        - {}
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: velero-app-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: velero-app
    spec:
      restartPolicy: Never
      imagePullSecrets:
        - name: pull-secret
      containers:
        - name: read
          image: harbor.${DOMAIN}/${REGISTRY_PROJECT}/curl-jq:1.0.0
          command: ['sh', '-c', 'while true; do tail /pod-data/file.log && sleep 1800; done']
          resources:
            requests:
              memory: 64Mi
              cpu: 250m
            limits:
              memory: 128Mi
              cpu: 500m
          volumeMounts:
            - name: shared-data
              mountPath: /pod-data
        - name: write
          image: harbor.${DOMAIN}/${REGISTRY_PROJECT}/curl-jq:1.0.0
          command: ['sh', '-c', 'while true; do echo "$(date +%F_%T) - Hello, Kubernetes!" >> /pod-data/file.log && sleep 1800; done']
          resources:
            requests:
              memory: 64Mi
              cpu: 250m
            limits:
              memory: 128Mi
              cpu: 500m
          volumeMounts:
            - name: shared-data
              mountPath: /pod-data
      securityContext:
        runAsUser: 999
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: velero-app-pvc
    EOF
    

Follow the public disaster recovery documentation to take backups:

  • [ ] Can take Harbor backup

  • [ ] Can take OpenSearch snapshot

  • [ ] Can take Velero snapshot

  • [ ] Can run Rclone sync:

    # create rclone sync jobs for all cronjobs:
    for cronjob in $(./bin/ck8s ops kubectl sc -n rclone get cronjobs -lapp.kubernetes.io/instance=rclone-sync -oname); do
      ./bin/ck8s ops kubectl sc -n rclone create job --from "${cronjob}" "${cronjob/#cronjob.batch\/}"
    done
    
    # wait for rclone sync jobs to finish
    ./bin/ck8s ops kubectl sc -n rclone get pods -lapp.kubernetes.io/instance=rclone-sync -w
    

Restore backups and snapshots

[!note] As platform administrator

Follow the public disaster recovery documentation to perform restores from the prepared backups:

# Upgrade QA steps

Apps upgrade scenario

[!note] The upgrade is done as part of the checklist.

Infrastructure provider

  • [ ] Azure
  • [ ] Elastx
  • [x] Safespring
  • [ ] UpCloud

Configuration

  • [x] Flavor - Prod

  • [x] Dex IdP - Google

  • [x] Dex Static User - Enabled and [email protected] added as an application developer

    Commands
    # configure
    yq4 -i '.grafana.user.oidc.allowedDomains += ["example.com"]' "${CK8S_CONFIG_PATH}/sc-config.yaml"
    yq4 -i 'with(.opensearch.extraRoleMappings[]; with(select(.mapping_name != "all_access"); .definition.users += ["[email protected]"]))' "${CK8S_CONFIG_PATH}/sc-config.yaml"
    yq4 -i '.user.adminUsers += ["[email protected]"]' "${CK8S_CONFIG_PATH}/wc-config.yaml"
    yq4 -i '.dex.enableStaticLogin = true' "${CK8S_CONFIG_PATH}/sc-config.yaml"
    
    # apply
    ./bin/ck8s apply sc
    ./bin/ck8s apply wc
    
  • [x] Rclone sync - Enabled and preferably configured to a different infrastructure provider.

  • [x] Set the environment variable NAMESPACE to an application developer namespace (this cannot be a subnamespace)

  • [x] Set the environment variable DOMAIN to the environment domain

Take backups and snapshots

[!note] As platform administrator

Prepare items to test disaster recovery:

  • [x] Login to Harbor and create a project and robot account:

    xdg-open "https://harbor.${DOMAIN}"
    
  • [x] Login to Harbor with your access token:

    docker login "harbor.${DOMAIN}"
    
  • [x] Set the environment variable REGISTRY_PROJECT to the name of the created project

  • [x] Push the image ghcr.io/elastisys/curl-jq:1.0.0 to the created project

    docker pull "ghcr.io/elastisys/curl-jq:1.0.0"
    docker tag "ghcr.io/elastisys/curl-jq:1.0.0" "harbor.${DOMAIN}/${REGISTRY_PROJECT}/curl-jq:1.0.0"
    docker push "harbor.${DOMAIN}/${REGISTRY_PROJECT}/curl-jq:1.0.0"
    
  • [x] Create an image pull secret following the application developer docs

  • [x] Deploy a Pod with a PersistentVolume on the workload cluster:

    Commands
    kubectl apply -n "${NAMESPACE}" -f - <<EOF
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-all
    spec:
      podSelector: {}
      policyTypes:
        - Ingress
        - Egress
      ingress:
        - {}
      egress:
        - {}
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: velero-app-pvc
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: velero-app
    spec:
      restartPolicy: Never
      imagePullSecrets:
        - name: pull-secret
      containers:
        - name: read
          image: harbor.${DOMAIN}/${REGISTRY_PROJECT}/curl-jq:1.0.0
          command: ['sh', '-c', 'while true; do tail /pod-data/file.log && sleep 1800; done']
          resources:
            requests:
              memory: 64Mi
              cpu: 250m
            limits:
              memory: 128Mi
              cpu: 500m
          volumeMounts:
            - name: shared-data
              mountPath: /pod-data
        - name: write
          image: harbor.${DOMAIN}/${REGISTRY_PROJECT}/curl-jq:1.0.0
          command: ['sh', '-c', 'while true; do echo "$(date +%F_%T) - Hello, Kubernetes!" >> /pod-data/file.log && sleep 1800; done']
          resources:
            requests:
              memory: 64Mi
              cpu: 250m
            limits:
              memory: 128Mi
              cpu: 500m
          volumeMounts:
            - name: shared-data
              mountPath: /pod-data
      securityContext:
        runAsUser: 999
      volumes:
        - name: shared-data
          persistentVolumeClaim:
            claimName: velero-app-pvc
    EOF
    

Follow the public disaster recovery documentation to take backups:

  • [x] Can take Harbor backup

  • [x] Can take OpenSearch snapshot

  • [x] Can take Velero snapshot

  • [x] Can run Rclone sync:

    # create rclone sync jobs for all cronjobs:
    for cronjob in $(./bin/ck8s ops kubectl sc -n rclone get cronjobs -lapp.kubernetes.io/instance=rclone-sync -oname); do
      ./bin/ck8s ops kubectl sc -n rclone create job --from "${cronjob}" "${cronjob/#cronjob.batch\/}"
    done
    
    # wait for rclone sync jobs to finish
    ./bin/ck8s ops kubectl sc -n rclone get pods -lapp.kubernetes.io/instance=rclone-sync -w
    

Upgrade

Automated tests

[!note] As platform administrator

  • [x] Successful ./bin/ck8s test sc|wc
  • [x] From tests/ successful make build-main
  • [x] From tests/ successful make run-end-to-end

Kubernetes access

[!note] As platform administrator

  • [x] Can login as platform administrator via Dex with IdP

[!note] As application developer [email protected]

  • [x] Can login as application developer [email protected] via Dex with static user

  • [x] Can list access

    kubectl -n "${NAMESPACE}" auth can-i --list
    
  • [x] Can delegate admin access

    $ kubectl -n "${NAMESPACE}" edit rolebinding extra-workload-admins
      # Add some subject
      subjects:
        # You can specify more than one "subject"
        - kind: User
          name: jane # "name" is case sensitive
          apiGroup: rbac.authorization.k8s.io
    
  • [x] Can delegate view access

    $ kubectl edit clusterrolebinding extra-user-view
      # Add some subject
      subjects:
        # You can specify more than one "subject"
        - kind: User
          name: jane # "name" is case sensitive
          apiGroup: rbac.authorization.k8s.io
    
  • [x] Cannot run as root by default

    kubectl apply -n "${NAMESPACE}" -f - <<EOF
    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-root-nginx
    spec:
      podSelector:
        matchLabels:
          app: root-nginx
      policyTypes:
        - Ingress
        - Egress
      ingress:
        - {}
      egress:
        - {}
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      labels:
        app: root-nginx
      name: root-nginx
    spec:
      restartPolicy: Never
      containers:
        - name: nginx
          image: nginx:stable
          resources:
            requests:
              memory: 64Mi
              cpu: 250m
            limits:
              memory: 128Mi
              cpu: 500m
    EOF
    

Hierarchical Namespaces

[!note] As application developer [email protected]

  • [x] Can create a subnamespace by following the application developer docs

    Commands
    kubectl apply -n "${NAMESPACE}" -f - <<EOF
    apiVersion: hnc.x-k8s.io/v1alpha2
    kind: SubnamespaceAnchor
    metadata:
      name: ${NAMESPACE}-qa-test
    EOF
    
    kubectl get ns "${NAMESPACE}-qa-test"
    
    kubectl get subns -n "${NAMESPACE}" "${NAMESPACE}-qa-test" -o yaml
    
  • [x] Ensure the default roles, rolebindings, and networkpolicies have propagated to the subnamespace

    Commands
    kubectl get role,rolebinding,netpol -n "${NAMESPACE}"
    kubectl get role,rolebinding,netpol -n "${NAMESPACE}-qa-test"
    

Harbor

[!note] As application developer [email protected]

Gatekeeper

[!note] As application developer [email protected]

  • [x] Can list OPA rules

    kubectl get constraints
    

[!note] Using the user demo helm chart

Set NAMESPACE to an application developer namespace. Set PUBLIC_DOCS_PATH to the path of the public docs repo.

  • [x] With invalid image repository, try to deploy, should warn due to constraint

    helm -n "${NAMESPACE}" upgrade --atomic --install demo "${PUBLIC_DOCS_PATH}/user-demo/deploy/ck8s-user-demo" \
        --set image.repository="${REGISTRY_PROJECT}/ck8s-user-demo" \
        --set image.tag="${TAG}" \
        --set ingress.hostname="demoapp.${DOMAIN}"
    
  • [x] With invalid image tag, try to deploy, should fail due to constraint

    helm -n "${NAMESPACE}" upgrade --atomic --install demo "${PUBLIC_DOCS_PATH}/user-demo/deploy/ck8s-user-demo" \
        --set image.repository="harbor.${DOMAIN}/${REGISTRY_PROJECT}/ck8s-user-demo" \
        --set image.tag=latest \
        --set ingress.hostname="demoapp.${DOMAIN}"
    
  • [x] With unset networkpolicies, try to deploy, should warn due to constraint

    helm -n "${NAMESPACE}" upgrade --atomic --install demo "${PUBLIC_DOCS_PATH}/user-demo/deploy/ck8s-user-demo" \
        --set image.repository="harbor.${DOMAIN}/${REGISTRY_PROJECT}/ck8s-user-demo" \
        --set image.tag="${TAG}" \
        --set ingress.hostname="demoapp.${DOMAIN}" \
        --set networkPolicy.enabled=false
    
  • [x] With unset resources, try to deploy, should fail due to constraint

    helm -n "${NAMESPACE}" upgrade --atomic --install demo "${PUBLIC_DOCS_PATH}/user-demo/deploy/ck8s-user-demo" \
        --set image.repository="harbor.${DOMAIN}/${REGISTRY_PROJECT}/ck8s-user-demo" \
        --set image.tag="${TAG}" \
        --set ingress.hostname="demoapp.${DOMAIN}" \
        --set resources.requests=null
    
  • [x] With valid values, try to deploy, should succeed

    helm -n "${NAMESPACE}" upgrade --atomic --install demo "${PUBLIC_DOCS_PATH}/user-demo/deploy/ck8s-user-demo" \
        --set image.repository="harbor.${DOMAIN}/${REGISTRY_PROJECT}/ck8s-user-demo" \
        --set image.tag="${TAG}" \
        --set ingress.hostname="demoapp.${DOMAIN}"
    

cert-manager and ingress-nginx

[!note] As platform administrator

  • [x] All certificates ready including user demo
  • [x] All ingresses ready including user demo
    • [x] Endpoints are reachable
    • [x] Status includes correct IP addresses

Metrics

[!note] As platform administrator

  • [x] Can login to platform administrator Grafana via Dex with IdP
  • [x] Dashboards are available and viewable
  • [x] Metrics are available from all clusters

[!note] As application developer [email protected]

Alerts

[!note] As platform administrator

  • [x] No alerts firing except Watchdog, CPUThrottlingHigh, and FalcoAlert
    • Can be seen in the Alerts section of the platform administrator Grafana

[!note] As application developer [email protected]

Logs

[!note] As platform administrator

  • [x] Can login to OpenSearch Dashboards via Dex with IdP
  • [x] Indices created (authlog, kubeaudit, kubernetes, other)
  • [x] Indices managed (authlog, kubeaudit, kubernetes, other)
  • [x] Logs available (authlog, kubeaudit, kubernetes, other)
  • [x] Snapshots configured

[!note] As application developer [email protected]

  • [x] Can login to OpenSearch Dashboards via Dex with static user
  • [x] Welcome dashboard presented first
  • [x] Logs available (kubeaudit, kubernetes)
  • [x] CISO dashboards available and working

Falco

[!note] As platform administrator

  • [x] Deploy the falcosecurity/event-generator to generate events in wc

    Commands
    # Install
    
    kubectl create namespace event-generator
    kubectl label namespace event-generator owner=operator
    
    helm repo add falcosecurity https://falcosecurity.github.io/charts
    helm repo update
    
    helm -n event-generator install event-generator falcosecurity/event-generator \
        --set securityContext.runAsNonRoot=true \
        --set securityContext.runAsGroup=65534 \
        --set securityContext.runAsUser=65534 \
        --set podSecurityContext.fsGroup=65534 \
        --set config.actions=""
    
    # Uninstall
    
    helm -n event-generator uninstall event-generator
    kubectl delete namespace event-generator
    
  • [x] Logs are available in OpenSearch Dashboards

  • [x] Logs are relevant

Network policies

  • [x] No dropped packets in NetworkPolicy Grafana dashboard

Restore backups and snapshots

[!note] As platform administrator

Follow the public disaster recovery documentation to perform restores from the prepared backups:

# Post-QA steps

  • [x] Update the "What's New" section of the Welcome dashboards.

    Add items for new features or changes that are relevant to application developers, e.g. for v0.25 "- As an application developer you can now create namespaces yourself using HNC ...".

    Remove items for releases older than two major or minor versions, e.g. for v0.25 keep the items for v0.25 and v0.24 and remove the items for all older versions.

    • Edit the Grafana dashboard
    • Edit the OpenSearch dashboard
  • [ ] Complete the code freeze step

  • [ ] Complete all post-QA steps in the internal checklist

# Release steps
