
Support for Multiple Backups in a Namespace

Open tobru opened this issue 3 years ago • 8 comments

Summary

As a user of K8up, I want to be able to specify multiple different backups per namespace, so that I can have different settings for different kinds of backups.

Context

To be able to specify different backup settings for different backup targets in the same namespace. For example, a DB backup which runs every night and a PVC backup which runs every hour.
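
Purely as an illustration of the desired end state (names, schedules and buckets are made up), this would allow something like two Schedule objects in the same namespace; with the current behaviour both of them would still pick up every annotated Pod and every RWX PVC in the namespace:

apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: db-backup
  namespace: my-namespace
spec:
  backup:
    schedule: "0 2 * * *"    # nightly DB backup
  backend:
    s3:
      bucket: db-backups
---
apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: pvc-backup
  namespace: my-namespace
spec:
  backup:
    schedule: "0 * * * *"    # hourly PVC backup
  backend:
    s3:
      bucket: pvc-backups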

Out of Scope

Further links

Acceptance criteria

Given a namespace
When multiple backups are specified with selectors
Then they select the PVCs or Pods to back up

Implementation Ideas

Implement a Pod/PVC selector to make it possible to have multiple backup objects in a single namespace. Instead of selecting all PVCs with RWX and all Pods with an annotation, specify a selector for Pods and PVCs to be backed up. Also make sure naming doesn't collide for Prometheus metrics, Restic repos and backup names. By specifying an empty selector (select all), the old behavior can be maintained.
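
A rough sketch of how that could look on the Backup spec; the podSelector and pvcSelector field names below are hypothetical and do not exist in the CRD today:

apiVersion: k8up.io/v1
kind: Backup
metadata:
  name: db-backup
  namespace: my-namespace
spec:
  # hypothetical fields: standard Kubernetes label selectors
  # for the Pods and PVCs this Backup should include
  podSelector:
    matchLabels:
      app: my-database
  pvcSelector:
    matchLabels:
      k8up.io/backup-group: db
  backend:
    s3:
      bucket: db-backups

Omitting both selectors would keep today's select-everything behaviour.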

tobru avatar Jan 21 '21 16:01 tobru

https://github.com/projectsyn/component-cluster-backup/issues/1 asks for this feature.

tobru avatar Jan 22 '21 07:01 tobru

We'd also need this feature, as we have different schedules and also backends(!) for various components in the same namespace.

cwrau avatar Mar 01 '22 13:03 cwrau

I recently faced a similar problem https://github.com/k8up-io/k8up/issues/648

The solution proposed there is to run all backups as the root user, but that is not always possible and certainly not secure.

Within the same namespace, different workloads can write data under different UIDs, so we also need the ability to run the backup processes with different UIDs.
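
If I'm not mistaken, the Backup spec already accepts a podSecurityContext, so once multiple Backups per namespace can target different workloads, each one could run under the UID of its workload, roughly like this (values are just examples):

apiVersion: k8up.io/v1
kind: Backup
metadata:
  name: app-a-backup
  namespace: my-namespace
spec:
  podSecurityContext:
    runAsUser: 1001     # UID the workload writes its data with
    runAsGroup: 1001
    fsGroup: 1001
  backend:
    s3:
      bucket: app-a-backups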

R-omk avatar Aug 17 '23 15:08 R-omk

Is this planned at some point?

cwrau avatar Oct 19 '23 07:10 cwrau

This is the only thing preventing us from migrating from Velero; is there an update on this?

cwrau avatar Dec 11 '23 10:12 cwrau

It would be really helpful to be able to annotate PVCs with something like k8up.io/backup=that-app-schedule, which excludes them from all backups except that schedule.
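
Purely illustrative, since no such behaviour exists today; the idea would be something like:

# hypothetical: only the "that-app-schedule" Schedule would pick this PVC up,
# every other backup in the namespace would skip it
kubectl annotate pvc that-app-data k8up.io/backup=that-app-schedule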

toabi avatar Feb 02 '24 07:02 toabi

It would probably be best to add a label selector field to spec.backup in the Backup CRD, so that you could select any PVC matching multiple specific labels. I also like the idea in https://github.com/k8up-io/k8up/issues/316#issuecomment-1923210945 though, since then, when looking at a PVC, you'd know which backup schedule it falls under instead of having to cross-reference the backup with the PVC labels it queries.
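
Something along these lines; the labelSelector field is hypothetical and the rest of the spec would stay as it is today:

apiVersion: k8up.io/v1
kind: Backup
metadata:
  name: remote-only-backup
  namespace: my-namespace
spec:
  # hypothetical field, using a standard Kubernetes label selector
  labelSelector:
    matchLabels:
      k8up.io/backup-group: remote
  backend:
    s3:
      bucket: my-bucket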

It looks like this issue was added to the k8up v3 milestone and to the planned section of the roadmap. However, I did some reading in the operator config docs, and they do show a backup annotation to check in the help (and in the code):

--annotation value   the annotation to be used for filtering (default: "k8up.io/backup") [$BACKUP_ANNOTATION]
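
For reference, since this appears to be an operator-level setting, changing it globally would presumably look something like this (assuming the operator runs as a Deployment named k8up in the k8up-system namespace; adjust to your install):

kubectl set env deployment/k8up -n k8up-system BACKUP_ANNOTATION=k8up.io/remote-backup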

If I'm understanding correctly, it should work if you use the BACKUP_ANNOTATION env var. However, when I tried to specify it on a per-backup basis like in the example below, it didn't work :( (it still backed up everything in the namespace instead of ignoring the PVCs with the custom annotation set to false and backing up those with the custom annotation set to true).

tested configmap with env:

apiVersion: v1
kind: ConfigMap
metadata:
  name: remote-backup-env
data:
  BACKUP_ANNOTATION: "k8up.io/remote-backup"

tested backup example:

apiVersion: k8up.io/v1
kind: Backup
metadata:
  name: remote-only-backup
  namespace: my-namespace
spec:
  failedJobsHistoryLimit: 2
  promURL: push-gateway.prometheus.svc:9091
  successfulJobsHistoryLimit: 2
  backend:
    envFrom:
      - configMapRef:
          name: remote-backup-env
    repoPasswordSecretRef:
      key: resticRepoPassword
      name: s3-credentials
    s3:
      accessKeyIDSecretRef:
        key: accessKeyID
        name: s3-credentials
        optional: false
      bucket: my-bucket
      endpoint: mys3.endpoint.anonymized
      secretAccessKeySecretRef:
        key: secretAccessKey
        name: s3-credentials
        optional: false

When I exec into the backup pod and run env, I can see it got the correct env var:

BACKUP_ANNOTATION=k8up.io/remote-backup

The way I annotated my PVCs before applying the backup is like so:

kubectl annotate pvc my-not-ignored-pvc k8up.io/remote-backup='false'
kubectl annotate pvc my-pvc-i-want-to-backup-remotely k8up.io/remote-backup='true'

# to be sure, I also annotated my associated pods
kubectl annotate pod my-not-ignored-pod-8dsu1 k8up.io/remote-backup='false'
kubectl annotate pod my-pod-i-want-to-backup-remotely-fc6ve k8up.io/remote-backup='true'

So based on all of that, I feel like this is partially implemented if we're okay with just using annotations. However, I'm unsure why I can't use a different backup annotation for different Backups/Schedules.

If I'm doing something wrong, please let me know; otherwise, perhaps someone needs to look at why BACKUP_ANNOTATION can't be set for specific Backups.

jessebot avatar Apr 30 '24 09:04 jessebot