
backupstoragelocation can't be completely configured via Helm

Open aceeric opened this issue 2 years ago • 7 comments

I'm installing velero using the helm chart https://github.com/vmware-tanzu/helm-charts/releases/tag/velero-2.23.3 with Minio as the backing S3 service.

The backupstoragelocation was never progressing to the Available phase and backups were failing. I looked at this documentation: https://velero.io/docs/v1.6/troubleshooting/#is-velero-using-the-correct-cloud-credentials, under the Troubleshooting BackupStorageLocation credentials header, regarding the .spec.credential.key and .spec.credential.name fields. So I hand-patched those into the backupstoragelocation in the cluster with values cloud and cloud-credentials respectively and suddenly everything worked. (I was already patching in the caCert field.)
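
For reference, the hand-patch amounts to something like this (assuming the BSL is named default, as the chart names it; the secret name and key are the ones above):

# illustrative merge patch against the BSL; adjust names for your install
kubectl -n velero patch backupstoragelocation default --type merge \
  -p '{"spec":{"credential":{"name":"cloud-credentials","key":"cloud"}}}'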

Problem is, the helm chart does not appear to provide a way to do that. Neither the backupstoragelocation.yaml in the templates directory nor the values.yaml appears to have a way to specify this, so it looks like I need to patch it after the chart deploys. Do you think I'm missing something?

Thanks.

aceeric avatar Aug 16 '21 13:08 aceeric

In general, when I deploy Velero, I prepare the credentials-velero file locally, and then at helm install time I specify the credential as:

helm install velero \
   ...
   --set-file credentials.secretContents.cloud=credentials-velero \
   ...
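
Here credentials-velero is just a local file in the usual AWS-style credentials format, for example (placeholder values):

[default]
aws_access_key_id = <access_key>
aws_secret_access_key = <secret_key>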

Or, you could point the chart at a pre-existing secret; see https://github.com/vmware-tanzu/helm-charts/blob/velero-2.23.3/charts/velero/values.yaml#L273.

Or, you could specify it in https://github.com/vmware-tanzu/helm-charts/blob/velero-2.23.3/charts/velero/values.yaml#L282-L287.
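
For example, a values snippet along these lines (the secret name is just illustrative; the keys are the ones from the chart's values.yaml):

credentials:
  useSecret: true
  # reference a secret you created yourself in the velero namespace
  existingSecret: bsl-credentials
  # or have the chart create the secret from inline contents
  # secretContents:
  #   cloud: |
  #     [default]
  #     aws_access_key_id=<access_key>
  #     aws_secret_access_key=<secret_key>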

jenting avatar Sep 09 '21 01:09 jenting

I think what he's saying is that entries here https://github.com/vmware-tanzu/helm-charts/blob/velero-2.23.3/charts/velero/values.yaml#L273 don't get applied to https://github.com/vmware-tanzu/helm-charts/blob/velero-2.23.6/charts/velero/templates/backupstoragelocation.yaml. I have the credentials.existingSecret value set here https://github.com/jgilfoil/k8s-gitops/blob/main/cluster/apps/velero/helm-release.yaml#L46-L47; however, the resulting object in my cluster doesn't get the credential applied:

vagrant@control:/code/k8s-gitops$ kubectl get backupstoragelocations -n velero default -o yaml
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  annotations:
    helm.sh/hook: post-install,post-upgrade,post-rollback
    helm.sh/hook-delete-policy: before-hook-creation
  creationTimestamp: "2021-09-12T01:33:21Z"
  generation: 2
  labels:
    app.kubernetes.io/instance: velero
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: velero
    helm.sh/chart: velero-2.23.6
  name: default
  namespace: velero
  resourceVersion: "26461410"
  uid: 2ffbd94e-3649-4192-b1f6-26593c1ba426
spec:
  config:
    region: us-east-1
    s3ForcePathStyle: "true"
    s3Url: http://<minio_address>:9000
  default: true
  objectStorage:
    bucket: velero
  provider: aws
status:
  lastValidationTime: "2021-09-12T18:05:08Z"
  phase: Unavailable

The credential secret exists and is mounted to the pods, however:

vagrant@control:/code/k8s-gitops$ kubectl -n velero describe pod -l name=velero 
Name:         velero-67c547d658-bvtv7
Namespace:    velero
Priority:     0
... < snipped for brevity>
    Mounts:
      /credentials from cloud-credentials (rw)
      /plugins from plugins (rw)
      /scratch from scratch (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from velero-server-token-ssrf2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  cloud-credentials:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  velero-s3-creds
    Optional:    false
  plugins:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  scratch:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  velero-server-token-ssrf2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  velero-server-token-ssrf2
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:          <none>

vagrant@control:/code/k8s-gitops$ kubectl describe secret -n velero velero-s3-creds
Name:         velero-s3-creds
Namespace:    velero
Labels:       kustomize.toolkit.fluxcd.io/name=apps
              kustomize.toolkit.fluxcd.io/namespace=flux-system
Annotations:  kustomize.toolkit.fluxcd.io/checksum: f5f71438f60014cebb79536703be7606547cc615

Type:  Opaque

Data
====
cloud:  86 bytes

jgilfoil avatar Sep 12 '21 19:09 jgilfoil

Btw, for what it's worth, the issue I was having that led me here actually had nothing to do with the credentials not being attached to the backupstoragelocation. I got my backups working without that being set, which leads me to believe the troubleshooting steps at https://velero.io/docs/v1.6/troubleshooting/#troubleshooting-backupstoragelocation-credentials are incorrect, since everything works fine without those creds being set there.

jgilfoil avatar Sep 18 '21 17:09 jgilfoil

Seeing a similar issue as above. If we set credentials.useSecret to false, the rendered BSL still includes the credential / credential.key fields, and because the secret does not exist, the BSL never becomes Available without manual intervention to remove the credential block.

error="unable to get credentials: unable to get key for secret: Secret \"\" not found" error.file="/go/src/github.com/vmware-tanzu/velero/internal/credentials/file_store.go:69" error.function="github.com/vmware-tanzu/velero/internal/credentials.(*namespacedFileStore).Path" logSource="pkg/controller/backup_sync_controller.go:175"

cqc5511 avatar Oct 05 '21 15:10 cqc5511

I couldn't get it working with existingSecret: bsl-credentials when creating the secret with an aws key per the docs:

kubectl create secret generic -n velero bsl-credentials --from-file=aws=/tmp/bsl-credentials.txt

It worked when I changed the aws key to cloud, though:

kubectl create secret generic -n velero bsl-credentials --from-file=cloud=/tmp/bsl-credentials.txt

demisx avatar Dec 31 '21 02:12 demisx

🤔 Probably an error in the plugin's README, then, since the helm chart's values.yaml indicates that the key should be cloud.

jenting avatar Jan 03 '22 12:01 jenting

Bumping this one - it still seems to be an issue!

Also take a look at #6601.

Rohmilchkaese avatar Aug 08 '23 10:08 Rohmilchkaese