`vault operator raft snapshot save` and `restore` fail to handle redirection to the active node
Scenario: A 3-node Vault cluster using Raft storage, accessed via a load-balanced URL which can contact any one of the unsealed nodes.
Attempt to use `vault operator raft snapshot save`:
If it lands on a standby node, a rather opaque error is produced:
Error taking the snapshot: incomplete snapshot, unable to read SHA256SUMS.sealed file
Attempt to use `vault operator raft snapshot restore`:
If it lands on a standby node, a rather opaque error is produced:
Error installing the snapshot: redirect failed: Post "http://172.18.0.11:8200/v1/sys/storage/raft/snapshot": read snapshot.tar.gz: file already closed
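To make the failure mode concrete, this is roughly how it reproduces; the load-balanced URL below is just a placeholder:

# Reproduction sketch; https://vault.example.com:8200 stands in for the load-balanced URL.
export VAULT_ADDR=https://vault.example.com:8200
vault operator raft snapshot save backup.snap      # fails when the LB picks a standby node
vault operator raft snapshot restore backup.snap   # fails the same way on restore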
Hi there, @maxb! Thanks for this issue report. Our engineering teams are aware of this issue, and we have an item in the backlog to address it. (For my own internal tracking, it's VAULT-4568.) It hasn't been prioritized yet, however, so all I can currently say is to check out future release notes. :)
Same behavior here. Are there any workarounds other than executing the snapshot operations directly on the leader node?
I use a small backup script on every node, skipping snapshots on follower nodes:
...
# Take a snapshot only if this node is currently the raft leader.
if [ "$(vault operator raft list-peers --format=json | jq --raw-output '.data.config.servers[] | select(.leader==true) | .node_id')" = "$(hostname -a)" ]; then
    echo "make raft snapshot $raft_backup/$time.snapshot ..."
    /usr/local/bin/vault operator raft snapshot save "$raft_backup/$time.snapshot"
else
    echo "not leader, skipping raft snapshot."
fi
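A cron entry for this could look like the following (the script path and schedule are placeholders, not part of the original setup):

# Hypothetical crontab entry, installed on every node; only the node that is
# currently the leader actually writes a snapshot.
30 2 * * * /usr/local/bin/vault-raft-backup.sh >> /var/log/vault-raft-backup.log 2>&1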
Traced the issue to #14269: the result is never updated here with the response of the redirected request.
Although the linked PR #17269 has rightly identified a logic bug which should be fixed, it doesn't wholly fix this issue.
Many people may be running Vault behind a load balancer, without direct access to individual backend nodes. Just making the `vault` CLI client process the redirect properly won't help at all if the client doesn't have network access to the redirected URL!
I'm also having the same issue while running Vault within AKS and running the raft snapshot save command on the leader raft pod. Any luck on a solution here?
We had a similar issue as well. I find it really strange that there is no proper solution for it from HashiCorp (proper redirection?), given that Raft in HA is the advised setup.
We run an HA cluster of 5 VMs in total with Raft, using a MIG in GCP. We had the same issue: we couldn't reliably create snapshots, because the request only succeeded if it happened to land on the leader. The load balancer does not let you route to a specific VM, which is logical enough, since that is the whole point of a load balancer.
Our fix was to create a separate backend service, with health checks against /v1/sys/leader that verify is_self is true. This gives you a backend that only ever sees a single VM as healthy: the leader. That backend is used only for the snapshot API calls. Since the load balancer only routes to healthy VMs, it always routes correctly. Problem solved.
This tactic can also be used in other cloud environments, so perhaps this helps some people.
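For anyone replicating the idea, this is roughly the condition the health probe needs to express; the curl/jq form below is only a sketch of the logic (the actual GCP health check is configured on the backend service), and the address is a placeholder:

#!/bin/sh
# Leader health check sketch: exit 0 only on the node that is currently the
# active (leader) node, so a dedicated LB backend marks only that node healthy.
# Assumes curl and jq are available; the fallback address is a placeholder.
if curl -sf "${VAULT_ADDR:-http://127.0.0.1:8200}/v1/sys/leader" | jq -e '.is_self == true' > /dev/null; then
    exit 0   # leader -> healthy for the snapshot backend
else
    exit 1   # standby (or unreachable) -> unhealthy
fi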
We have consistently run into the same issue with our Vault HA cluster on Kubernetes. Each time a new leader is elected, we have to update the VAULT_ADDR in our cronjob to point at the new leader. Essentially, we have a cronjob that regularly backs up the Vault cluster and syncs the snapshot to a GCS bucket.
Is there a way to determine the leader dynamically at runtime and direct requests solely to the current leader of the cluster? Below is the cronjob for your reference (a possible leader-discovery sketch follows it), and we welcome any further suggestions. Your assistance is greatly appreciated.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vault-snapshot-cronjob
  namespace: vault-secrets-server
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 900
      template:
        spec:
          serviceAccountName: vault-snapshotter
          volumes:
            - name: gcs-credentials
              secret:
                secretName: gcs-credentials
            - name: backup-dir
              emptyDir: {}
          containers:
            - name: backup
              image: vault:1.12.1
              imagePullPolicy: IfNotPresent
              env:
                - name: VAULT_ADDR
                  value: "http://vault-server-1.vault-server-internal:8200"
              command: ["/bin/sh", "-c"]
              args:
                - |
                  SA_TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token);
                  export VAULT_TOKEN=$(vault write -field=token auth/kubernetes/login jwt=$SA_TOKEN role=vault-backup);
                  vault operator raft snapshot save /data/vault-raft.snap;
                  sleep 120;
              volumeMounts:
                - name: backup-dir
                  mountPath: /data
            - name: snapshotupload
              image: google/cloud-sdk:latest
              imagePullPolicy: IfNotPresent
              command: ["/bin/sh", "-c"]
              args:
                - |
                  until [ -f /data/vault-raft.snap ]; do sleep 120; done;
                  gcloud auth activate-service-account --key-file=/data/credentials/service-account.json;
                  gsutil cp /data/vault-raft.snap gs://$bucket_name/vault_raft_$(date +"%Y%m%d_%H%M%S").snap;
              volumeMounts:
                - name: gcs-credentials
                  mountPath: /data/credentials
                  readOnly: true
                - name: backup-dir
                  mountPath: /data
          restartPolicy: OnFailure
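One way to avoid hard-coding vault-server-1 in VAULT_ADDR is to resolve the current leader at runtime from the unauthenticated sys/leader endpoint just before taking the snapshot. A minimal sketch of that step (assuming curl and jq are available in the backup image, and that the leader_address Vault advertises is reachable from the pod):

# Sketch: discover the current leader and point VAULT_ADDR at it before the snapshot.
# Any unsealed node (or a plain service address) can answer /v1/sys/leader.
LEADER_ADDR=$(curl -s "$VAULT_ADDR/v1/sys/leader" | jq -r '.leader_address')
if [ -n "$LEADER_ADDR" ] && [ "$LEADER_ADDR" != "null" ]; then
    export VAULT_ADDR="$LEADER_ADDR"
fi
vault operator raft snapshot save /data/vault-raft.snap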
I'm experiencing the same issue after moving to Integrated Storage (Raft), even after electing a new leader and even when performing the snapshot using the root token.
/ # vault operator raft list-peers
Node Address State Voter
---- ------- ----- -----
vault-0 vault-0.vault-internal:8201 leader true
vault-1 vault-1.vault-internal:8201 follower true
vault-2 vault-2.vault-internal:8201 follower true
/ # export 'VAULT_ADDR=https://vault-0.vault-internal:8200'
/ # vault operator raft snapshot save /dumps/vault-20240711-062200.snap
Error taking the snapshot: incomplete snapshot, unable to read SHA256SUMS.sealed file
/ # vault operator raft list-peers
Node Address State Voter
---- ------- ----- -----
vault-0 vault-0.vault-internal:8201 follower true
vault-1 vault-1.vault-internal:8201 leader true
vault-2 vault-2.vault-internal:8201 follower true
/ # export 'VAULT_ADDR=https://vault-1.vault-internal:8200'
/ # vault operator raft snapshot save /dumps/vault-20240711-062200.snap
Error taking the snapshot: incomplete snapshot, unable to read SHA256SUMS.sealed file
/ # vault operator raft snapshot inspect /dumps/vault-20240711-062200.snap