WARNING! Unable to read storage migration status.
I have deployed Vault on Kubernetes in HA mode and want to use Consul as the storage backend. The pods start but never become ready:
vault-helm-0 0/1 Running 0 162m
vault-helm-1 0/1 Running 0 162m
vault-helm-2 0/1 Running 0 162m
Logs
WARNING! Unable to read storage migration status.
2020-04-23T18:27:15.699Z [INFO] proxy environment: http_proxy= https_proxy= no_proxy=
2020-04-23T18:27:15.701Z [WARN] storage migration check error: error="Get http://127.0.0.1:8500/v1/kv/vault/core/migration: dial tcp 127.0.0.1:8500: connect: connection refused"
Describing the Pod
Name: vault-helm-0
Namespace: spr-xxx
Priority: 0
PriorityClassName: <none>
Node: qa4-apps-k8s-node-202003241110-10-1a/10.xx.xxx.xxx
Start Time: Thu, 23 Apr 2020 18:27:13 +0000
Labels: app.kubernetes.io/instance=vault-helm
app.kubernetes.io/name=vault
component=server
controller-revision-hash=vault-helm-764cc498f5
helm.sh/chart=vault-0.5.0
statefulset.kubernetes.io/pod-name=vault-helm-0
Annotations: cni.projectcalico.org/podIP: 192.168.43.48/32
kubernetes.io/limit-ranger: LimitRanger plugin set: cpu, memory request for container vault; cpu, memory limit for container vault
Status: Running
IP: 192.168.xx.xx
Controlled By: StatefulSet/vault-helm
Containers:
vault:
Container ID: docker://a0e8c5b0ac6c181ea0b4a8871edf4a41967780520e3ff2be1c3d7b183518fe60
Image: vault:1.3.2
Image ID: docker-pullable://vault@sha256:cf9d54f9a5ead66076066e208dbdca2094531036d4b053c596341cefb17ebf95
Ports: 8200/TCP, 8201/TCP, 8202/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP
Command:
/bin/sh
-ec
Args:
sed -E "s/HOST_IP/${HOST_IP?}/g" /vault/config/extraconfig-from-values.hcl > /tmp/storageconfig.hcl;
sed -Ei "s/POD_IP/${POD_IP?}/g" /tmp/storageconfig.hcl;
/usr/local/bin/docker-entrypoint.sh vault server -config=/tmp/storageconfig.hcl
State: Running
Started: Thu, 23 Apr 2020 18:27:15 +0000
Ready: False
Restart Count: 0
Limits:
cpu: 1
memory: 1Gi
Requests:
cpu: 500m
memory: 256Mi
Readiness: exec [/bin/sh -ec vault status -tls-skip-verify] delay=5s timeout=5s period=3s #success=1 #failure=2
Environment:
HOST_IP: (v1:status.hostIP)
POD_IP: (v1:status.podIP)
VAULT_K8S_POD_NAME: vault-helm-0 (v1:metadata.name)
VAULT_K8S_NAMESPACE: spr-xxx (v1:metadata.namespace)
VAULT_ADDR: https://127.0.0.1:8200
VAULT_API_ADDR: https://$(POD_IP):8200
SKIP_CHOWN: true
SKIP_SETCAP: true
HOSTNAME: vault-helm-0 (v1:metadata.name)
VAULT_CLUSTER_ADDR: https://$(HOSTNAME).vault-helm-internal:8201
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from vault-helm-token-ptt4p (ro)
/vault/config from config (rw)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: vault-helm-config
Optional: false
vault-helm-token-ptt4p:
Type: Secret (a volume populated by a Secret)
SecretName: vault-helm-token-ptt4p
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 89s default-scheduler Successfully assigned spr-ops/vault-helm-0 to qa4-apps-k8s-node-202003241110-10-1a
Normal Pulled 88s kubelet, k8s-node-202003241110-10-1a Container image "vault:1.3.2" already present on machine
Normal Created 87s kubelet, k8s-node-202003241110-10-1a Created container
Normal Started 87s kubelet, k8s-node-202003241110-10-1a Started container
Warning Unhealthy 18s (x22 over 81s) kubelet, k8s-node-202003241110-10-1a Readiness probe failed: Error checking seal status: Get https://127.0.0.1:8200/v1/sys/seal-status: dial tcp 127.0.0.1:8200: connect: connection refused
Consul is running standalone (not in K8s).
values.yaml config added:

ha:
  enabled: true
  replicas: 3
  config: |
    ui = true

    listener "tcp" {
      tls_disable = 1
      address = "10.xx.xxx.xx7:8200"
      cluster_address = "https://10.xx.xxx.xx7:8201"
    }

    storage "consul" {
      path = "vault/"
      address = "127.0.0.1:8500"
    }
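For context: inside the Vault pod, 127.0.0.1 is the pod itself, so address = "127.0.0.1:8500" can never reach the standalone Consul host (which matches the "connection refused" in the logs), and the listener is bound to a fixed host IP, so the readiness probe's check against 127.0.0.1:8200 is also refused. A minimal corrected sketch, assuming the chart's standard server.ha.* layout and a placeholder Consul server address of 10.0.0.50 (substitute the real standalone Consul address); the https VAULT_ADDR in the pod also suggests the chart's tlsDisable setting should be kept consistent with tls_disable = 1 here:

server:
  ha:
    enabled: true
    replicas: 3
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        # Bind to all interfaces so both the readiness probe on 127.0.0.1:8200
        # and the other Vault pods can reach this listener.
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "consul" {
        path = "vault/"
        # Placeholder: point at the standalone Consul server (or another
        # reachable Consul agent), not at 127.0.0.1 inside the Vault pod.
        address = "10.0.0.50:8500"
      }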
Another question: shouldn't Vault have a Consul client (agent) running alongside it? Consul is not running in the pod, and I still get an error even after setting the VAULT_ADDR environment variable:
kubectl exec vault-helm-0 -it sh
/ $ export VAULT_ADDR=http://127.0.0.1:8200
/ $ vault -v
Vault v1.3.2
/ $ vault operator init -n 1 -t 1
Error initializing: Put http://127.0.0.1:8200/v1/sys/init: dial tcp 127.0.0.1:8200: connect: connection refused
/ $ ps -ef | grep consul
28322 vault 0:00 grep consul
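(For reference, the vault-helm chart itself does not deploy a Consul agent; Vault is normally pointed either directly at a Consul server or at a Consul client agent running on each node. The container command shown above rewrites the HOST_IP token in the config, which is what makes the node-local-agent pattern work. A sketch of that variant, assuming a Consul client agent joined to the standalone server is listening on every node's IP on port 8500:)

server:
  ha:
    enabled: true
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "consul" {
        path = "vault/"
        # HOST_IP is rewritten to the node's IP by the chart's sed command,
        # so this targets a Consul client agent on the node,
        # not a process inside the Vault pod.
        address = "HOST_IP:8500"
      }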
Hey, did you solve it?
Has anyone been able to find the solution? I am facing a similar issue.
Following up on this, facing a very similar issue.
Just an FYI for others... I found that if you describe the deployment or sh into the container, you'll see that Vault cannot communicate with the backend. I had to set "ha.raft.enabled=true" and not use Consul to get past this error.
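For reference, that Raft route uses Vault's integrated storage instead of Consul. A minimal values.yaml sketch, assuming a chart and Vault version with integrated storage support (Raft became GA in Vault 1.4) and a persistent volume backing /vault/data:

server:
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      config: |
        ui = true

        listener "tcp" {
          tls_disable = 1
          address = "[::]:8200"
          cluster_address = "[::]:8201"
        }

        # Integrated (Raft) storage keeps data on the pod's own volume,
        # so no external Consul backend is needed.
        storage "raft" {
          path = "/vault/data"
        }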
@aram535 I had to do the same. The Helm chart mentions that Consul will be auto-deployed, but in my case, when I did a dry-run, I didn't see any Consul deployments. :/
Anyone able to solve this with Consul as the backend?
I had the same issue. The solution here helped me. To be exact, I used the Consul Service name as the storage address in Vault's Helm chart values, then restarted Vault.
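If Consul is reachable through a Kubernetes Service (for example, deployed with the Consul Helm chart, or exposed to the cluster via an ExternalName Service pointing at the standalone host), the storage stanza can use that Service's DNS name instead of an IP. A sketch assuming a hypothetical Service named consul-server in a consul namespace:

server:
  ha:
    enabled: true
    config: |
      ui = true

      listener "tcp" {
        tls_disable = 1
        address = "[::]:8200"
        cluster_address = "[::]:8201"
      }

      storage "consul" {
        path = "vault/"
        # "consul-server.consul.svc.cluster.local" is a hypothetical Service
        # name; substitute whatever Service fronts your Consul agents/servers.
        address = "consul-server.consul.svc.cluster.local:8500"
      }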
I still can't solve this in Vault.