local-path-provisioner
                        Please describe how to restore PV/PVC when recreating a cluster.
I'm planning to nuke my cluster and recreate it. With GKE we can easily do this by making the PV/PVC part of the chart/template. Unfortunately this seems to be trickier with the local-path-provisioner.
So, I would expect a process like this:
- kubectl get pv,pvc -A -o yaml > storage.yaml
- tar czf backup.tar.gz /var/lib/rancher/k3s/storage/
- /usr/local/bin/k3s-uninstall.sh
- Reinstall k3s
- tar xzf backup.tar.gz -C / (tar strips the leading slash on create, so extract relative to /)
- kubectl create -f storage.yaml
Expected: the local-path-provisioner detects all existing volumes and the PV/PVC relations and recreates them.
Actual: PVs are stuck in state 'Released' and PVCs in state 'Lost'. When the ReclaimPolicy is 'Delete' instead of 'Retain', the data is automatically deleted as well (which is more or less fair, but still unexpected).
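To guard against the 'Delete' pitfall before the next teardown, the reclaim policy can be flipped to 'Retain' on the PVs first. A hedged one-liner (this patches every PV; narrow the selection if other provisioners are in play):
$ kubectl get pv -o name | xargs -I{} kubectl patch {} -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'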
Is there some kind of procedure to achieve this? I know the local-path-provisioner has simplicity as its main goal, but some kind of restore procedure would be very useful.
What worked for me to work around the 'Lost' PVC issue was to remove the claimRef from the PV definitions when saving them (an in-place variant for already-stuck PVs is sketched below).
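The reason for the stuck state: after a rebind attempt, the PV's claimRef still carries the old PVC's uid, and the recreated PVC has a new uid, so the binding never succeeds. For a PV that is already 'Released', clearing the stale fields should make it bindable again; a hedged strategic-merge patch, using the example PV name from further down:
$ kubectl patch pv pvc-e310b4c7-8a63-45bd-8176-1188fd586706 -p '{"spec":{"claimRef":{"uid":null,"resourceVersion":null}}}'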
My procedure is something like this (I have many storage classes with different provisioners; the local-path-provisioner only serves sc-1 and sc-2):
$ kubectl get pv -o json | jq '.items[] | select(.spec.storageClassName == "sc-1-name" or .spec.storageClassName == "sc-2-name") | del(.spec.claimRef)' > volumes.json
$ kubectl get pvc -A -o json | jq '.items[] | select(.spec.storageClassName == "sc-1-name" or .spec.storageClassName == "sc-2-name")' > claims.json
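On the new cluster the saved objects can then be re-applied. Since the files contain a stream of JSON objects rather than a single list, one way (hedged) is to slurp them back into a List with jq:
$ jq -s '{apiVersion: "v1", kind: "List", items: .}' volumes.json | kubectl apply -f -
$ jq -s '{apiVersion: "v1", kind: "List", items: .}' claims.json | kubectl apply -f -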
I was able to do it now with a more recent version.
steps:
- first restore the data to the path defined in the PV's hostPath
- apply the PV
- apply the PVC
- apply the STS (or scale it back up); see the sketch below
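Roughly, as commands (a sketch only; pv.yaml, pvc.yaml and sts.yaml are hypothetical file names holding the manifests shown below, and the k3s default storage path is assumed):
$ tar xzf backup.tar.gz -C /   # data back under /var/lib/rancher/k3s/storage/
$ kubectl apply -f pv.yaml
$ kubectl apply -f pvc.yaml
$ kubectl apply -f sts.yaml    # or: kubectl -n default scale sts postgresql --replicas=1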
I'll leave the ticket open; IMHO this should be documented.
These are the PV and PVC configs:
PV
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: rancher.io/local-path
  name: pvc-e310b4c7-8a63-45bd-8176-1188fd586706
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 10Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: postgresql-data-postgresql-0
    namespace: default
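    # only name/namespace are kept; uid/resourceVersion are omitted on purpose
    # so the recreated PVC can bind to this PV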
  hostPath:
    path: /var/lib/rancher/k3s/storage/pvc-e310b4c7-8a63-45bd-8176-1188fd586706_default_postgresql-data-postgresql-0
    type: DirectoryOrCreate
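    # the data from backup.tar.gz must be back under this exact path
    # before the PV is applied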
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - moris
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-path
  volumeMode: Filesystem
PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: rancher.io/local-path
  labels:
    app.kubernetes.io/name: postgresql
  name: postgresql-data-postgresql-0
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: local-path
  volumeMode: Filesystem
  volumeName: pvc-e310b4c7-8a63-45bd-8176-1188fd586706
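Once both are applied, PV and PVC should report 'Bound' again; a quick check with the names from above:
$ kubectl get pv pvc-e310b4c7-8a63-45bd-8176-1188fd586706
$ kubectl get pvc -n default postgresql-data-postgresql-0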
And an example StatefulSet:
STS
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql
  namespace: default
  labels:
    app.kubernetes.io/name: postgresql
    app.kubernetes.io/component: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: postgresql
  serviceName: "postgresql"
  template:
    metadata:
      labels:
        app.kubernetes.io/name: postgresql
        app.kubernetes.io/component: database
    spec:
      terminationGracePeriodSeconds: 600
      containers:
      - name: postgres
        image: postgres:13.4
        livenessProbe:
          exec:
            command: ["psql", "-w", "-U", "postgres", "-d", "postgres", "-c", "SELECT 1"]
          initialDelaySeconds: 60
          timeoutSeconds: 30
        readinessProbe:
          exec:
            command: ["psql", "-w", "-U", "postgres", "-d", "postgres", "-c", "SELECT 1"]
          initialDelaySeconds: 5
          timeoutSeconds: 30
        lifecycle:
          preStop:
            exec:
              command: ["gosu", "postgres", "pg_ctl", "stop", "--mode", "fast", "-W"]
        args: ["-c", "config_file=/etc/postgresql/postgresql.conf"]
        ports:
        - containerPort: 5432
          name: postgresql-port
        volumeMounts:
        - name: config
          mountPath: /etc/postgresql
        - name: postgresql-data
          mountPath: /var/lib/postgresql/data
          readOnly: false
        envFrom:
          - secretRef:
              name: postgres-credentials
        env:
        - name: PGDATA
          value: "/var/lib/postgresql/data/pgdata"
      volumes:
        - name: config
          configMap:
            name: postgresql-config
  volumeClaimTemplates:
  - metadata:
      name: postgresql-data
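      # the generated claim name is <template>-<sts>-<ordinal>, i.e.
      # postgresql-data-postgresql-0, which matches the restored PVC above,
      # so the pod reattaches to the existing volume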
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
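After applying the StatefulSet (or scaling it back up), the pod should come up on the old data; for example:
$ kubectl -n default scale statefulset postgresql --replicas=1
$ kubectl -n default get pods -l app.kubernetes.io/name=postgresql -w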