
Volumes are formatted when they shouldn't be

Open davidkarlsen opened this issue 2 years ago • 4 comments

What steps did you take and what happened: We had an issue where we replaced the lvmvolume CRDs. Previously we checked the CRDs in alongside the Helm chart; after the change, we set Flux to

upgrade:
    crds: CreateReplace

in order to have the CRDs managed together with the chart.
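
For reference, that crds policy sits under spec.install and spec.upgrade of the Flux HelmRelease; a minimal sketch (release, chart, and repository names are placeholders, adjust to your setup):

kubectl apply -f - <<'EOF'
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: lvm-localpv        # placeholder release name
  namespace: openebs
spec:
  interval: 10m
  chart:
    spec:
      chart: lvm-localpv   # placeholder chart name
      sourceRef:
        kind: HelmRepository
        name: openebs      # placeholder repository name
  install:
    crds: CreateReplace
  upgrade:
    crds: CreateReplace    # replace CRDs on upgrade instead of skipping them
EOF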

This, however, set deletion timestamps on the lvmvolume CRs, so to avoid the volumes being deleted on reboots, I did:

kubectl -n openebs get lvmvolumes -o yaml > /tmp/orig.yaml
grep -v '\- lvm.openebs.io/finalizer' /tmp/orig.yaml | sed 's/finalizers:/finalizers: []/' > /tmp/todelete.yaml
grep -v deletion /tmp/orig.yaml > /tmp/toapply.yaml

Then I shut down all the openebs controllers, applied /tmp/todelete.yaml (so the resources could finish deleting), applied /tmp/toapply.yaml to recreate them, and started the openebs controllers again.
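
The finalizer strip on the to-delete copy can also be done in place with a merge patch instead of editing YAML by hand (a sketch; VOLNAME is a placeholder, and the CRs would still need to be re-applied from the cleaned copy afterwards):

# Clearing finalizers in place lets the pending deletion complete;
# the object must then be re-created from /tmp/toapply.yaml.
kubectl -n openebs patch lvmvolume VOLNAME \
  --type=merge -p '{"metadata":{"finalizers":null}}'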

Everything then seemed fine, but what we see now is that if we reboot a node, the volumes come back empty (!). There is nothing in the logs or events about the volumes being formatted; they are simply empty of data.

What can be the cause of this and how can we get out of this serious situation?

What did you expect to happen: Data being preserved and the volumes mounted as normal.

The output of the following commands will help us better understand what's going on: (Pasting long output into a GitHub gist or other Pastebin is fine.)

  • kubectl logs -f openebs-lvm-controller-0 -n kube-system -c openebs-lvm-plugin: https://gist.github.com/davidkarlsen/7adf3b247002f77cfd821d13f688b838
  • kubectl logs -f openebs-lvm-node-[xxxx] -n kube-system -c openebs-lvm-plugin: https://gist.github.com/davidkarlsen/e99f2a9a2c7d2b407d73ce539cc67aaf
  • kubectl get pods -n kube-system: no pods
  • kubectl get lvmvol -A -o yaml: https://gist.github.com/davidkarlsen/6c62b019f5f5624598c618c90bd92805

Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]

Environment:

  • LVM Driver version:
  • Kubernetes version (use kubectl version):
  • Kubernetes installer & version:
  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release):

davidkarlsen · May 30 '22 08:05

The core of the problem is that openebs sees the volume as not formatted:

1 mount_linux.go:366] Disk "/dev/datavg/XXX" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/datavg/XXX]
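
One way to double-check what the node actually sees on that device before the plugin decides to format (a sketch; the device path is taken from the log line above):

# blkid prints the filesystem signature it probes and exits non-zero
# when it finds none; lsblk -f shows the detected FSTYPE per device.
blkid /dev/datavg/XXX
lsblk -f /dev/datavg/XXX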

davidkarlsen · May 30 '22 09:05

Did you manage to fix it?

rafaribe · Feb 20 '23 19:02

The core of the problem is that openebs sees the volume as not formatted:

1 mount_linux.go:366] Disk "/dev/datavg/XXX" appears to be unformatted, attempting to format as type: "ext4" with options: [-F -m0 /dev/datavg/XXX]

@davidkarlsen It's been a while, but did you ever resolve this, or do you still have the issue in some form? The message about the disk appearing unformatted seems to come from the Kubernetes mount-utils code.

dsharma-dc · Jun 11 '24 09:06

@dsharma-dc I'm no longer using openebs myself, but I put in a fix in k8s core to force-format xfs and we were OK.
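
For anyone wanting xfs from the start rather than patching the formatter, lvm-localpv also lets the StorageClass choose the filesystem (a sketch; the class and volume-group names are placeholders, and this is separate from the k8s-core fix mentioned above):

kubectl apply -f - <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv-xfs  # placeholder class name
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "datavg"       # placeholder volume-group name
  fsType: "xfs"            # format new volumes as xfs
EOF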

davidkarlsen · Jun 13 '24 13:06