
Potential Data Loss from lvm_by_vg.sh use of vgreduce --removemissing

Open phedders opened this issue 12 years ago • 11 comments

I'd like to understand why there are calls in resource-agents/rgmanager/src/resources/lvm_by_vg.sh that use vgreduce --removemissing (currently at line 232):

    if ! vgreduce --removemissing --force $OCF_RESKEY_vg_name; then
        ocf_log err "Failed to make $OCF_RESKEY_vg_name consistent"
        return $OCF_ERR_GENERIC
    fi
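To illustrate what that call can do when a PV is temporarily invisible, here is a rough reproduction on scratch loop devices (the paths, names and sizes are made up for illustration; this is not code from the agent):

    # Hypothetical reproduction: build a two-PV VG, hide one PV, then run the
    # same vgreduce call the agent uses.
    truncate -s 100M /tmp/pv1.img /tmp/pv2.img
    PV1=$(losetup -f --show /tmp/pv1.img)
    PV2=$(losetup -f --show /tmp/pv2.img)
    pvcreate "$PV1" "$PV2"
    vgcreate demo_vg "$PV1" "$PV2"
    lvcreate -n lv_demo -L 50M demo_vg "$PV2"   # LV placed entirely on the second PV
    vgchange -an demo_vg
    losetup -d "$PV2"                           # simulate the PV not being visible yet
    vgs demo_vg                                 # VG reported as partial, lv_demo still in the metadata
    vgreduce --removemissing --force demo_vg    # the "cleanup": drops the missing PV *and* lv_demo
    lvs demo_vg                                 # lv_demo is gone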

This has caused problems for us with a major client in this scenario:

  1. Two clustered servers.
  2. New LUNs presented to both servers.
  3. New LUNs scanned (echo "- - -" > /sys/class/scsi_host/hostX/scan) on only the ONE server with the resources active.
  4. Failover initiated because of an incomplete change to the cluster config (although the cause here is irrelevant).
  5. The second server could not yet 'see' the new LUNs, and the vgreduce --removemissing calls caused the VG to be "cleaned up", i.e. imported without the unseen PVs and the LVs contained therein. From that point on, neither server could see the new LVs, and the database was unable to start because the VG metadata had been 'broken'.

We were able to recover the situation using vgcfgrestore and archived metadata.

However, this is very unexpected behaviour! I would expect the resource start-up to FAIL with an ERROR notice if PVs are missing, so that we could fix the problem (simply rescan the SCSI hosts), not to have the volume group metadata hosed to the point where it initially looked like either major on-disc corruption or actual data loss.

So right now this looks like a bug or a design fault, and I cannot see a really good reason for the code in question. Please help me understand why it's there, and consider an alternative approach to making sure the VG can be imported without unexpected side effects.
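For instance, a check along these lines (a rough, untested sketch reusing the variables from the snippet quoted above; the position of the 'partial' bit in vg_attr and the bash substring are my own assumptions) would fail the start loudly instead of rewriting the metadata:

    # Hypothetical alternative: refuse to start if the VG has missing PVs,
    # rather than "repairing" it with vgreduce --removemissing --force.
    vg_attr=$(vgs --noheadings -o vg_attr $OCF_RESKEY_vg_name 2>/dev/null | tr -d ' ')
    if [ "${vg_attr:3:1}" = "p" ]; then   # 4th attribute character is 'p' when PVs are missing
        ocf_log err "$OCF_RESKEY_vg_name has missing PVs; refusing to start until they are visible"
        return $OCF_ERR_GENERIC
    fi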

Many thanks!

phedders avatar Apr 22 '13 09:04 phedders

@davidvossel - I think Jon Brassow wrote that, but I don't recall the specifics of the implementation.

lhh avatar Apr 22 '13 12:04 lhh

Does "vgreduce --removemissing" not seem quite a brutal way to work with a volume group? I would normally only ever expect to see it used as an emergency, get-me-out-of-trouble measure when recovering from a broken disk. I would never expect to see it automated as part of routine cluster management! Am I missing something?

phedders avatar Apr 22 '13 20:04 phedders

Yeah, I completely agree with you. This is not something I'd expect the agent to be doing behind the scenes on me at all.

davidvossel avatar Apr 22 '13 20:04 davidvossel

When I first saw the cleanup messages in my cluster logs I was "intrigued", especially since the operation took 7 minutes, during which my lvm/archive directory received multiple writes... I then found a variation (RHEL5) of the code above, and my first thought was "how EVIL!". I was clinging to the hope that it wasn't doing what I thought it was, even though I had evidence that it had done exactly that, and that there had to be a really good explanation and reason for why it is there.

Thanks. So what are the steps to getting this reviewed and replaced with code that would be less disruptive?

phedders avatar Apr 22 '13 20:04 phedders

Well, I'm not sure there is a less disruptive way to handle attempting to "clean up" that sort of failure. We need to have that discussion with the person who introduced the logic (Jon Brassow) to make sure, though. Maybe you have some suggestions?

Perhaps the vgchange --partial option could be of use here.
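Something like this, for example (just a sketch, I haven't tried it in the agent; as far as I understand it, missing areas are mapped to an error target rather than removed from the metadata):

    # Activate whatever can be activated without touching the VG metadata.
    vgchange -ay --partial $OCF_RESKEY_vg_name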

davidvossel avatar Apr 22 '13 20:04 davidvossel

I'm not sure I understand enough of the purpose of this code to give meaningful suggestions yet...

phedders avatar Apr 22 '13 20:04 phedders

Does anyone know if Jon Brassow is still involved and likely to be reading these issue logs? If not, I'll try to give him a prod.

Thanks

phedders avatar Apr 26 '13 10:04 phedders

I'll point him to this discussion.

davidvossel avatar Apr 26 '13 14:04 davidvossel

The following commit removes the 'vgreduce --removemissing --force' command in 'vg_start_single' and replaces it with an LV-by-LV approach to activating the logical volumes. RAID LVs are handled differently from 'mirror' LVs, and non-redundant LVs that have failed devices cause the service not to start. I think this is exactly the behavior you are looking for. https://github.com/ClusterLabs/resource-agents/commit/518b65f62804c987c6aecea23b5203a6e2760e36
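Conceptually, the per-LV approach looks something like this (a rough sketch, not the actual code from that commit):

    # Sketch only: activate LVs one at a time, so a non-redundant LV with a
    # missing device fails its own activation (and the start) instead of the
    # whole VG being rewritten. The real change additionally distinguishes
    # RAID LVs from 'mirror' LVs when attempting repair.
    for lv in $(lvs --noheadings -o lv_name $OCF_RESKEY_vg_name); do
        if ! lvchange -ay $OCF_RESKEY_vg_name/$lv; then
            ocf_log err "Failed to activate $OCF_RESKEY_vg_name/$lv"
            return $OCF_ERR_GENERIC
        fi
    done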

The 'CLVM' method (i.e. 'vg_start_clustered') should change to either:

  1. attempt repair one LV at a time, as above, or
  2. use 'vgreduce --removemissing --force --mirrorsonly <VG>', which will only repair redundant LVs and will leave non-redundant LVs alone, failing to activate the service (rather than removing PVs under non-redundant LVs) if LVs cannot be activated (see the sketch below).

Both of these options will give you the behavior you are looking for and are more correct.
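A rough sketch of what option 2 might look like in the clustered start path (hypothetical, not a patch):

    # Only mirrored LVs are repaired; non-redundant LVs and the PVs under them
    # are left alone, so activation simply fails if their devices are missing.
    if ! vgreduce --removemissing --force --mirrorsonly $OCF_RESKEY_vg_name; then
        ocf_log err "Failed to repair mirrored LVs in $OCF_RESKEY_vg_name"
        return $OCF_ERR_GENERIC
    fi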

jbrassow avatar Apr 26 '13 15:04 jbrassow

Thanks Jon - I'll test that commit as soon as I get the chance. Thanks for your time on it!

phedders avatar Apr 26 '13 16:04 phedders

@phedders all good I presume?

nrwahl2 avatar Jul 14 '20 08:07 nrwahl2