Be more careful with locking db.db_mtx
Lock db->db_mtx in some places that access db->db_data. But don't lock it in free_children, even though it does access db->db_data, because that leads to a recurse-on-non-recursive panic.
Lock db->db_rwlock in some places that access db->db.db_data's contents.
Closes #16626
Sponsored by: ConnectWise
Motivation and Context
Fixes occasional in-memory corruption which is usually manifested as a panic with a message like "blkptr XXX has invalid XXX" or "blkptr XXX has no valid DVAs". I suspect that some on-disk corruption bugs have been caused by this same root cause, too.
Description
Always lock `dmu_buf_impl_t.db_mtx` in places that access the value of `dmu_buf_impl_t.db.db_data`, and always lock `dmu_buf_impl_t.db_rwlock` in places that access the contents of the buffer that `db.db_data` points to.
Note that free_children still violates these rules. It can't easily be fixed without causing other problems. A proper fix is left for the future.
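To illustrate, the pattern being applied looks roughly like this at a hypothetical call site (the function, its name, and its arguments are made up for the example and are not part of the diff):

```c
#include <sys/dbuf.h>	/* dmu_buf_impl_t */
#include <sys/spa.h>	/* blkptr_t */

/*
 * Hypothetical call site showing the intended split: db_mtx protects
 * the value of the db.db_data pointer, db_rwlock protects the contents
 * it points to (here, block pointers inside an indirect block).
 */
static void
example_read_indirect_bp(dmu_buf_impl_t *db, int idx, blkptr_t *bp_out)
{
	blkptr_t *bps;

	mutex_enter(&db->db_mtx);
	bps = db->db.db_data;
	mutex_exit(&db->db_mtx);

	/*
	 * Using bps after dropping db_mtx relies on indirect buffers
	 * never being relocated in memory; see the discussion below.
	 */
	rw_enter(&db->db_rwlock, RW_READER);
	*bp_out = bps[idx];
	rw_exit(&db->db_rwlock);
}
```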
How Has This Been Tested?
I cannot reproduce the bug on command, so I had to rely on statistics to validate the patch.
- Since the beginning of 2025, servers running the vulnerable workload on FreeBSD 14.1 without this patch have crashed with a probability of 0.34% per server per day. The distribution of crashes fits a Poisson distribution, suggesting that each crash is random and independent. That is, a server that's already crashed once is no more likely to crash in the future than one which hasn't crashed yet.
- Servers running the vulnerable workload on FreeBSD 14.2 with this patch have accumulated a total of 1301 days of uptime with no crashes. So I conclude with 98.8% confidence that the 14.2 upgrade combined with the patch is effective.
- Servers running the vulnerable workload on FreeBSD 14.2 without the patch are too few to draw conclusions about. But I don't see any related changes in the diff between 14.1 and 14.2. So I think that the patch is responsible for the cessation of crashes, not the upgrade.
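As a rough sanity check of that 98.8% figure, treating each server-day as an independent trial with crash probability 0.34%:

$$
P(\text{0 crashes in 1301 server-days}) = (1 - 0.0034)^{1301} \approx e^{-0.0034 \times 1301} \approx e^{-4.42} \approx 0.012
$$

That is, if the underlying crash rate were unchanged, a run of 1301 crash-free server-days would be seen only about 1.2% of the time, which is where the roughly 98.8% confidence comes from.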
Types of changes
- [x] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Performance enhancement (non-breaking change which improves efficiency)
- [ ] Code cleanup (non-breaking change which makes code smaller or more readable)
- [ ] Quality assurance (non-breaking change which makes the code more robust against bugs)
- [ ] Breaking change (fix or feature that would cause existing functionality to change)
- [ ] Library ABI change (libzfs, libzfs_core, libnvpair, libuutil and libzfsbootenv)
- [ ] Documentation (a change to man pages or other documentation)
Checklist:
- [x] My code follows the OpenZFS code style requirements.
- [ ] I have updated the documentation accordingly.
- [x] I have read the contributing document.
- [ ] I have added tests to cover my changes.
- [x] I have run the ZFS Test Suite with this change applied.
- [x] All commit messages are properly formatted and contain `Signed-off-by`.
As I see it, in most cases (I've spotted only one that differs), when you are taking `db_rwlock` you also take `db_mtx`. It makes no sense to me, unless the only few exceptions are enormously expensive or otherwise don't allow `db_mtx` to be taken. I feel like we need a better understanding of the locking strategy. At least I do.
FWIW, as we're discussing here, I even think - after all the staring at the code - that the locking itself is actually fine; it seems to be the result of optimizations, exactly because things don't need to be over-locked when correctness is guaranteed via other logical dependencies.
I think I have actually nailed where the problem is, but @asomers says he can't try it :)
> As I see it, in most cases (I've spotted only one that differs), when you are taking `db_rwlock` you also take `db_mtx`. It makes no sense to me, unless the only few exceptions are enormously expensive or otherwise don't allow `db_mtx` to be taken. I feel like we need a better understanding of the locking strategy. At least I do.
That's because of this comment from @pcd1193182: "So the subtlety here is that the value of the db.db_data and db_buf fields are, I believe, still protected by the db_mtx plus the db_holds refcount. The contents of the buffers are protected by the db_rwlock." So many places need both db_mtx and db_rwlock. Some need only the former. I don't know of any cases where code would only need the latter.
I'm sorry, I mixed it up. This is definitely needed and then there's a bug with dbuf resize. Two different things.
@asomers Are you still awaiting reviewers on this? I've been running with the changes from this PR without any issues for a while now. It would be nice to get in all the "prevents corruption" PRs before 2.4.0.
Does this apply to 2.2.8 also?
Though I see your comments, @amotin, I still struggle to understand the right thing to do, generally, because the locking requirements aren't well documented, nor are they enforced either by the compiler or at runtime. Here are the different descriptions I've seen:
From dbuf.h:
> db.db_data, which is protected by db_mtx
> ...
> [db_rwlock] Protects db_buf's contents if they contain an indirect block or data block of the meta-dnode
And here's what @pcd1193182 said in https://github.com/openzfs/zfs/discussions/17118:
> The value of the `db.db_data` and `db_buf` fields are protected by `db_mtx` plus the `db_holds` refcount. The contents are protected by `db_rwlock`. `db_mtx` is also responsible for protecting some of the other parts of the dbuf state.
And later:
> dbufs have different states, and when they are in these different states, they can only be accessed in certain ways.
But I don't see any list of what the various states are, nor how to tell which state a dbuf is in.
@amotin added the following in that same discussion thread:
> db_rwlock protects the content of buffers that are parents (indirect or dnode) of some other buffer, when we need to either write or read the block pointer of the buffer, either directly or via de-referencing the db_blkptr pointer pointing inside it. All the parent buffers are permanently referenced, so they cannot be evicted, and they have only one copy, so their memory should never be reallocated, so db_mtx protection is not required in this case.
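If I understand that correctly, the access pattern being described is roughly the following (a hypothetical reader, not code from this PR, and ignoring the top-level case where there is no parent dbuf):

```c
/*
 * Hypothetical reader of a child dbuf's block pointer, which lives
 * inside the parent indirect buffer's db.db_data.  The parent is held,
 * so it cannot be evicted and its data is never reallocated; only its
 * db_rwlock is needed, to avoid a torn read while sync context is
 * rewriting the block pointer.
 */
static void
example_read_child_bp(dmu_buf_impl_t *child, blkptr_t *bp_out)
{
	dmu_buf_impl_t *parent = child->db_parent;

	rw_enter(&parent->db_rwlock, RW_READER);
	*bp_out = *child->db_blkptr;	/* points into parent->db.db_data */
	rw_exit(&parent->db_rwlock);
}
```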
And @amotin added some more detail in this PR:
- "If the db_dirtycnt below is zero (and it should be protected by db_mtx), then the buffer must be empty."
- "Indirects don't relocate."
- "meta-dnode dbufs are not relocatable"
- "db_rwlock didn't promise to protect [L0 blocks]"
I can't confidently make any changes here without a complete and accurate description of the locking requirements. What I need is:
- Complete and accurate documentation in dbuf.h
- A way to enforce those requirements at runtime. Perhaps a macro that asserts that a `db_buf` is locked, or else doesn't need to be locked based on other data in the `dmu_buf_impl`, and that can be called everywhere `db_buf` is accessed. And a similar macro for `db.db_data` (see the sketch after this list for the rough shape I have in mind).
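For concreteness, the sketch I'm picturing is something like this; the name and the exact conditions are placeholders rather than a concrete proposal:

```c
/*
 * Illustrative only.  Assert that either db_mtx is held, or the dbuf
 * is of a kind whose db.db_data is never relocated (indirect blocks
 * and meta-dnode dbufs, per the discussion above), so the pointer
 * cannot change underneath us.
 */
#define	DBUF_ASSERT_DATA_PTR_STABLE(db) do {			\
	ASSERT(MUTEX_HELD(&(db)->db_mtx) ||			\
	    (db)->db_level > 0 ||				\
	    (db)->db.db_object == DMU_META_DNODE_OBJECT);	\
} while (0)
```

Something like that could then be dropped in next to every `db.db_data` dereference, with a twin for `db_buf`.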
@amotin can you please help with that? At least with the first part?
@asomers Let me rephrase the key points:
- Indirects and L0 dnode dbufs are special in having only one data copy ever. They are always decompressed in memory, and if they need to be decrypted (only the bonus parts of dnode L0s can be encrypted; indirects are only signed), then it is done in place. That means they are never relocated in memory, so we don't need `db_mtx` to protect their `db.db_data`. And as long as we hold a reference on those dbufs, they cannot be evicted and so change their state. This removes most of the `db_mtx` acquisitions you've added.
- `db_rwlock` is designed to protect specifically indirects and L0 dnode blocks from torn writes when they are modified by sync context but read by anything else. `db_rwlock` is not intended to protect any user-data dbufs, which are modified only in open context; for those we have range locks, etc. This removes most of the `db_rwlock` acquisitions you've added.
My humble opinion: I think it is a reasonable request to:
- accurately document specifically what each lock is responsible for and in which states locking is required; enumerate the possible states which require different approaches.
- add debug assertions to make it clear which code paths already have the lock held.
- in places where locking is not needed due to single use, somehow poison the locks in debug mode so that any unexpected use crashes.
- in places where the object is not relocatable, add a macro which makes it clear that locking is not needed and which checks that the object is indeed not relocatable.
It is good to have optimizations, but it is not healthy that knowledge of the locking scheme is limited to a small group of people, with poor documentation and no way to examine the code for correctness.
@asomers Despite my comments on many of the changes here, IIRC there were some that could be useful. Do you plan to clean this up, document, etc, or I'll have to take it over?
> @asomers Despite my comments on many of the changes here, IIRC there were some that could be useful. Do you plan to clean this up, document, etc, or I'll have to take it over?
Yes. My approach is to create some assertion functions which check that either db_data is locked, or is in a state where it doesn't need to be. The WIP is at https://github.com/asomers/zfs/tree/db_data_elide, but it isn't ready for review yet; probably next week.
@amotin I've eliminated the lock acquisitions as you requested. Please review. Note that while I've run the ZFS test suite with this round of changes, I don't know whether they suffice to solve the original corruption bug. The only way to know that is to run the code in production. But I'd like your review before I try that, because it takes quite a bit of time and effort to get sufficient production time. Not to mention the risk of corrupting customer data again.
@asomers if you can rebase this on the latest commits in the master branch, that should resolve most of the CI build failures. While you're at it, please go ahead and squash the commits.
@behlendorf I've squashed and rebased. However, it's important to remember that I've never been able to reproduce this bug on demand; I've only seen it in production. And this version of the PR was never tested in production, so I can't guarantee that it actually fixes the original bug.
Yup understood.
That new panic must be the result of removing the db_dirtycnt check. Oddly, I can't reproduce the panic locally.