atlasdb
[TTS] Update A
General
Before this PR: We were not updating A as sweep progressed.
After this PR: Updating A is wired in before sweep makes progress. A is only updated for timestamps that are in fact on transactions schema 4.
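A minimal sketch of the gating described above, assuming hypothetical `SchemaVersionLookup` and `AbandonedTransactionsSink` interfaces (illustrative names only, not the actual AtlasDB types):

```java
import java.util.Set;
import java.util.stream.Collectors;

// Illustrative only: before sweep makes progress, record as abandoned only those
// start timestamps that are on transactions schema 4. The interfaces below are
// hypothetical stand-ins for the real coordination/abandoned-transactions types.
final class AbandonedTransactionUpdater {
    private static final int TRANSACTIONS_SCHEMA_4 = 4;

    private final SchemaVersionLookup schemaVersions;
    private final AbandonedTransactionsSink abandonedStore;

    AbandonedTransactionUpdater(SchemaVersionLookup schemaVersions, AbandonedTransactionsSink abandonedStore) {
        this.schemaVersions = schemaVersions;
        this.abandonedStore = abandonedStore;
    }

    void recordAbandonedBeforeSweep(Set<Long> candidateStartTimestamps) {
        Set<Long> onSchema4 = candidateStartTimestamps.stream()
                .filter(ts -> schemaVersions.schemaVersionAt(ts) == TRANSACTIONS_SCHEMA_4)
                .collect(Collectors.toSet());
        if (!onSchema4.isEmpty()) {
            abandonedStore.addAbandonedTimestamps(onSchema4);
        }
    }

    interface SchemaVersionLookup {
        int schemaVersionAt(long startTimestamp);
    }

    interface AbandonedTransactionsSink {
        void addAbandonedTimestamps(Set<Long> startTimestamps);
    }
}
```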
Priority: First pass before end of week please
Concerns / possible downsides (what feedback would you like?): If we fail to abort A after building the batch of writes to be processed, are we guaranteed to abort those transactions in subsequent iterations?
Is documentation needed?: No
Compatibility
~Does this PR create any API breaks (e.g. at the Java or HTTP layers) - if so, do we have compatibility?:~
~Does this PR change the persisted format of any data - if so, do we have forward and backward compatibility?:~
~The code in this PR may be part of a blue-green deploy. Can upgrades from previous versions safely coexist? (Consider restarts of blue or green nodes.):~
~Does this PR rely on statements being true about other products at a deployment - if so, do we have correct product dependencies on these products (or other ways of verifying that these statements are true)?:~
~Does this PR need a schema migration?~
Testing and Correctness
~What, if any, assumptions are made about the current state of the world? If they change over time, how will we find out?:~
What was existing testing like? What have you done to improve it?: Added tests
~If this PR contains complex concurrent or asynchronous code, is it correct? The onus is on the PR writer to demonstrate this.:~
~If this PR involves acquiring locks or other shared resources, how do we ensure that these are always released?:~
Execution
How would I tell this PR works in production? (Metrics, logs, etc.): Pending
~Has the safety of all log arguments been decided correctly?:~
~Will this change significantly affect our spending on metrics or logs?:~
~How would I tell that this PR does not work in production? (monitors, etc.):~
If this PR does not work as expected, how do I fix that state? Would rollback be straightforward?: This will make sweep access the coordination service, but the state is cached and sweep runs as a background task, so I do not expect this to be expensive.
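To make the "state is cached" claim concrete, here is a minimal sketch of the kind of caching being relied on, using Guava's memoizing supplier; `readVersionFromCoordinationTable` and the 5-second expiry are assumptions for illustration, not the actual coordination API:

```java
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

import com.google.common.base.Suppliers;

// Sketch: sweep reads the transactions schema version through a memoised
// supplier, so repeated reads within the expiry window do not hit the
// coordination table each time.
final class CachedSchemaVersionSupplier {
    private final Supplier<Integer> cachedVersion;

    CachedSchemaVersionSupplier(Supplier<Integer> readVersionFromCoordinationTable) {
        this.cachedVersion = Suppliers.memoizeWithExpiration(
                readVersionFromCoordinationTable::get, 5, TimeUnit.SECONDS);
    }

    int currentSchemaVersion() {
        return cachedVersion.get();
    }
}
```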
~If the above plan is more complex than “recall and rollback”, please tag the support PoC here (if it is the end of the week, tag both the current and next PoC):~
Scale
Would this PR be expected to pose a risk at scale? Think of the shopping product at our largest stack.: Highly unlikely; we read the coordination table more often, but the state is cached.
Would this PR be expected to perform a large number of database calls, and/or expensive database calls (e.g., row range scans, concurrent CAS)?: I do not think so
Would this PR ever, with time and scale, become the wrong thing to do - and if so, how would we know that we need to do something differently?: Do not expect so
Development Process
Where should we start reviewing?:
CoordinationAwareKnownAbandonedTransactionsStore
If this PR is in excess of 500 lines excluding versions lock-files, why does it not make sense to split it?: It's not 👀
Please tag any other people who should be aware of this PR: @jeremyk-91 @sverma30 @raiju
Generate changelog in changelog/@unreleased
Type
- [ ] Feature
- [x] Improvement
- [ ] Fix
- [ ] Break
- [ ] Deprecation
- [ ] Manual task
- [ ] Migration
Description
Check the box to generate changelog(s)
- [x] Generate changelog entry
All right, we paired offline and decided not to throw, since that would kill sweep, but instead to log an error if we see transactions on greater schema versions. We will also be changing C to be in line with A.
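As a rough illustration of that decision (the names, the SLF4J logger, and the cutoff of schema 4 are assumptions, not the actual code), the check degrades to an error log rather than an exception so an unexpected schema version cannot kill the sweep background task:

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: seeing a transaction on a schema version greater than the ones
// this code knows how to handle logs an error instead of throwing, so sweep
// keeps running. MAX_HANDLED_SCHEMA = 4 mirrors the schema 4 mentioned above.
final class SchemaVersionCheck {
    private static final Logger log = LoggerFactory.getLogger(SchemaVersionCheck.class);
    private static final int MAX_HANDLED_SCHEMA = 4;

    static void verifySchemaVersion(long startTimestamp, int schemaVersion) {
        if (schemaVersion > MAX_HANDLED_SCHEMA) {
            // Throwing here would abort the sweep iteration entirely; log instead.
            log.error("Saw transaction at start timestamp {} on unexpected transactions schema version {}; "
                    + "skipping abandoned-transaction bookkeeping for it.", startTimestamp, schemaVersion);
        }
    }

    private SchemaVersionCheck() {}
}
```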
Released 0.733.0