Dotmim.Sync
Syncing Large Amounts of Data Causes SQL Transaction Log to Grow Very Large
When syncing a scope with batching enabled, all changes are applied in a single transaction. If there is a large amount of data to sync, that transaction runs for a long time while the changes are applied, and the transaction log grows very large. If applying fails and the transaction is rolled back, the rollback could take hours for millions of records, making the database unusable; it could even go into recovery mode. Sync Framework handled this with the 'ApplicationTransactionSize' setting, creating and committing a transaction per batch when applying changes.
Do you see a workaround or a solution for this?
What happens if we remove transactions when applying changes? Your database could become inconsistent and desynchronized. How would you handle this scenario?
This is a discussion we have currently here #696 if you want to participate
As far as I remember, MS Sync Framework did not provide those kinds of guarantees. It just sent changes in batches, and every batch was committed separately. Maybe this could be added as an option?
But what would happen
- if batch 01 of 02 succeeds
- and batch 02 fails?
The client would send both batches again, right? When applying batch 01 again, would this be detected as a "sync conflict" by DMS?
In the latest version you can opt into the SyncOptions.TransactionMode setting:
public enum TransactionMode
{
    /// <summary>
    /// Default mode for transactions when applying changes
    /// </summary>
    AllOrNothing,

    /// <summary>
    /// Each batch file will have its own transaction
    /// </summary>
    PerBatch,

    /// <summary>
    /// No transaction during applying changes. Very risky
    /// </summary>
    None
}
If you feel confident, you can change it to PerBatch or None.
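To illustrate, here is a minimal sketch of opting into PerBatch. The connection strings and table names are placeholders, and the exact SyncAgent overloads vary between DMS versions, so treat this as an outline of the setup rather than a definitive snippet:

```csharp
using Dotmim.Sync;
using Dotmim.Sync.SqlServer;

// Placeholder connection strings -- adjust for your environment.
var serverProvider = new SqlSyncProvider("Data Source=.;Initial Catalog=Server;Integrated Security=true");
var clientProvider = new SqlSyncProvider("Data Source=.;Initial Catalog=Client;Integrated Security=true");

// Commit one transaction per batch file instead of a single transaction
// for the whole apply phase, so the transaction log stays small.
var options = new SyncOptions
{
    TransactionMode = TransactionMode.PerBatch
};

var agent = new SyncAgent(clientProvider, serverProvider, options);

// "Product" and "ProductCategory" are hypothetical table names.
var result = await agent.SynchronizeAsync(new[] { "ProductCategory", "Product" });
Console.WriteLine(result);
```

The trade-off is the one discussed above: with PerBatch, a failure mid-sync leaves earlier batches committed, so a retry must tolerate re-applying them.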
Works well with PerBatch. I was able to sync a 75 GB database and the transaction log only grew to 350 MB.