WIP txHandler: do not rebroadcast to peers that sent duplicate messages
Summary
When transaction deduplication was added, it had a side effect of re-sending transactions to peers that had already sent the handler the same message. This PR fixes that.
- Before dedup: if A sent T, all but A get T; then, if B sent us T, no one gets T again.
- With dedup: if A sent T, all but A get T; then, if B sent us T, no one gets T again.
- With this PR: if A and B sent T almost simultaneously, all but A and B get T.
Implementation overview (a sketch of the flow follows the list):
- `txSaltedCache` now stores a map of seen peers for a particular message.
- `CheckAndPut` returns this map in order to avoid additional lookups.
- The map is later used in `RelayArray` to let networking know whom to skip.
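A minimal sketch of that flow under assumed types: `txBacklogMsg` is a name from the PR, but the `peer` type, the plain-map cache, and the `handleIncoming`/`relay` helpers below are hypothetical stand-ins for `txSaltedCache.CheckAndPut` and `RelayArray`, not the actual go-algorand code.

```go
package main

import (
	"fmt"
	"sync"
)

// Hypothetical stand-ins: the real go-algorand types (network.Peer,
// txBacklogMsg, txSaltedCache) differ.
type peer string

type txBacklogMsg struct {
	msg     []byte
	senders *sync.Map // shared with the cache; grows as more peers send msg
}

// handleIncoming sketches the receive side: always record the sender,
// but only enqueue a work item the first time the message is seen.
func handleIncoming(cache map[string]*sync.Map, msg []byte, from peer, backlog chan<- *txBacklogMsg) {
	senders, seen := cache[string(msg)]
	if !seen {
		senders = &sync.Map{}
		cache[string(msg)] = senders
	}
	senders.Store(from, struct{}{})
	if seen {
		return // duplicate: sender recorded, nothing re-enqueued
	}
	backlog <- &txBacklogMsg{msg: msg, senders: senders}
}

// relay sketches the send side: the senders map becomes the skip list
// handed to networking (RelayArray in the PR).
func relay(wi *txBacklogMsg) {
	wi.senders.Range(func(k, _ any) bool {
		fmt.Println("skip peer:", k)
		return true
	})
}

func main() {
	cache := map[string]*sync.Map{}
	backlog := make(chan *txBacklogMsg, 1)
	handleIncoming(cache, []byte("txn T"), "A", backlog)
	handleIncoming(cache, []byte("txn T"), "B", backlog) // near-simultaneous dup from B
	relay(<-backlog)                                     // skips both A and B
}
```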
Implementation Details
`CheckAndPut` is now more complex because it needs to update the map value even if there is a match.
The original idea of a fast check under a read lock is preserved, but `innerCheck` also returns the current value and the "page" (the `cur`/`prev` map) where it was found, so the caller knows what to update. Note that `innerCheck` can return the `prev` page, and this also needs to be handled when running an update with the write lock taken.
The `found && senderFound` condition denotes a new fast path that does not modify the underlying maps.
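Sketched below is one way this double-checked locking could look, assuming a simple two-page map of digests to `*sync.Map`. The names `innerCheck`, `CheckAndPut`, `cur`, and `prev` come from the PR; every body here is illustrative, not the actual go-algorand implementation (salting and page rotation are omitted).

```go
package main

import (
	"fmt"
	"sync"
)

type digest [32]byte

// Two-page cache: cur holds the current generation and prev the
// previous one (the salt rotation that swaps pages is omitted).
type txSaltedCache struct {
	mu   sync.RWMutex
	cur  map[digest]*sync.Map
	prev map[digest]*sync.Map
}

// innerCheck runs with the lock held (read or write). It returns the
// senders map, the page it was found in (so a writer knows which map
// to touch), and whether the digest and this sender were already seen.
func (c *txSaltedCache) innerCheck(d digest, sender any) (senders *sync.Map, page map[digest]*sync.Map, found, senderFound bool) {
	if senders, found = c.cur[d]; found {
		page = c.cur
	} else if senders, found = c.prev[d]; found {
		page = c.prev
	} else {
		return nil, nil, false, false
	}
	_, senderFound = senders.Load(sender)
	return senders, page, found, senderFound
}

// CheckAndPut returns the shared senders map for d and whether d was a
// duplicate; found && senderFound is the fast path with no writes.
func (c *txSaltedCache) CheckAndPut(d digest, sender any) (*sync.Map, bool) {
	c.mu.RLock()
	senders, _, found, senderFound := c.innerCheck(d, sender)
	c.mu.RUnlock()
	if found && senderFound {
		return senders, true
	}

	c.mu.Lock()
	defer c.mu.Unlock()
	// Re-check: another goroutine may have inserted d, possibly into
	// prev, between the RUnlock and the Lock above.
	senders, page, found, _ := c.innerCheck(d, sender)
	if !found {
		senders = &sync.Map{}
		page = c.cur // new entries always land on the current page
		page[d] = senders
	}
	// senders is a reference, so recording the sender works the same
	// whether the entry lives in cur or prev.
	senders.Store(sender, struct{}{})
	return senders, found
}

func main() {
	c := &txSaltedCache{cur: map[digest]*sync.Map{}, prev: map[digest]*sync.Map{}}
	var d digest
	_, dup := c.CheckAndPut(d, "N1")
	fmt.Println("first insert, dup:", dup) // false
	_, dup = c.CheckAndPut(d, "N1")
	fmt.Println("repeat sender, dup:", dup) // true: fast path
}
```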
Note that a reference data structure (`*sync.Map` in this case) is crucial for the implementation to work, because of the following scenario (a runnable sketch follows the list):
- Received txn A from N1, wrote it to the cache; the cache value (peers list) is attached to a work item `txBacklogMsg` (`wi`) for this transaction.
- Received txn A from N2, updated the peers list. The reference in `wi` is updated automatically.
- It is time to relay: give the network the same peers list.
- Received txn A from N3, updated the cache value (peers list).
- The network now has the updated peers list including N3.
- After broadcasting, received txn A from N4 and updated the cache value (peers list), but because it is a duplicate it will not make its way to the network.
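A minimal runnable sketch of that sharing, with a plain map standing in for the cache and the N1..N3 peers from the scenario; everything here is illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	cache := map[string]*sync.Map{}

	// Received txn A from N1: create the shared peers list, put it in
	// the cache, and attach the same reference to the work item (wi).
	senders := &sync.Map{}
	senders.Store("N1", struct{}{})
	cache["txnA"] = senders
	wi := struct{ senders *sync.Map }{senders: senders}

	// Received txn A from N2 and N3 before relaying: updating the cache
	// value updates the map wi points to, with no extra lookup or copy.
	cache["txnA"].Store("N2", struct{}{})
	cache["txnA"].Store("N3", struct{}{})

	// At relay time the network skips every recorded sender, N3 included.
	wi.senders.Range(func(k, _ any) bool {
		fmt.Println("skip peer:", k)
		return true
	})
}
```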
Test Plan
Added a new unit test
Benchmarks
master

```
BenchmarkDigestCaches/data.digestCacheMaker/threads=1-8      1000000    1575 ns/op
BenchmarkDigestCaches/data.saltedCacheMaker/threads=1-8       769404    1886 ns/op
BenchmarkDigestCaches/data.digestCacheMaker/threads=4-8      1347638    992.8 ns/op
BenchmarkDigestCaches/data.saltedCacheMaker/threads=4-8       777824    2042 ns/op
BenchmarkDigestCaches/data.digestCacheMaker/threads=16-8     1248975    984.3 ns/op
BenchmarkDigestCaches/data.saltedCacheMaker/threads=16-8      709402    2234 ns/op
BenchmarkDigestCaches/data.digestCacheMaker/threads=128-8    1233063    996.1 ns/op
BenchmarkDigestCaches/data.saltedCacheMaker/threads=128-8     619723    2289 ns/op
```

feature

```
BenchmarkDigestCaches/data.digestCacheMaker/threads=1-8       978464    1460 ns/op
BenchmarkDigestCaches/data.saltedCacheMaker/threads=1-8       575578    2303 ns/op
BenchmarkDigestCaches/data.saltedCacheDupMaker/threads=1-8    550164    2238 ns/op
BenchmarkDigestCaches/data.digestCacheMaker/threads=4-8      1395024    877.9 ns/op
BenchmarkDigestCaches/data.saltedCacheMaker/threads=4-8       499202    2773 ns/op
BenchmarkDigestCaches/data.saltedCacheDupMaker/threads=4-8    651159    1883 ns/op
BenchmarkDigestCaches/data.digestCacheMaker/threads=16-8     1308243    948.8 ns/op
BenchmarkDigestCaches/data.saltedCacheMaker/threads=16-8      430381    3130 ns/op
BenchmarkDigestCaches/data.saltedCacheDupMaker/threads=16-8   954093    1338 ns/op
BenchmarkDigestCaches/data.digestCacheMaker/threads=128-8    1305469    952.8 ns/op
BenchmarkDigestCaches/data.saltedCacheMaker/threads=128-8     443257    3026 ns/op
BenchmarkDigestCaches/data.saltedCacheDupMaker/threads=128-8 1756513    686.5 ns/op
```

The new saltedCacheDupMaker case (which appears to hit the duplicate fast path) scales well with concurrency (686.5 ns/op at 128 threads), while the non-duplicate saltedCacheMaker path pays a moderate overhead versus master (3026 vs 2289 ns/op at 128 threads) for the extra senders bookkeeping.