Implement Detective Mining (Pool Software Change, No Protocol Change Required)
We should encourage pools to adopt the "detective mining" strategy from Lee and Kim, which leverages information pools already leak in their job messages to collapse a selfish miner's private lead. Crucially, this is deployable at the pool or Stratum-proxy level with no consensus or protocol changes.
Paper: https://eprint.iacr.org/2019/486
Thanks to @AJ__1337 on X for highlighting the paper.
One-paragraph explainer
Selfish pools must distribute jobs that tell their workers which parent to mine on (e.g. Qubic uses pool.qubic.li). Those jobs reveal the pool's private tip via the previous block hash. A detective miner subscribes to those jobs, detects when the leaked prevhash does not match the public tip, and immediately mines a "counter" child block on top of the selfish pool's hidden parent. Publishing that child forces the selfish pool to reveal its private block or lose it, repeatedly cutting off the attacker's lead and eroding their advantage. The original paper models and simulates this dynamic and shows the attacker's extra revenue disappears as detective adoption grows.
Practically, if ~50% of the Monero hashrate (the largest pools) implement this, the selfish miner's break-even jumps to roughly 42% if tie-breaking favours the honest chain, ~37% under neutral tie-breaking, and ~32% if tie-breaking favours the attacker (see Figure 11 in the paper). By comparison, the baseline without detective miners is the classical Eyal–Sirer threshold: near 25% when tie-breaking favours the attacker, rising toward one third otherwise. Between 50% and 100% adoption across pools, selfish mining becomes unprofitable entirely across the tested splits (see Figures 14 and 15).
Why this is pool-side: Stratum job messages include a prevhash or equivalent parent reference that pools must share with miners. In Stratum v1 job payloads this field is explicit, and in Stratum v2 there is a dedicated SetNewPrevHash message. That is the hook detective mining exploits.
Actionable plan for pools and stratum proxies
Scope. All work is at the pool or proxy layer. No protocol or consensus changes are needed.
1) Instrumentation and detection
- [ ] Add a lightweight "sensor" client that subscribes to competing pools' job streams. Capture the current prevhash and job IDs.
- [ ] Continuously compare the observed prevhash with your node's public tip. If it diverges for more than T seconds, flag a likely private branch.
2) Counter-template construction
- [ ] When a leak is detected, construct a valid block template whose parent equals the leaked prevhash. Keep the template minimal if needed. For Bitcoin-style stacks this means a legal header, coinbase, and a conservative transaction set.
3) Hash allocation and switching
- [ ] Immediately redirect a configurable fraction of your pool's hashrate to the counter-template, even if that means the entire pool's hashrate uses this new template. Revert to public-tip templates when the competitor's jobs return to the public parent.
4) Submission and networking
- [ ] On a find, broadcast your child block immediately. The selfish pool must then publish its private parent to avoid losing it, collapsing its lead.
Why this helps
- Mechanism. Detective mining repeatedly cuts the attacker's private lead by forcing reveals whenever the attacker tries to extend a hidden branch.
- Practicality. It exploits fields pools already expose in job messages, and it works today with Stratum, which means it works with Monero.
- Economics. The paper's simulations show the attacker's excess revenue shrinks quickly as the detective share grows, and multiple selfish pools cannibalise one another even more once detective miners are active.
Pool Software Anti-Decoy Counter-Measures
Pool software should include some sneakiness to prevent their sensor probes from being detected. Here's a checklist:
Detection and corroboration
- Require quorum: do not act unless the same off-tip prevhash is observed by multiple independent sensors on different IP ranges and regions.
- Verify persistence: the off-tip parent must persist across multiple mining.notify messages or SV2 SetNewPrevHash updates for T seconds before diverting any hash.
- Share-acceptance check: each sensor submits a few shares on the leaked job and proceeds only if the pool accepts them, proving the job is live and not a read-only decoy.
- P2P sanity check: confirm the leaked parent is not yet known to your full node or the public tip.
Sensor hygiene
- Make sensors hard to fingerprint: rotate IPs and providers, vary user agents, and keep normal share cadence and difficulties so targeted fake jobs are harder to aim at. Stratum sessions are per-connection with per-connection state (extranonce1), so blend in.
- Use multiple ingress points: subscribe to several pool endpoints when available and compare job streams in real time.
Activation policy
- Two-stage diversion: on a valid signal, divert a tiny fraction H of hash first, auto-ramp only if corroboration continues during a grace window. Roll back automatically if any check fails. The detective-mining literature supports that even modest detective share reduces attacker advantage.
- Rate-limit triggers: bound how often you will divert within a rolling window and cap the maximum detective-hash allocation to limit worst-case decoy cost.
- Cross-sensor consistency: abort if the pool delivers inconsistent prevhashes across your own sensors in close time proximity, which is a strong indicator you are being singled out. Stratum allows different jobs per connection, so inconsistency is meaningful.
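The two-stage diversion and rate-limit policies above can be combined in one small controller. All parameter values here are illustrative, not tuned recommendations.

```python
from collections import deque

class DiversionPolicy:
    """Two-stage hash diversion: tiny initial fraction, ramp with continued
    corroboration, bounded trigger rate to cap worst-case decoy cost."""

    def __init__(self, initial=0.02, ramp=2.0, cap=0.25,
                 max_triggers=3, window=3600.0):
        self.initial, self.ramp, self.cap = initial, ramp, cap
        self.max_triggers, self.window = max_triggers, window
        self.triggers = deque()   # timestamps of past diversion starts
        self.fraction = 0.0       # current detective-hash fraction

    def trigger(self, now: float) -> float:
        """Start or escalate a diversion; returns the fraction to divert."""
        while self.triggers and now - self.triggers[0] > self.window:
            self.triggers.popleft()              # forget old triggers
        if self.fraction == 0.0:
            if len(self.triggers) >= self.max_triggers:
                return 0.0                       # rate-limited: refuse to divert
            self.triggers.append(now)
            self.fraction = self.initial         # stage 1: tiny fraction H
        else:
            self.fraction = min(self.fraction * self.ramp, self.cap)  # stage 2
        return self.fraction

    def rollback(self) -> None:
        """Any corroboration check failed: pull all hash back to the tip."""
        self.fraction = 0.0
```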
Telemetry and audits
- Log every incident: prevhash, job IDs, times seen, which sensors corroborated, share-acceptance results, diversion start/stop, and outcomes.
- Track false positives and orphan rate during trials, then tighten thresholds based on empirical results.
Can't a malicious pool (as we're discussing here) lie about having a private chain to cause other pools to divert hashpower? The counter seems to be 'the malicious pool is itself distributing work on an alt-chain', but I wouldn't be surprised if a pool could identify the connections another pool is listening via.
> Can't a malicious pool (as we're discussing here) lie about having a private chain to cause other pools to divert hashpower? The counter seems to be 'the malicious pool is itself distributing work on an alt-chain', but I wouldn't be surprised if a pool could identify the connections another pool is listening via.
Yes but that's easy to mitigate - a pool could operate its sensors from different IP addresses and providers, and only shift hashrate if a quorum of sensors sees the same prev_id. The pool software should also require the leaked parent to persist across time and messages, not a single notify or SetNewPrevHash, and then abort if the pool flips parents or sends mutually inconsistent jobs across your own connections, which is a strong sign you've been singled out. Also there's no need for the whole hashrate to flip to that block - even if single-digit percentages mine it, it forces the malicious pool to disclose.
Added an anti-decoy counter-measure checklist to the issue.
> The selfish pool must then publish its private parent to avoid losing it, collapsing its lead.
No they won't publish it. If a block at height N+1 is private and someone mines a child block at height N+2, it will not become the longest chain because of the missing N+1. It will just dangle in node's memory, or will be rejected.
> No they won't publish it. If a block at height N+1 is private and someone mines a child block at height N+2, it will not become the longest chain because of the missing N+1. It will just dangle in node's memory, or will be rejected.
You’re right that an N+2 child without its N+1 parent will not be accepted. That isn’t the claim. Detective mining doesn’t magic N+2 onto the main chain; it changes the attacker’s pay-off so they either reveal N+1 now or eat the loss.
Two outcomes once the detective child is broadcast:
- Attacker reveals. To realise any revenue from their private branch, the selfish pool must publish N+1 so the network can validate the detective's N+2. Publishing early collapses their private lead back toward parity and neuters the selfish-mining advantage. This is exactly the dynamic modelled in the detective-mining paper (see Fig. 2(c)–(d); the authors explicitly assume the rational response is to release the private blocks when a child appears).
- Attacker withholds. Then they forfeit N+1's reward and keep hashing on a branch that other pools can keep targeting with further detective children. Meanwhile, the public chain advances. Economically this is worse than revealing, which is why the paper's state machine and simulations show the attacker's extra revenue shrinking as detective share grows.
Network mechanics are straightforward: nodes that see a block whose parent is unknown don’t attach it; they request or await the missing parent and hold the child as an orphan until the parent arrives. That’s normal Bitcoin/Monero behaviour and precisely why the attacker must release the parent to get paid.
So the sentence should read more precisely: “Publishing the detective child forces a choice. If the selfish pool wants any revenue from its private fork, it must reveal the parent now, which collapses its lead; otherwise it burns its private blocks.” That is the incentive lever detective mining exploits, and it requires only pool-side changes, not protocol changes.
If an attacker withholds, and mines their own version of N+2, then what will happen? Detectives will mine on top of attacker's N+2, or their own N+2?
This is with known dark pools. How do we counter unknown pools?
> No they won't publish it. If a block at height N+1 is private and someone mines a child block at height N+2, it will not become the longest chain because of the missing N+1. It will just dangle in node's memory, or will be rejected.
Not sure if it's better here or as a new proposal, but to allow mining without having N+1, you could lengthen the block hashing structure to include x block hashes. For example, to prevent a selfish mine of 5 blocks, the block_hashing_blob fed into RandomX could be <block n-5><block n-4>...<block n-1><block_hashing_blob>.
The reason this needs to be in the block_hashing_blob is so that selfish pools must publish it to their miners. In this case I believe the miners for Qubic are mercenary: they are public and anyone can mine to their pool, seeing the chain.
If you change the way a block is hashed together, you could potentially even form a merkle tree, ideally with enough data to calculate the difficulty of each block.
So prev_block_hash becomes a node in a merkle tree, prev_block_mr = Hash(prev_prev_block_mr, other_blob_hash) and is recursive.
Then stratum blobs are a short merkle tree of prev_block_mrs with enough data to know the strength of the chain, even if they don't have the full blocks. The non-selfish pools can mine on the chain, even though they don't have it.
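The recursive commitment described above can be sketched in a few lines. SHA-256 is a stand-in hash (Monero's actual hashing differs), and all names here are illustrative.

```python
import hashlib

def H(*parts: bytes) -> bytes:
    """Stand-in hash function; a real design would use Monero's hashing."""
    h = hashlib.sha256()
    for p in parts:
        h.update(p)
    return h.digest()

def next_block_mr(prev_block_mr: bytes, other_blob_hash: bytes) -> bytes:
    """prev_block_mr = Hash(prev_prev_block_mr, other_blob_hash), recursively:
    each block's parent reference becomes a running commitment to the whole
    ancestor chain, so a stratum blob leaks enough structure to mine on a
    chain you don't fully have."""
    return H(prev_block_mr, other_blob_hash)

# Hypothetical genesis commitment rolled forward over three blocks' blob hashes:
mr = b"\x00" * 32
for blob in [b"blob1", b"blob2", b"blob3"]:
    mr = next_block_mr(mr, H(blob))
```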
> If an attacker withholds, and mines their own version of N+2, then what will happen? Detectives will mine on top of attacker's N+2, or their own N+2?
Detectives always target the attacker’s latest leaked tip. If the attacker withholds and privately finds their own N+2, your sensors will see the pool’s jobs switch to prevhash = that N+2, and you switch to mining N+3 on top of it.
The reason is:
- You cannot profitably extend your own N+2, because you do not have the attacker’s hidden N+1. Any N+3 you mine on your own N+2 will remain unprovable until the attacker reveals N+1.
- By targeting the attacker’s current private tip every time it advances, you keep creating a child that becomes valid the moment they reveal. To realise any reward on their private branch, they must publish the missing parent(s), which collapses their lead.
- If they still refuse to reveal, they are burning their own N+1 (and possibly N+2) rewards while you have only diverted a bounded fraction of hash for a bounded time.
So here's the sequence:
- Public at N. Attacker withholds N+1a.
- Detective mines and broadcasts N+2d on top of N+1a. It waits as an orphan until N+1a appears.
- Attacker withholds further and finds N+2a. Their jobs now reference N+2a.
- Detectives pivot to mine N+3d on top of N+2a.
- To get paid, the attacker must reveal N+1a (and N+2a if they have it). Once revealed, your N+2d/N+3d immediately contend, cutting their advantage. If they do not reveal, they forfeit those rewards.
> This is with known dark pools. How do we counter unknown pools?
You cannot preempt a pool you cannot see. Detective mining only triggers when a pool leaks its private tip in Stratum jobs. But any pool that recruits external hash leaks by design, so the counter is to maximise sensor coverage. A perfectly closed, in-house farm can hide, but getting to ~34 percent that way is operationally very hard - AND we'd see the global/unknown hashrate increasing with nothing to attribute it to, which is a totally different threat to the one we've seen recently.
If another threat emerges that is trying to get 34% through bribery / merge-mining, it's easy to get "on the inside" with them and get the pool address, even if it's not publicly disclosed.
> If the attacker withholds and privately finds their own N+2, your sensors will see the pool’s jobs switch to prevhash = that N+2, and you switch to mining N+3 on top of it.
The sensors won't recognize N+2, they will just see some unknown prev_id in the block template. Only the miner who found the winning nonce will recognize N+2.
This is an important distinction because the "detective miners" need to make sure that the secret chain actually starts from the known chain tip, otherwise they might accidentally help a 51% attacker build a parallel chain. This is a problem because the Monero block template doesn't include the block height.
Even if you can somehow ensure you are advancing the chain, the detective N+2 block would probably need to be empty to avoid accidental double spends because the set of transactions included in N+1 is unknown.
> nodes that see a block whose parent is unknown don’t attach it; they request or await the missing parent and hold the child as an orphan until the parent arrives
A quick look at the source code suggests that monerod will currently reject blocks with an unknown parent. However, this could be changed.
https://github.com/monero-project/monero/blob/389e3ba1df4a6df4c8f9d116aa239d4c00f5bc78/src/cryptonote_core/blockchain.cpp#L2122-L2126
At a minimum it looks like a really good way to detect a potential selfish mining reorg in progress.
I wonder about the incentives of an attacker like Qubic, who might not be mining with profit in mind. (In their case, likely considering mining as a marketing expense for their token). But more generally, let's say that an attacker's goal is network disruption. They could simply withhold the parent indefinitely, regardless of profit, and cause chain delays. If they do this enough times over 720 blocks, it might even cause difficulty adjustment swings.
> If the attacker withholds and privately finds their own N+2, your sensors will see the pool’s jobs switch to prevhash = that N+2, and you switch to mining N+3 on top of it.
> The sensors won't recognize N+2, they will just see some unknown prev_id in the block template. Only the miner who found the winning nonce will recognize N+2.
> This is an important distinction because the "detective miners" need to make sure that the secret chain actually starts from the known chain tip, otherwise they might accidentally help a 51% attacker build a parallel chain. This is a problem because the Monero block template doesn't include the block height.
> Even if you can somehow ensure you are advancing the chain, the detective N+2 block would probably need to be empty to avoid accidental double spends because the set of transactions included in N+1 is unknown.
> nodes that see a block whose parent is unknown don’t attach it; they request or await the missing parent and hold the child as an orphan until the parent arrives
> A quick look at the source code suggests that monerod will currently reject blocks with an unknown parent. However, this could be changed.
> https://github.com/monero-project/monero/blob/389e3ba1df4a6df4c8f9d116aa239d4c00f5bc78/src/cryptonote_core/blockchain.cpp#L2122-L2126
Great points. Three clarifications and two concrete safeguards.
- Sensors do not need to "recognize N+2" by hash; they only need to see that a pool's job prevhash switches from the public tip to an unknown hash. That flip is the signal that a private child exists. In Stratum v2 the flip is explicit via SetNewPrevHash.
- We can avoid helping deep reorgs by enforcing "adjacency" - i.e. act only when you have continuity evidence that the leak is the immediate child of the public tip, not some far-back ancestor. I think the way to do this is through two simple rules:
- You observed the same pool issuing jobs on the public tip, then within a short window it switched to an unknown prevhash. Treat that unknown as N+1 of the current tip for that pool.
- Prefer job streams that include a height field and require height == public_height + 1 before acting. AFAIK most XMR pools include height in the job payload already?
If a pool does not expose height, fall back to the continuity rule above and require multiple corroborating sensors before diverting any hash. For extra safety, start with a tiny detective fraction and auto-backoff if the signal diverges.
- Unknown-parent children and Monero today: you're right that current monerod tends to reject unknown-parent blocks rather than caching them. That weakens the pure "pre-propagate the child" angle, but it does not break the incentive: the attacker still has to reveal their parent to realise revenue, and when they do you immediately submit your child. A separate improvement would be to accept and hold unknown-parent children as orphans for later attachment.
NOTE: we don't need to have such a thing running globally on the network, we could make that code change and pools could run it and peer with each other, ensuring that they get pre-propagated child blocks even if the network as a whole doesn't all have them.
Two other operational safeguards we can add:
- Something like an "adjacency gate" - only divert when we have continuity evidence from the same pool: jobs on the public tip followed immediately by jobs with an unknown prevhash, plus share-acceptance on that job, and quorum across multiple sensors. If the unknown prevhash appears without that sequence, we do not divert.
- Empty detective blocks by default - build the detective child with only the coinbase to avoid any accidental double spends, since the N+1 transaction set is unknown. Thanks to tail emission, the base reward is fixed at 0.6 XMR, so empty blocks are clean and predictable.
In terms of the net effect, even if the attacker keeps privately extending on top of our published child, they cannot capture that middle reward and must still reveal to get paid. With adjacency checks, quorum, and empty detective blocks, we avoid aiding a deep parallel chain while keeping the attacker's expected gain down and their time to reveal short.
If useful, we can also ask that xmrig etc. add a pool-side requirement to include height in job messages.
> At a minimum it looks like a really good way to detect a potential selfish mining reorg in progress.
> I wonder about the incentives of an attacker like Qubic, who might not be mining with profit in mind. (In their case, likely considering mining as a marketing expense for their token). But more generally, let's say that an attacker's goal is network disruption. They could simply withhold the parent indefinitely, regardless of profit, and cause chain delays. If they do this enough times over 720 blocks, it might even cause difficulty adjustment swings.
Totally agree on the first line: detective mining is an excellent early warning.
On a disruption-motivated attacker who withholds indefinitely:
- Detective mining is still useful because it caps the size of any eventual reorg. If the attacker ever reveals, our pre-mined children on their leaked tips attach immediately, so their reveal collapses to at most a 1-block lead instead of a multi-block jump. That reduces the blast radius of each attempt, even if the attacker is not profit-seeking. It also gives operators live telemetry to coordinate responses in real time (raise confs, re-peer, temporarily reallocate hash away from suspected pools).
- Detective mining can't force a reveal; if an attacker is willing to burn money to slow blocks, they can withhold forever. The effect is a temporary slowdown (as you highlighted) until difficulty readjusts, but to my mind this is no different from an attacker bringing hashrate on and off just to mess with the difficulty.
I think the bottom line here is that if the attacker is willing to light money on fire, they can slow blocks proportional to their private hash until difficulty catches up. Detective mining will not stop that, but it prevents those withheld blocks from being converted into profitable or deep reorgs, and it gives the ecosystem early, actionable signals to blunt the operational impact.
> This is an important distinction because the "detective miners" need to make sure that the secret chain actually starts from the known chain tip, otherwise they might accidentally help a 51% attacker build a parallel chain. This is a problem because the Monero block template doesn't include the block height.
I have to correct myself here. The block template indirectly commits to the height in the miner transaction, so accidentally creating a valid block at the wrong height is not possible. At most, the malicious pool could deceive the detective miners into wasting their hashrate.
I'm not sure if this approach would really help in the current situation.
If we take the recent multiblock reorgs - say 8 blocks for example. I'm not sure I get how the detective mining will help that much. My question is how would the outcome of that particular event have changed if detective mining had been in effect.
Assume that:
- The selfish miner ignores any detective mined blocks
- No reduction in hashrate on the public chain due to detective mining hashrate being redirected
- Detective mining hashrate is equal to the selfish miner
Assumption 2 is unrealistically optimistic, but we can then assume that the competition between the private and public chains turns out identically to the actual reorg which took place.
I believe assumption 3 is probably too optimistic for the detective miner hashrate but it's just there to have something to work with.
Outcome:
- The selfish miner and public chain get built on as before up until the moment the selfish miner tries to publish its own 8 blocks at once
- At most 7(?) detective mined blocks are produced in the lead up to publishing the 8 block reorg. These blocks do not build on top of each other
- Since the detective blocks do not build on top of each other, the earlier ones e.g. built on the first private block are almost certainly going to be rejected because they form an alternative chain extension of length 1 only.
- Only the later detective blocks have a realistic chance of being accepted
- End result is still an 8 block reorg, but the detective miners might now be the tip.
- The selfish miner loses 0.6 xmr from what they would have got.
This is a positive outcome, but in reality the detective mining will reduce the hashrate on the public chain, slowing it down and therefore the loss of the tip of the reorg (not a certainty) is partially, or fully compensated for.
Am I missing something about how alternative chains are measured against each other?
@AwfulCrawler detective mining helps before the reveal by cutting off private runs, and at reveal it interleaves detective blocks into the attacker's sequence, so long multi-block reorgs become much less likely and often much shallower.
What you might be missing:
- Chain selection is by cumulative difficulty, not pure block count. At reveal time, each height N+k can land either the attacker's private block or the detective's pre-mined child; whichever lands first and then gets extended contributes that height's weight to the winning branch. That interleaving breaks the attacker's consecutive run.
- Detective blocks are per-height tripwires, not a new chain. In other words, they do not stack on each other by design. They "booby trap" each private parent: when the attacker reveals N+1, the detective child for N+2 can attach immediately and deny the attacker that height. If detectives caught multiple private tips along the way, this can repeat at N+3, N+4, etc., fragmenting the reveal.
- The big effect is probabilistic run-length collapse. Every time the attacker extends privately, detectives are racing to find a child on that same hidden tip. That adds a new transition that keeps returning the system to parity, which is why the paper's model shows long private runs become rare and the attacker's edge shrinks quickly as detective share grows. In other words, the "8-block" scenario becomes far less likely to form in the first place.
- Today, monerod typically rejects unknown-parent children, so even absent any change to this, detectives would resubmit their N+2, N+3, etc. immediately when N+1 appears. You then get a race at each height. Even a few detective wins break the attacker's consecutive sequence and reduce the realised reorg depth. A daemon option to cache unknown-parent children would strengthen this further.
- Re: "attacker loses only 0.6 XMR" - if detectives win ANY heights, the attacker loses that subsidy and any fees for those heights. Empty detective blocks are a safety choice to avoid accidental double spends when N+1's tx set is unknown; they still deny the attacker that height's revenue. In future FMCP might allow some conservative tx inclusion, but I don't know that's even necessary.
About the three assumptions:
- Assumption 1, attacker ignoring detective blocks is fine to assume - detectives still deny heights at reveal.
- Assumption 2, no public-chain slowdown is conservative for the defender. In practice a pool diverts a small fraction first and ramps only with corroboration, keeping public production healthy.
- Assumption 3, detective hash equal to attacker is generous, but the paper shows you DO NOT need hashrate parity; even modest detective share materially cuts the attacker's edge and the odds of long runs.
Basically, under detective mining, an 8-block reveal attempt rarely assembles as 8 clean attacker blocks. Pre-mined detective children at multiple heights fragment the reveal and reduce the realised reorg, while the ongoing race during withholding makes long private runs statistically hard to build. That is exactly the mechanism quantified in the detective-mining paper, and would have totally rekt this "attack".
You're right I am not 100% on how the cumulative difficulty is used to select block candidates and this might be what made me confused.
I was going off the assumption that if e.g. the first detective block outscores the 2nd attacker block, then it wouldn't matter that much since there are 6 more attacker blocks to go adding cumulative difficulty to the attacker's proposed chain, whereas there is only the one detective block (with no blocks on top) to compete with that.
If, however, the 1st detective block outscoring the 2nd attacker block could cut it off, this becomes a much better situation.
I need to take a closer look at the paper and get a better idea of how the alternative chains compete before taking any more space up here, but your response gives me something to go off of, thanks.
In combination with this, when there's a tip race honest miners should choose to work on the tip with the best timestamp. The "detective tip" will have a good timestamp because it's revealed immediately, but the attacker's tip doesn't because he assigned the timestamp and solved the block before he knew when he would have to release it. This reduces the attacker's "gamma" below 0.5 (below neutral tie-breaking).
Releasing a bunch of blocks at once means he's using a "stubborn" selfish mining tactic (there are 3 types).
If you know there's an attack, there's another mitigation technique: copy the attack. This cancels out each other's gains. If both are at 33%, the remaining ~33% of "agnostic" miners who release blocks immediately come out the same as the attackers. I don't know if this is better than detective mining or not.
@fluffypony
> at reveal it interleaves detective blocks into the attacker's sequence
This is simply not true. The attacker mines only on top of their blocks, so all the detective blocks will be orphaned. Except the very latest one, at the tip.
> This is simply not true. The attacker mines only on top of their blocks, so all the detective blocks will be orphaned. Except the very latest one, at the tip.
Sorry I could have worded that better - you're right that the attacker never mines on detective blocks. "Interleaving" does not mean the attacker builds on our blocks. It means that at reveal time each contested height has 2 competing children - the attacker’s child and the detective’s child - and the network adopts whichever child for that height propagates first and then gets extended. Some heights resolve to A, others to D, so the final accepted sequence often alternates.
Just to make it as clear as possible:
- Before reveal, detectives mine a child on each leaked private parent. They target the attacker's current private tip, not their own chain.
- At reveal of N+1a, there are now two valid candidates for N+2: the attacker's N+2a and the detective's N+2d. Nodes will accept whichever they see first and then extend. Same at N+3, N+4, and so on if detectives have pre-mined those children of the attacker's leaked tips.
- The attacker "only mining on their blocks" does not orphan detective children by fiat. Orphaning is decided by arrival and extension at each height under cumulative difficulty. If comparable hash races those heights, some fall to A and some to D - that is the interleaving we mean.
- If detective hash is similar to the attacker's, the chance the attacker wins all k heights in a run is roughly 0.5^k. Long clean reveals become vanishingly likely. Even a few detective wins break the consecutive run and shrink realised reorg depth.
Put differently: detectives do not help the attacker. They pre-position rival children on the attacker's hidden parents so that, at reveal, some contested heights resolve to the defender rather than the attacker. That fragments the reveal and makes deep, clean reorgs much harder to land. The more hashrate is given to detective mining by pools, the faster these blocks happen, and the worse it is for the attacker.
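The 0.5^k argument above can be made concrete with a toy per-height race model. This is a deliberate simplification (independent races decided in proportion to hashrate), not the paper's full Markov-chain analysis.

```python
def clean_reveal_probability(attacker_share: float, detective_share: float,
                             k: int) -> float:
    """Chance the attacker wins ALL k contested heights at reveal, assuming
    each height is an independent race decided in proportion to the hashrate
    pointed at that height. Toy model only."""
    p = attacker_share / (attacker_share + detective_share)
    return p ** k

# With equal hash (e.g. 33% vs 33%), a clean 8-height reveal is a 0.5**8 event,
# i.e. well under 1% under this toy model.
p8 = clean_reveal_probability(0.33, 0.33, 8)
```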
Another technique is for honest nodes to ignore reorgs for $\sqrt{N}$ block times where N is the size of the reorg. This requires the reorg tip to prove it was on the >50% hashrate side of the network partition by ~1 standard deviation, or that it had that amount of luck. It delays repairs in an "honest" network partition and hurts a miner if the majority of the hashrate doesn't follow the rule. Of course a node that was off line doesn't follow the rule when it comes online because he already knows there was a "network partition" by being offline. Similarly, if nodes know there was a partition by there being a long delay between blocks, they ignore the rule according to some additional rule. Selfish mining works because nodes are switching tips without any good "proof" that the new tip has the Proof of Work lead (i.e. that it had the majority hashrate).
> some contested heights resolve to the defender rather than the attacker
I still don't quite agree. What I've seen in reality is that the attacker gets a comfortable lead over the other pools (2 blocks) and keeps mining as long as they have this lead. As soon as the lead shrinks to 1 block, they publish their chain. This is how they reached the 8-block reorg - pure luck when they got a 4-block lead in the beginning.
So when they release block N+8a, the network will see blocks N+8a, N+7a, ..., N+1a and will switch to them, because the network is only at N+7. It's impossible to have the interleave "N+8a, N+7d, N+6a, N+5d, ..." because N+8a points to N+7a, not N+7d, as its parent.
At reveal of N+1a, there are now two valid candidates for N+2: the attacker's N+2a and the detective's N+2d. Nodes will accept whichever they see first and then extend.
If nodes chose the tip with the most accurate timestamp, the attacker would lose (on average).
For the case of a 33% attacker and 33% detectives, the attacker's gamma drops from 0.25 (with detectives alone) to 0: he has a 33% chance of winning a tie race instead of 50%.
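The arithmetic behind those percentages, using the standard selfish-mining tie-race model where gamma is the fraction of non-attacker hash that extends the attacker's block during a tie:

```python
# Tie-race win probability in the standard selfish-mining model: the attacker
# wins if the next block is his own (alpha) or is found by the gamma share of
# the remaining hashrate that mines on his tip.

def tie_win_probability(alpha: float, gamma: float) -> float:
    return alpha + gamma * (1 - alpha)

alpha = 1 / 3
assert abs(tie_win_probability(alpha, 0.25) - 0.5) < 1e-9    # gamma = 0.25 -> 50%
assert abs(tie_win_probability(alpha, 0.0) - 1 / 3) < 1e-9   # gamma = 0 -> 33%
```

With alpha = 1/3 and gamma = 0.25, the attacker wins 1/3 + 0.25 * 2/3 = 50% of ties; driving gamma to 0 via the timestamp rule leaves him only his own 33%.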
Is there any scope for this:
Allow missing (or dummy) monero blocks temporarily, if the difficulty and number of transactions can be verified.
Basically, if the RandomX hash for a given nonce is verifiable, then nodes can add a dummy block consisting of only the hash, nonce, and prev_hash, and continue.
So if you know enough of the prev_header, non-selfish mining pools could build on top of this dummy block (or chain of dummy blocks), knowing they will be accepted, even if the selfish pool never publishes the block.
UTXOs can't be used for 10 blocks anyway, so the data isn't needed immediately. Then, if the data is still missing, it should be marked as missing in the n+10th block. At that point, any data in the block is lost, but the chain can continue.
I think if you could change the stratum hashing blob enough, you could force the selfish pool to reveal enough of this data to build a chain with a dummy block.
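A sketch of the dummy-block lifecycle this proposal implies. This is entirely hypothetical - monerod has no such state today, and the states and deadline here are just illustrations of the idea:

```python
# Hypothetical dummy-block lifecycle from the proposal above: a header-only
# block is accepted provisionally, then either filled with real data or
# permanently marked lost at height n+10. Not existing monerod behaviour.

DATA_DEADLINE = 10  # matches the 10-block output lock mentioned above

def dummy_block_status(dummy_height: int, tip_height: int,
                       data_arrived: bool) -> str:
    if data_arrived:
        return "filled"   # full block showed up, validate it normally
    if tip_height - dummy_height >= DATA_DEADLINE:
        return "lost"     # mark the data missing in block n+10, chain continues
    return "pending"      # still waiting for the withheld data

assert dummy_block_status(100, 105, data_arrived=False) == "pending"
assert dummy_block_status(100, 110, data_arrived=False) == "lost"
assert dummy_block_status(100, 102, data_arrived=True) == "filled"
```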
@zawy12 Thanks for the ideas. Some quick thoughts on it:
- Tie-breaking by timestamp during a tip race
- Pools can choose a local policy to prefer the block whose timestamp is closest to now when two tips are neck-and-neck. That can reduce the attacker’s gamma in practice because honest blocks generally propagate immediately, while withheld blocks often have weirder timestamps.
- Caveat: timestamps are miner-chosen within bounds. In Monero a block must be ≥ median of the last 60 and ≤ now + 2 hours, so an attacker can game toward the limit. Use timestamp only as a soft tie-breaker at pools, not as a consensus rule.
- Releasing many blocks at once = stubborn strategy
Correct - that matches the stubborn mining family generalised beyond the original Eyal-Sirer SM1 strategy. The literature shows stubborn variants can be profitable in some regions, which is why cutting private runs is valuable.
- "Copy the attack" to cancel gains
Two attackers do tend to cannibalise each other - see the Miner’s Dilemma - but this still degrades network health and honest revenue, and normalises withholding as a tactic. Detective mining is preferable because it uses leaked prevhash to collapse private leads without honest miners withholding their own blocks.
- Ignore reorgs for sqrt(N) block times
That is effectively a reorg-delay/finality rule. It would be a consensus-level behaviour change with liveness risks during genuine partitions and creates timing games with timestamps. Monero’s chain selection is cumulative difficulty, not timestamps or wait rules, so adopting this would be a much bigger change than pool-side detective mining.
Detective mining still helps in your scenarios:
- It adds a per-height tripwire: at each leaked parent there is a pre-mined competing child ready to race when the parent appears, which statistically breaks long clean runs that stubborn strategies rely on. That is exactly why selfish-miner edge shrinks as detective share grows in the models.
- All of this is deployable at pools or Stratum proxies - no protocol changes needed.
At the end of the day, timestamp-weighted tie-breaks can be a small, pool-local nudge; "copy the attack" hurts everyone; reorg-delay rules are heavy and risky. Detective mining directly targets the attacker's private lead with pool-only changes and plays well with faster propagation and conservative pool tie-breaking.
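To make the timestamp tie-breaker concrete, here is a pool-local policy sketch. This is an assumption-laden illustration, not monerod behaviour, and the dict fields are made up for the example:

```python
import time

# Pool-local soft tie-breaker: when two candidate tips arrive at the same
# height, prefer the one whose timestamp is closest to local wall-clock time.
# Withheld blocks tend to carry stale (or gamed) timestamps; honest blocks
# propagate within seconds. Field names here are illustrative.

def pick_tip(candidates, now=None):
    """Return the candidate block whose timestamp is nearest to 'now'."""
    now = time.time() if now is None else now
    return min(candidates, key=lambda blk: abs(blk["timestamp"] - now))

now = 1_700_000_000
honest = {"hash": "h...", "timestamp": now - 5}      # freshly propagated
withheld = {"hash": "a...", "timestamp": now - 600}  # stale withheld block
assert pick_tip([honest, withheld], now=now) is honest
```

Because timestamps are miner-chosen within Monero's bounds, this only works as a soft local preference, never as a consensus rule.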
I still don't quite agree. What I've seen in reality is that the attacker gets a comfortable lead over the other pools (2 blocks) and keeps mining as long as they have this lead. As soon as the lead shrinks to 1 block, they publish their chain. This is how they reached the 8-block reorg - pure luck when they got a 4-block lead in the beginning.
So when they release block N+8a, the network will see blocks N+8a, N+7a, ..., N+1a and will switch to them, because the network is only at N+7. It's impossible to have the interleave "N+8a, N+7d, N+6a, N+5d, ..." because N+8a points to N+7a, not N+7d, as its parent.
I think we are talking past each other on what "interleave" means.
- Interleaving does not require the attacker to build on detective blocks. It is about what each node accepts at each height when the attacker reveals.
- Reveal is not atomic. Even if the attacker dumps N+1a...N+8a rapidly, every node still processes a sequence: sees N+1a, then later sees some N+2 child, then later N+3, etc. There is always a gap between those arrivals across the network.
- Detectives use that gap. The moment N+1a appears, we resubmit N+2d. Some peers will see N+2d before N+2a and adopt it. On those peers, N+3a is then invalid until they later see N+2a and consider a reorg. Meanwhile honest hash on those peers can extend N+2d. Repeat at later heights if we have N+3d, N+4d, ...
- The attacker cannot force all peers to receive N+2a before N+2d. Gossip and validation are sequential and latency is heterogeneous. With comparable detective hash, the chance the attacker wins every contested height falls exponentially with run length. That is what I mean by "interleaving" outcomes at reveal.
- Your example "N+8a points to N+7a, not N+7d" is correct as a block header fact, but beside the point. Interleaving is not about what the attacker mined. It is about which child of each height lands first and then gets extended by enough hash to become the accepted history on that part of the network (i.e. cumulative PoW difficulty, which the attacker cannot sustain with their <51% hashrate).
Detective blocks are intentionally independent per height. They are tripwires that deny the attacker specific heights at reveal. Even a few wins break a long clean sequence and reduce realised reorg depth.
The attacker does not orphan detective blocks "by definition". At reveal there is a real race at each height between A's child and D's child. Some heights resolve to A, some to D, which fragments what would otherwise be a clean 8-block reveal.
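The per-height tripwire logic above can be sketched as a loop. `subscribe_jobs`, `mine_child`, and `submit_block` are hypothetical stand-ins for Stratum job monitoring and node RPC, not a real library API, and the job field names are assumptions:

```python
# Sketch of the detective tripwire: watch the suspect pool's jobs, pre-mine a
# rival child whenever the leaked prev_hash is not the public tip, and race
# that child the moment the hidden parent is revealed. Helper callables are
# hypothetical, not a real Stratum/monerod API.

def detective_loop(public_tip_hash, subscribe_jobs, mine_child, submit_block):
    premined = {}  # leaked hidden-parent hash -> our pre-mined rival child
    for job in subscribe_jobs():  # jobs from the suspect pool
        leaked_parent = job["prev_hash"]
        if leaked_parent != public_tip_hash and leaked_parent not in premined:
            # Pool is mining on a block the network has not seen:
            # pre-mine a rival child on top of the hidden parent.
            premined[leaked_parent] = mine_child(leaked_parent)
        revealed = job.get("revealed_parent")
        if revealed in premined:
            # The hidden parent just went public: race our child immediately.
            submit_block(premined.pop(revealed))

# Quick demonstration with fake helpers:
jobs = [{"prev_hash": "H1"}, {"prev_hash": "H1", "revealed_parent": "H1"}]
submitted = []
detective_loop("PUBLIC_TIP",
               subscribe_jobs=lambda: iter(jobs),
               mine_child=lambda parent: {"parent": parent, "miner": "detective"},
               submit_block=submitted.append)
assert submitted == [{"parent": "H1", "miner": "detective"}]
```

Each pre-mined child denies the attacker one specific height at reveal, which is all the fragmentation argument needs.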
Is there any scope for this:
Allow missing (or dummy) monero blocks temporarily, if the difficulty and number of transactions can be verified.
Interesting idea, but on Monero this would be a protocol change with some safety issues:
- PoW commits to more than prev_hash and nonce. The Monero block hash is computed over a "block hashing blob" that includes the block header, the Merkle root of the block's transactions, and the transaction count. You cannot verify or later reconcile PoW without the tx root and count.
- You cannot validate the coinbase without the full block. Monero checks the miner transaction against the emission schedule and the dynamic block weight penalty, which depends on the recent median block weight and the actual block weight and fees. That requires the full tx set.
- "UTXOs can't be used for 10 blocks anyway" is something that will likely go away with FCMP and some of the other protocol changes coming down the pipe.
- Accepting headers without data is a DoS and inflation risk. Invalid coinbases, oversized blocks, or bad txs could be locked in without anyone being able to verify or unwind deterministically. It would also partition the network between nodes that accept placeholders and those that do not.
- Changing Stratum does not help here. Stratum is a pool protocol. It cannot force consensus nodes to accept incomplete blocks, and an attacker can run a custom pool anyway.
- Current monerod behavior aligns with this: blocks whose parent is unknown or whose data is missing are rejected or treated as orphaned, not inserted as placeholders.
Admittedly, I think the last point should change, and that would be relatively easy to do.
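To make the first bullet concrete, here is a toy serialization showing why the tx set changes the PoW input even with the same prev_hash and nonce. The layout and hash are simplified stand-ins (Monero uses its own header serialization and RandomX, not SHA-256):

```python
import hashlib
import struct

# Toy illustration: the PoW input commits to the tx Merkle root and tx count,
# not just prev_hash + nonce. Layout and hash function are simplified
# stand-ins, not Monero's actual block hashing blob or RandomX.

def toy_hashing_blob(prev_hash: bytes, nonce: int,
                     tx_merkle_root: bytes, tx_count: int) -> bytes:
    return (prev_hash
            + struct.pack("<I", nonce)        # nonce
            + tx_merkle_root                  # commits to the whole tx set
            + struct.pack("<Q", tx_count))    # commits to how many txs

same_parent = b"\x11" * 32
blob_a = toy_hashing_blob(same_parent, 7, b"\xaa" * 32, 3)
blob_b = toy_hashing_blob(same_parent, 7, b"\xbb" * 32, 3)  # different tx set
# Same parent and nonce, but a different tx root gives a different PoW input:
assert hashlib.sha256(blob_a).digest() != hashlib.sha256(blob_b).digest()
```

This is why a dummy block of only hash + nonce + prev_hash cannot be checked against the PoW: the tx root and count are baked into the hashed blob.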
Changing Stratum does not help here. Stratum is a pool protocol. It cannot force consensus nodes to accept incomplete blocks, and an attacker can run a custom pool anyway.
What I meant here is to change the input (block_hashing_blob) into RandomX in such a way that all the parameters needed to construct a dummy block must be published by the malicious pool via Stratum, hence revealing enough to construct a valid chain
What I meant here is to change the input (block_hashing_blob) into RandomX in such a way that all the parameters needed to construct a dummy block must be published by the malicious pool via Stratum, hence revealing enough to construct a valid chain
I get what you're aiming for, but I don't think this can be done at the Stratum layer without changing consensus.
- RandomX's input is fixed by consensus to the "block hashing blob" which already includes more than just prev_hash and nonce. It is the serialised block header plus the Merkle root of the block's transactions and the transaction count. Pools can't redefine that via Stratum; if they did, the result wouldn't validate.
- Even if a pool publishes the Merkle root and tx count in jobs (they effectively do, because miners hash the blob), nodes still must see the actual transactions to verify the coinbase amount, block weight penalty, and fee accounting. You can't safely "continue on dummy headers" in Monero - full validation requires the tx set (otherwise it's a massive DoS risk, at the very least).
- Header-only acceptance would be a protocol change with serious risks: nodes could be led to extend an invalid chain (bad coinbase, oversized block, bad tx set) they cannot verify yet. That's why monerod rejects unknown-parent or incomplete blocks today rather than inserting placeholders.
- A malicious pool could also fabricate Merkle roots in job blobs to steer honest hash, then never reveal matching transactions. Stratum can't police that - only full block validation can.