Transaction State Inconsistency After Blockchain Reorg - Need Chainhook Reorg Guidance
Summary
A transaction is showing inconsistent states across Hiro Explorer pages after an apparent blockchain reorganization. It was originally confirmed via chainhook at BTC block 909839, but has now been stuck in the mempool for 6+ hours with conflicting status displays.
Transaction Details
- TX ID: 9eb32e3178fef43989906f91980643528775f91d4173fa914250c8da082dbf04
- Original confirmation: BTC block 909839 (via chainhook)
- Current status: Mempool for 6+ hours
- Wallet page: Shows "Confirmed"
- Transaction page: Shows "In Mempool"
- Sender nonce: Last executed tx nonce is 4128 while this transaction remains in the mempool
Issue Description
After a blockchain reorg, this epoch-dependent transaction displays conflicting states:
- Wallet view indicates confirmation
- Transaction detail view shows in mempool
- Transaction remains stuck in mempool despite previous chainhook confirmation
The transaction involves epoch computation logic, and the epoch transitioned from 51 to 52 during the reorg, which may be preventing it from re-executing properly.
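For reference, this is roughly how I'm observing the discrepancy outside the Explorer UI. It's a minimal sketch, assuming the public Hiro Stacks Blockchain API endpoints `/extended/v1/tx/{tx_id}` and `/extended/v1/address/{principal}/nonces`; the sender principal below is a placeholder, not the real address.

```ts
// Cross-check the transaction and account views that (I assume) back the Explorer pages.
// Requires Node 18+ for global fetch; the sender principal is a placeholder.
const API_BASE = "https://api.hiro.so";
const TX_ID =
  "0x9eb32e3178fef43989906f91980643528775f91d4173fa914250c8da082dbf04";
const SENDER = "SP..."; // placeholder: the transaction's sender principal

async function crossCheck(): Promise<void> {
  // Transaction view (what the transaction detail page reflects):
  const tx = await (await fetch(`${API_BASE}/extended/v1/tx/${TX_ID}`)).json();
  console.log("tx_status:", tx.tx_status); // currently "pending" (mempool)

  // Account view (what the wallet page / nonce tracking reflects):
  const nonces = await (
    await fetch(`${API_BASE}/extended/v1/address/${SENDER}/nonces`)
  ).json();
  console.log("last_executed_tx_nonce:", nonces.last_executed_tx_nonce); // 4128 in this case
  console.log("last_mempool_tx_nonce:", nonces.last_mempool_tx_nonce);
}

crossCheck().catch(console.error);
```

If the Explorer pages are backed by these endpoints, the same inconsistency should be visible here, which is why I suspect stale state after the reorg rather than a purely cosmetic display issue.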
Environment
- Chain: Stacks blockchain
- Explorer: Hiro Systems Explorer
- Original block: 909839
Questions for Hiro Team
- Reorg Event Handling: What specific chainhook events should applications monitor to properly handle blockchain reorganizations?
- State Synchronization: When reorgs occur, what's the recommended approach for maintaining consistent transaction state between explorer views?
- Chainhook Configuration: Are there specific chainhook predicates or webhook configurations recommended for detecting and handling reorg events? (A rough sketch of what I have in mind follows this list.)
- Epoch-Dependent Transactions: How should applications handle transactions with epoch-dependent logic that get caught in reorgs when the epoch changes?
- Rollback Events: Does chainhook provide specific rollback events or notifications that applications should listen for to update their local state? (Also covered in the sketch below.)
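To make the Chainhook Configuration and Rollback Events questions concrete, here is a minimal sketch of the setup I imagine: a txid-scoped Stacks predicate plus a webhook handler that treats `rollback` entries in the delivered payload as the reorg signal. The predicate fields, payload shape, and the placeholder URL/token are based on my reading of the Chainhook docs and may not match the current schema exactly, so please correct anything that's off.

```ts
// Assumed Chainhook setup for watching this transaction.

// txid-scoped Stacks predicate delivering occurrences to a webhook (placeholder URL/token).
const predicate = {
  chain: "stacks",
  uuid: "watch-stuck-tx-1",
  name: "watch-stuck-tx",
  version: 1,
  networks: {
    mainnet: {
      if_this: {
        scope: "txid",
        equals:
          "0x9eb32e3178fef43989906f91980643528775f91d4173fa914250c8da082dbf04",
      },
      then_that: {
        http_post: {
          url: "https://example.com/chainhook/events", // placeholder
          authorization_header: "Bearer <token>", // placeholder
        },
      },
    },
  },
};

// Loose typing of a delivered payload: blocks to apply and blocks to roll back.
interface ChainhookPayload {
  apply: Array<{ block_identifier: { index: number; hash: string } }>;
  rollback: Array<{ block_identifier: { index: number; hash: string } }>;
}

// Webhook handler: is treating `rollback` as "un-confirm locally" and `apply`
// as "(re)confirm locally" the intended way to consume reorg notifications?
function handleChainhookEvent(payload: ChainhookPayload): void {
  for (const block of payload.rollback) {
    console.log(`rollback at height ${block.block_identifier.index}: reverting local tx state`);
    // e.g. mark the watched tx as pending again in the application database
  }
  for (const block of payload.apply) {
    console.log(`apply at height ${block.block_identifier.index}: confirming local tx state`);
    // e.g. mark the watched tx as confirmed at this block
  }
}
```

I would register a predicate like this with the chainhook CLI (or the equivalent service configuration) and point it at the handler above. If there's a more direct way to be notified when a previously confirmed transaction is reorged out, that's exactly the guidance I'm looking for.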
This inconsistency affects user experience and application reliability. Guidance on proper chainhook-based reorg handling would help prevent similar issues. Thank you in advance.
I've assumed it may have been a reorg, but I may be wrong.
@rafaelcr would you be able to help with this one please?
I've received other similar reports which indicate it could be something around the API cache. I'll move this over to the API repo so we can investigate.