Dropped block is returned when querying by block height
We've encountered some odd behaviour, likely a bug, on our Bitcoin testnet Blockbook node. At height 1983537, two conflicting blocks were mined:
- canonical: 000000000c12def651a4482b2ffad3d4daade8188d111b242ff154c5712659c1
- orphaned: 00000000003799c7775a1d2d5dbb3366b2b86a1f3a85c4f99a17ad69017a6e7f
Both are indexed and queryable on Blockbook by hash. However, when querying by height, our node returns the orphaned block.
Examples
GET /api/block-index/1983537
Expected:
{
  "blockHash": "000000000c12def651a4482b2ffad3d4daade8188d111b242ff154c5712659c1"
}
Actual:
{
  "blockHash": "00000000003799c7775a1d2d5dbb3366b2b86a1f3a85c4f99a17ad69017a6e7f"
}
GET /api/block/1983537
Expected:
{
  "page": 1,
  "totalPages": 1,
  "itemsOnPage": 1000,
  "hash": "000000000c12def651a4482b2ffad3d4daade8188d111b242ff154c5712659c1",
  "previousBlockHash": "000000000fc513146056b6737b16f0b10388514ac3ded5c4ceefb2bbc0c36f9b",
  "nextBlockHash": "00000000039fda82da39890884cface690d609752da741303bfbec498934854b",
  "height": 1983537,
  "confirmations": 51181,
  "size": 283,
  "time": 1622639189,
  "version": 536870912,
  "merkleRoot": "706e1823fe8b534f948ef884416b29a3354771f6b5abb1963fea573b50faf86a",
  "nonce": "4110462279",
  "bits": "1c0ffff0",
  "difficulty": "16",
  "txCount": 1,
  "txs": <omitted>
}
Actual:
{
  "page": 1,
  "totalPages": 1,
  "itemsOnPage": 1000,
  "hash": "00000000003799c7775a1d2d5dbb3366b2b86a1f3a85c4f99a17ad69017a6e7f",
  "previousBlockHash": "000000000fc513146056b6737b16f0b10388514ac3ded5c4ceefb2bbc0c36f9b",
  "nextBlockHash": "00000000039fda82da39890884cface690d609752da741303bfbec498934854b",
  "height": 1983537,
  "confirmations": -1,
  "size": 275,
  "time": 1622639190,
  "version": 536870912,
  "merkleRoot": "96d8bf858287799c4847912f14942e9b672ce7d3250632959a3386428ca33f00",
  "nonce": "1336915572",
  "bits": "1c0ffff0",
  "difficulty": "16",
  "txCount": 1,
  "txs": <omitted>
}
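Incidentally, the stale mapping is easy to detect programmatically: as in the "Actual" response above, the dropped block comes back with "confirmations": -1. A minimal sketch of such a check in Go follows; the base URL is a placeholder for our instance, and the field names are taken from the response above.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

type blockResp struct {
	Hash          string `json:"hash"`
	Height        int64  `json:"height"`
	Confirmations int64  `json:"confirmations"`
}

func main() {
	const base = "http://localhost:9130" // placeholder Blockbook URL
	resp, err := http.Get(fmt.Sprintf("%s/api/block/%d", base, 1983537))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var b blockResp
	if err := json.NewDecoder(resp.Body).Decode(&b); err != nil {
		panic(err)
	}

	// A block that the height index still resolves to, but that is no longer
	// on the best chain, is reported with confirmations == -1.
	if b.Confirmations < 0 {
		fmt.Printf("height %d resolves to a dropped block: %s\n", b.Height, b.Hash)
	} else {
		fmt.Printf("height %d resolves to %s (%d confirmations)\n", b.Height, b.Hash, b.Confirmations)
	}
}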
Could this be caused by a race condition where the first-seen block gets cached for the height and is never updated when it is orphaned?
Version: 0.3.4 / 1f6cddd / 2021-04-15T15:52:05+00:00
This issue is still present after upgrading to 0.3.5 / 491580b / 2021-07-19T20:31:34+00:00
It seems that Blockbook has indexed the orphaned block. In version 0.3.4 there was an error (though in a build earlier than yours) which in rare circumstances could exhibit such behavior. It is fixed in 0.3.5. I would recommend reindexing the db to fix the issue.
Is there any way to force it to refresh its index for that particular height? Everything else is working normally; the full chain and best block are all correct. The "previousBlockHash" and "nextBlockHash" of the child/parent blocks both point to the correct block, not the dropped one. This appears to affect only the logic that resolves height 1983537 to a hash.
Unfortunately not. However, a full reindex takes only a couple of hours; it is by far the easiest way to go.
Thanks for the responses. After resyncing, the issue appears to be resolved.
@martinboehm The issue has occurred again at Bitcoin testnet block 2077282. Our Blockbook instance thinks this is the hash at that height:
0000000000043c3b3efcd490f9381d0151bf292f81726d6d8b43675b30c215d2
when in reality it's
00000000001fe49633c4a0d50afc5414822aa59e398df64a4391e52c81dc6966
I could just resync, but this suggests there's a real issue with the latest version.
Version 0.3.5 / 491580b / 2021-07-19T20:31:34+00:00
Some relevant logs:
> zgrep -h -B2 -A 2 2077282 /opt/coins/data/bitcoin_testnet/backend/testnet3/debug.log*
2021-08-28T06:02:48Z UpdateTip: new best=000000000021f996fbda004650c17320f98a76d062fa59b138a53997cf655c8d height=2077280 version=0x20000000 log2_work=74.456698 tx=60855937 date='2021-08-28T06:02:47Z' progress=1.000000 cache=2.3MiB(15670txo)
2021-08-28T06:02:49Z UpdateTip: new best=00000000000a50e5eb5bde0742f8da80f35693c006f06a17eb54c96ecbb6a5c8 height=2077281 version=0x20000000 log2_work=74.456698 tx=60855938 date='2021-08-28T06:02:48Z' progress=1.000000 cache=2.3MiB(15671txo)
2021-08-28T06:02:49Z UpdateTip: new best=0000000000043c3b3efcd490f9381d0151bf292f81726d6d8b43675b30c215d2 height=2077282 version=0x20c00000 log2_work=74.456698 tx=60855939 date='2021-08-28T06:02:49Z' progress=1.000000 cache=2.3MiB(15672txo)
2021-08-28T06:02:50Z UpdateTip: new best=00000000000a50e5eb5bde0742f8da80f35693c006f06a17eb54c96ecbb6a5c8 height=2077281 version=0x20000000 log2_work=74.456698 tx=60855938 date='2021-08-28T06:02:48Z' progress=1.000000 cache=2.3MiB(15672txo)
2021-08-28T06:02:50Z UpdateTip: new best=00000000001fe49633c4a0d50afc5414822aa59e398df64a4391e52c81dc6966 height=2077282 version=0x20000000 log2_work=74.456698 tx=60855939 date='2021-08-28T06:02:48Z' progress=1.000000 cache=2.3MiB(15673txo)
2021-08-28T06:02:50Z UpdateTip: new best=000000000020ab82e72fe963f8406171ac752b0cc8da41ccbcee8cd06f0a4d7a height=2077283 version=0x20000000 log2_work=74.456698 tx=60855940 date='2021-08-28T06:02:49Z' progress=1.000000 cache=2.3MiB(15674txo)
2021-08-28T06:02:50Z UpdateTip: new best=0000000000225dfaba526b1d24b49083438bffe8fe60589b074e989adc8ecfcf height=2077284 version=0x20000000 log2_work=74.456698 tx=60855941 date='2021-08-28T06:02:50Z' progress=1.000000 cache=2.3MiB(15675txo)
> grep -h -e 0000000000043c3b3efcd490f9381d0151bf292f81726d6d8b43675b30c215d2 -e 00000000001fe49633c4a0d50afc5414822aa59e398df64a4391e52c81dc6966 /opt/coins/blockbook/bitcoin_testnet/logs/*
I0828 06:02:50.435923 20620 socketio.go:721] broadcasting new block hash 0000000000043c3b3efcd490f9381d0151bf292f81726d6d8b43675b30c215d2 to 0 channels
I0828 06:02:50.435949 20620 websocket.go:848] broadcasting new block 2077282 0000000000043c3b3efcd490f9381d0151bf292f81726d6d8b43675b30c215d2 to 3 channels
It appears that there was a reorg and Blockbook more or less responded correctly; it just fell short of updating the block height -> hash index to the correct hash.
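A cheap way to catch recurrences is to periodically cross-check Blockbook's height -> hash resolution against the backend node's getblockhash RPC and alert on a mismatch. A rough sketch in Go; the URLs, port and RPC credentials are placeholders, while the /api/block-index endpoint and getblockhash are the same calls shown above.

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// blockbookHash asks Blockbook which hash it has indexed for the given height.
func blockbookHash(height int64) (string, error) {
	resp, err := http.Get(fmt.Sprintf("http://localhost:9130/api/block-index/%d", height)) // placeholder URL
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		BlockHash string `json:"blockHash"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.BlockHash, nil
}

// bitcoindHash asks the backend node for the best-chain hash at the given height.
func bitcoindHash(height int64) (string, error) {
	body, _ := json.Marshal(map[string]interface{}{
		"jsonrpc": "1.0",
		"id":      "height-check",
		"method":  "getblockhash",
		"params":  []interface{}{height},
	})
	req, err := http.NewRequest("POST", "http://localhost:18332", bytes.NewReader(body)) // testnet RPC port
	if err != nil {
		return "", err
	}
	req.SetBasicAuth("rpcuser", "rpcpassword") // placeholder credentials
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	var out struct {
		Result string `json:"result"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Result, nil
}

func main() {
	const height = 2077282
	bb, err := blockbookHash(height)
	if err != nil {
		panic(err)
	}
	bd, err := bitcoindHash(height)
	if err != nil {
		panic(err)
	}
	if bb != bd {
		fmt.Printf("mismatch at height %d: blockbook=%s bitcoind=%s\n", height, bb, bd)
	}
}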