bitcore-node syncing bitcoin full node to mongodb is too slow
my machine:
- CentOS 7
- RAM: 50G
- CPU: 24 core
- disk: 500G SSD
I have a bitcoin mainnet full node which is fully synced, but bitcore-node is too slow, only 11 blocks/min!!
How can I speed up the block sync?
{"message":"2020-06-29 19:14:10.715 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.68 blocks/min | Height: 401598","level":"info"}
{"message":"2020-06-29 19:14:12.293 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.71 blocks/min | Height: 401599","level":"info"}
{"message":"2020-06-29 19:14:38.298 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.56 blocks/min | Height: 401600","level":"info"}
{"message":"2020-06-29 19:14:43.165 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.56 blocks/min | Height: 401601","level":"info"}
{"message":"2020-06-29 19:14:48.810 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.56 blocks/min | Height: 401602","level":"info"}
{"message":"2020-06-29 19:15:06.249 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.48 blocks/min | Height: 401603","level":"info"}
{"message":"2020-06-29 19:15:11.651 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.48 blocks/min | Height: 401604","level":"info"}
{"message":"2020-06-29 19:15:14.942 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.49 blocks/min | Height: 401605","level":"info"}
{"message":"2020-06-29 19:15:17.406 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.51 blocks/min | Height: 401606","level":"info"}
{"message":"2020-06-29 19:15:33.349 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.44 blocks/min | Height: 401607","level":"info"}
{"message":"2020-06-29 19:15:41.428 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.42 blocks/min | Height: 401608","level":"info"}
{"message":"2020-06-29 19:15:45.890 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.43 blocks/min | Height: 401609","level":"info"}
{"message":"2020-06-29 19:15:48.461 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.44 blocks/min | Height: 401610","level":"info"}
{"message":"2020-06-29 19:15:55.118 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.43 blocks/min | Height: 401611","level":"info"}
{"message":"2020-06-29 19:15:57.633 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.45 blocks/min | Height: 401612","level":"info"}
{"message":"2020-06-29 19:16:04.887 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.44 blocks/min | Height: 401613","level":"info"}
{"message":"2020-06-29 19:16:12.530 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.42 blocks/min | Height: 401614","level":"info"}
{"message":"2020-06-29 19:16:18.598 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.42 blocks/min | Height: 401615","level":"info"}
{"message":"2020-06-29 19:16:31.430 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.37 blocks/min | Height: 401616","level":"info"}
{"message":"2020-06-29 19:16:35.384 GMT+8 | Syncing... | Chain: BTC | Network: mainnet | 11.38 blocks/min | Height: 401617","level":"info"}
Is the server you are running dedicated hardware or shared resources?
@youngqqcn your sync is VERY VERY GOOD!!! I'm stuck at block height 332000 with ~0.2 blocks/min (so 1 block every 5 minutes!!!). I've been syncing bitcore for a month now 🤕
my machine: Ubuntu 18.04, 20 GB RAM, 6-core CPU (always overloaded at 100%), 1400 GB disk
Yes, this syncing is definitely a problem! In the old setup, where the node was integrated with a txindex option, the sync only took about 14 hours. Now at best you can get 62 blocks/min, but that's burst... on average it's more like 48 blocks/min on enterprise hardware (32 cores, 256 GB RAM, RAID-Z2 SSD pool backend) with heavy tuning. Based on this you will need 11-12 days for it to sync, assuming it does not slow down further as there are more transactions/data to be processed.
Looking at both CPU and memory, they are not even close to maxed out, so that's not the bottleneck. The SSD pool is at an I/O load of only 13%, so no issues there. I would love to have a developer look at the performance if they switched from Mongo to Elasticsearch, or even try moving from singleton to bulk queries.
Or maybe just move back to the old way of life and ask the full node for the data... at least that performs a LOT better.
@BoGnY @youngqqcn I have noticed that the node sync can go quicker if you:
- add more trusted nodes, at least 20 (see the config sketch after this list)
- restart the node every few hours; over time the sync slows down
- tune Linux + Mongo
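For the trusted-nodes point, this is roughly what it looks like in bitcore.config.json (a minimal sketch; the hosts and ports are placeholders, and you would list as many peer entries as you want):

{
  "bitcoreNode": {
    "chains": {
      "BTC": {
        "mainnet": {
          "chainSource": "p2p",
          "trustedPeers": [
            { "host": "127.0.0.1", "port": 8333 },
            { "host": "10.0.0.2", "port": 8333 }
          ],
          "rpc": {
            "host": "127.0.0.1",
            "port": 8332,
            "username": "user",
            "password": "pass"
          }
        }
      }
    }
  }
}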
For me, I was able to get to block 317359 in less than 2 days, but looking at the total progress:
2372.63 blocks/min | Height: 50000
968.58 blocks/min | Height: 120000
373.09 blocks/min | Height: 170000
104.19 blocks/min | Height: 220000
56.08 blocks/min | Height: 270000
32.83 blocks/min | Height: 310000
@kruisdraad do you remember when (more or less) the integrated txindex option was removed from the node? I'm thinking of using an older version of bitcore to get the full sync...
@BoGnY No, but I do have the older version in production and am also considering keeping it in use; this version is not usable at all.
Have you experimented with changing the maxPoolSize? If you're finding your database write speed isn't being used fully, you may be able to adjust that config value higher.
"maxPoolSize": 20
Also, can you share the output of npm run benchmark from the bitcore-node directory?
I get this on my MacBook:
Resetting database
Generating blocks
Adding blocks
................................................................................
Benchmark for 80 (1 MB) blocks completed after 43.158 s
1.853654015478011 MB/s
0.539475 Seconds/Block
Resetting database
Generating blocks
Adding blocks
.....
Benchmark for 5 (32 MB) blocks completed after 85.76 s
1.8656716417910446 MB/s
17.152 Seconds/Block
Resetting database
Generating blocks
Adding blocks
.
Benchmark for 1 (64 MB) blocks completed after 22.24 s
2.877697841726619 MB/s
22.24 Seconds/Block
@micahriggan I have been experimenting a lot; I even pulled a high-end system with high specs to test and make sure it's not a hardware issue. And to be clear, I am using XFS as MongoDB recommends; your documentation does not point that out. Lots of tuning on the server and such, but not maxPoolSize. Looking at the documentation, this seems to be a client-side setting, and I have found 2 occurrences with a default of 50. I changed them to 250, but this did not change anything.
Mongo does not max out any hardware resource; I think it's just the poor sync implementation of the node. It seems to do everything one-by-one and put it directly into single queries, which is quite inefficient. Also, the state is kept INSIDE the database, which takes up a lot of I/O for nothing... this would be better off inside Redis or another memory-based system, or synced to disk once every X interval during sync.
Currently a VM with 6 vCPU, 32 GB RAM and SSD-backed storage at height 388xxx is only doing 15 blocks/min; the high-end system (no VM, pure hardware) is at 320xxx doing 60 blocks/min... which was about the same as, or just marginally better than, the VM when it was at that point. So it's not a matter of how much hardware you throw at this; it's just slow because of the way the node works.
Using an Elasticsearch cluster might improve I/O and CPU usage a little, thus reducing how much extra power you need compared to the txindex=1 version. But it will not solve the very slow operation of the node. Did a developer / BitPay use this in production? How long did it take to sync? Are there magical undocumented settings (most stuff is not documented at all) that might improve things?
@kruisdraad I've done a few experiments with in-memory sync, and flushing to the database after it reaches an update threshold. It does improve the speed a little bit. I never reached a satisfactory implementation, but if you'd like to give it a go, that'd be helpful.
https://github.com/bitpay/bitcore/blob/master/packages/bitcore-node/src/models/transaction.ts#L175 This bulk write is the core of saving the bitcoin data; you should see the mongo operations come through in batches, which are written to the database in bulk. If there's a more efficient way to give these updates to mongo, let me know and we can get that fixed.
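For anyone experimenting, here is a rough sketch of the batch-and-flush idea (illustrative only, not the actual bitcore code; it assumes the mongodb Node.js driver, and the op shapes and threshold are made up):

import { AnyBulkWriteOperation, Collection, Document } from 'mongodb';

// Accumulate write ops in memory and flush them in unordered bulk batches
// once a threshold is reached, instead of issuing one query per document.
class BulkBuffer {
  private ops: AnyBulkWriteOperation<Document>[] = [];

  constructor(private collection: Collection, private threshold = 5000) {}

  async push(op: AnyBulkWriteOperation<Document>) {
    this.ops.push(op);
    if (this.ops.length >= this.threshold) {
      await this.flush();
    }
  }

  async flush() {
    if (this.ops.length === 0) return;
    const batch = this.ops;
    this.ops = [];
    // ordered: false lets mongod apply the batch without serializing on order
    await this.collection.bulkWrite(batch, { ordered: false });
  }
}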
60 blocks/min would be similar to the benchmark my MacBook had, so I imagine your npm run benchmark would give similar numbers to mine.
The main reason for the slowdown at ~300k blocks is that the number of transactions per block is very low in earlier blocks. At around that height the blocks became more full, so blocks/min drops dramatically.
Bitpay uses this in production as the backend for the Bitpay/Copay wallet. I believe it took around 7 days for a full sync.
I imagine many of the indexes required for that use case are contributing to the sync being slower, so if you're not planning on running thousands of wallets, you can experiment with removing indexes unrelated to your use case, or even implementing a storage adapter for elasticsearch.
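If you try dropping indexes, you can inspect what exists first (mongo shell; the database and collection names here assume bitcore's defaults, and the index name is a placeholder):

use bitcore
db.coins.getIndexes()                        // list existing indexes
db.coins.dropIndex("<index_name_from_list>") // drop one your use case doesn't need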
Sync speed is something that we'd love to improve, so if you have some proposals let me know and we can do some experiments.
@micahriggan
The npm benchmark returned slightly better stats, but I don't expect your MacBook to be as powerful as this enterprise server.
Well, I would suggest an elastic backend; I don't think this is a big change, as it stores JSON documents too and the inner workings are quite similar. The upside is that you can add a 3-node cluster (if your box crashes, Mongo is stuck in recovery for a while), and it's probably easier to import/export if you wanted to share a database. You can also spread the disk I/O, since it supports multi-read/write, at a faster rate.
As I said, bigger hardware doesn't give you faster sync speed, so there is a bottleneck somewhere. Per the MongoDB documentation, I suggest you recommend using XFS for the MongoDB server, as this does help a little.
I have no idea how the current sync setup is programmed, nor am I a JS developer, so I don't think I have any useful proposals, but we could do a sparring session?
Also, there is a configuration for the RPC, but I don't see it being used; can you tell me what it's for?
Finally, maybe we could cooperate to pre-build most of the coins' DBs and provide a torrent as a basis, to save everyone a lot of time and effort?
@micahriggan this is the benchmark on my VPS (6 CPU cores, 20 GB RAM, 1400 GB disk):
Resetting database
Generating blocks
Adding blocks
................................................................................
Benchmark for 80 (1 MB) blocks completed after 96.128 s
0.8322237017310253 MB/s
1.2016 Seconds/Block
Resetting database
Generating blocks
Adding blocks
.....
Benchmark for 5 (32 MB) blocks completed after 171.916 s
0.9306870797366156 MB/s
34.3832 Seconds/Block
Resetting database
Generating blocks
Adding blocks
.
Benchmark for 1 (64 MB) blocks completed after 52.254 s
1.2247866192061851 MB/s
52.254 Seconds/Block
Tomorrow I will retry on a VPS with 10 CPU cores and 60 GB RAM.
@kruisdraad
My system looks like this, and I've got mongo running on the same host as bitcore-node:
Processor Name: Intel Core i9
Processor Speed: 2.9 GHz
Number of Processors: 1
Total Number of Cores: 6
L2 Cache (per Core): 256 KB
L3 Cache: 12 MB
Hyper-Threading Technology: Enabled
Memory: 32 GB
The RPC config is used to get the fees from bitcoind; that's the only thing, I believe. I like the torrent idea, we should do that. In the meantime, I can make an elastic version of bitcore-node, with the use case of insight/research.
@BoGnY what's the hard drive write-speed for your mongodb server?
Capacity: 1 TB (1,000,555,581,440 bytes)
Model: APPLE SSD AP1024M
Link Width: x4
Link Speed: 8.0 GT/s
I get about 2GB/s on my macbook ssd, and since they're on the same host there's no network overhead.
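(If you're unsure of your disk's raw write speed, a quick Linux check is something like the following; the path is illustrative, and it writes a 1 GB test file whose throughput dd reports at the end:)

dd if=/dev/zero of=/var/lib/mongo/ddtest bs=1M count=1024 oflag=direct && rm /var/lib/mongo/ddtest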
@micahriggan I have several Dell R620s with 24 cores, 192 GB RAM and 8x SSD, a Dell R720 with 48 cores, 256 GB RAM and 24x SSD, and a test box with an i5, 32 GB and 6x SSD.
All have basically the same sync speeds; it looks like most of the time is spent on Mongo trying to send sequential data, but it's not getting faster.
It's likely just a driver replacement with elastic.
My benchmark today on a new VPS (10 CPU cores, 60 GB RAM, 1600 GB disk) with a bitcoin full node and mongodb running:
with bitcore v8.2.0 (the version from a year ago):
Resetting database
Generating blocks
Adding blocks
................................................................................
Benchmark for 80 (1 MB) blocks completed after 83.449 s
0.9586693669187168 MB/s
1.0431124999999999 Seconds/Block
Resetting database
Generating blocks
Adding blocks
.....
Benchmark for 5 (32 MB) blocks completed after 160.633 s
0.9960593402351945 MB/s
32.1266 Seconds/Block
with bitcore v8.21.0 (latest on the master branch):
Resetting database
Generating blocks
Adding blocks
................................................................................
Benchmark for 80 (1 MB) blocks completed after 99.956 s
0.8003521549481771 MB/s
1.24945 Seconds/Block
Resetting database
Generating blocks
Adding blocks
.....
Benchmark for 5 (32 MB) blocks completed after 179.254 s
0.8925881709752642 MB/s
35.8508 Seconds/Block
Resetting database
Generating blocks
Adding blocks
.
Benchmark for 1 (64 MB) blocks completed after 45.503 s
1.4065006702854757 MB/s
45.503 Seconds/Block
Within a year, there has been a degradation of bitcore performance of about 15-20%.
@micahriggan I can't find the I/O speed for my SSD on the provider's site... but looking at the benchmarks it seems to be more or less half of yours, considering that I need double the time for each block. The main problem is that, after block 320k, at 5 minutes per block, completing the remaining 300k blocks (without further loss of performance) would take me about ~149 weeks, which is almost 3 YEARS!!!
How can you do a full sync in about 7 days?!?
@micahriggan looking at my mongo, I don't see anything happening differently when I play with maxPoolSize... but it's not really clear what you mean; where did you set this on your side? Also, when do you think you'll have an elastic drop-in ready? (going on holiday next week)
@BoGnY if you get 5 blocks/min at 320k then your hardware is the main issue. I am at 380k on my modified test system getting 30+ blocks/min... and on the server that is tuned and at 410k, still getting 11.37 blocks/min. If you are talking about your provider, keep in mind that SSD storage is always oversubscribed and shared by multiple people. You cannot compare this with dedicated hardware. Having a LOT of dedicated SSDs with good hardware RAID helps a little... but the software is rate-limited somewhere.
@kruisdraad Can you make the following change to the bitcoin p2p module, and then run in debug mode with:
npm run debug
diff --git a/packages/bitcore-node/src/modules/bitcoin/p2p.ts b/packages/bitcore-node/src/modules/bitcoin/p2p.ts
index e7ad4a574..a671a8cb4 100644
--- a/packages/bitcore-node/src/modules/bitcoin/p2p.ts
+++ b/packages/bitcore-node/src/modules/bitcoin/p2p.ts
@@ -1,4 +1,5 @@
import { EventEmitter } from 'events';
+import { LoggifyClass } from '../../decorators/Loggify';
import logger, { timestamp } from '../../logger';
import { BitcoinBlock, BitcoinBlockStorage, IBtcBlock } from '../../models/block';
import { StateStorage } from '../../models/state';
@@ -10,6 +11,7 @@ import { SpentHeightIndicators } from '../../types/Coin';
import { BitcoinBlockType, BitcoinHeaderObj, BitcoinTransaction } from '../../types/namespaces/Bitcoin';
import { wait } from '../../utils/wait';
+@LoggifyClass
export class BitcoinP2PWorker extends BaseP2PWorker<IBtcBlock> {
protected bitcoreLib: any;
protected bitcoreP2p: any;
After running that for a while, can you hit the performance endpoint http://localhost:3000/api/status/performance
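For example:

curl http://localhost:3000/api/status/performance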
{
"CoinModel::onConnect": {
"time": 2,
"count": 1,
"avg": 2,
"max": 2
},
"XrpTransactionModel::onConnect": {
"time": 0,
"count": 1,
"avg": 0,
"max": 0
},
"EthTransactionModel::onConnect": {
"time": 3,
"count": 1,
"avg": 3,
"max": 3
},
"EthBlockModel::onConnect": {
"time": 2,
"count": 1,
"avg": 2,
"max": 2
},
"XrpBlockModel::onConnect": {
"time": 1,
"count": 1,
"avg": 1,
"max": 1
},
"StorageService::start": {
"time": 5026,
"count": 1,
"avg": 5026,
"max": 5026
},
"EventService::start": {
"time": 16,
"count": 1,
"avg": 16,
"max": 16
},
"BitcoinP2PWorker::setupListeners": {
"time": 0,
"count": 2,
"avg": 0,
"max": 0
},
"EventService::wireup": {
"time": 21,
"count": 1,
"avg": 21,
"max": 21
},
"ApiService::start": {
"time": 1,
"count": 1,
"avg": 1,
"max": 1
},
"SocketService::start": {
"time": 17,
"count": 1,
"avg": 17,
"max": 17
},
"SocketService::wireup": {
"time": 1,
"count": 1,
"avg": 1,
"max": 1
},
"BitcoinP2PWorker::connect": {
"time": 136,
"count": 1,
"avg": 136,
"max": 136
},
"BitcoinP2PWorker::start": {
"time": 137,
"count": 1,
"avg": 137,
"max": 137
},
"InternalStateProvider::getLocalTip": {
"time": 4,
"count": 3,
"avg": 1.3333333333333333,
"max": 2
},
"InternalStateProvider::getLocatorHashes": {
"time": 56,
"count": 2,
"avg": 28,
"max": 54
},
"BitcoinP2PWorker::getHeaders": {
"time": 286,
"count": 2,
"avg": 143,
"max": 251
},
"BitcoinP2PWorker::isCachedInv": {
"time": 167,
"count": 3735,
"avg": 0.044712182061579654,
"max": 1
},
"BitcoinP2PWorker::cacheInv": {
"time": 100,
"count": 5725,
"avg": 0.017467248908296942,
"max": 1
},
"BitcoinP2PWorker::getBlock": {
"time": 126153,
"count": 2566,
"avg": 49.163289166017144,
"max": 1488
},
"BitcoinBlock::handleReorg": {
"time": 2742,
"count": 2566,
"avg": 1.06858924395947,
"max": 59
},
"BitcoinBlock::getBlockOp": {
"time": 2330,
"count": 2566,
"avg": 0.9080280592361653,
"max": 74
},
"TransactionModel::tagMintBatch": {
"time": 5269,
"count": 3149,
"avg": 1.6732295966973643,
"max": 60
},
"TransactionModel::streamMintOps": {
"time": 8502,
"count": 3149,
"avg": 2.699904731660845,
"max": 69
},
"TransactionModel::streamSpendOps": {
"time": 172,
"count": 3149,
"avg": 0.05462051444903144,
"max": 3
},
"TransactionModel::streamTxOps": {
"time": 4049,
"count": 3149,
"avg": 1.2858050174658622,
"max": 66
},
"TransactionModel::batchImport": {
"time": 24485,
"count": 3149,
"avg": 7.775484280724039,
"max": 93
},
"BitcoinBlock::processBlock": {
"time": 21858,
"count": 2566,
"avg": 8.518316445830086,
"max": 95
},
"BitcoinBlock::addBlock": {
"time": 24731,
"count": 2566,
"avg": 9.637957911145753,
"max": 97
},
"BitcoinP2PWorker::processBlock": {
"time": 24818,
"count": 2566,
"avg": 9.67186282151208,
"max": 97
},
"TransactionModel::pruneMempool": {
"time": 1970,
"count": 591,
"avg": 3.3333333333333335,
"max": 66
},
"BitcoinP2PWorker::processTransaction": {
"time": 11992,
"count": 583,
"avg": 20.569468267581474,
"max": 93
},
"TransactionModel::findAllRelatedOutputs": {
"time": 5,
"count": 1,
"avg": 5,
"max": 5
}
}
The above was from a BTC node I was hammering with RPC calls constantly.
The following is from a BTC node that is only servicing bitcore-node p2p requests, for syncing
Notice the difference for BitcoinP2PWorker::getBlock
We want to see the average for that section be very low.
{
"BitcoinP2PWorker::getHeaders": {
"time": 28,
"count": 2,
"avg": 14,
"max": 19
},
"BitcoinP2PWorker::isCachedInv": {
"time": 80,
"count": 2533,
"avg": 0.03158310303987367,
"max": 1
},
"BitcoinP2PWorker::cacheInv": {
"time": 58,
"count": 4896,
"avg": 0.01184640522875817,
"max": 1
},
"BitcoinP2PWorker::getBlock": {
"time": 2280,
"count": 2385,
"avg": 0.9559748427672956,
"max": 53
},
"BitcoinBlock::handleReorg": {
"time": 1391,
"count": 2385,
"avg": 0.5832285115303983,
"max": 6
},
"BitcoinBlock::getBlockOp": {
"time": 1336,
"count": 2385,
"avg": 0.560167714884696,
"max": 8
},
"TransactionModel::tagMintBatch": {
"time": 1900,
"count": 2459,
"avg": 0.7726718178121188,
"max": 60
},
"TransactionModel::streamMintOps": {
"time": 4360,
"count": 2459,
"avg": 1.7730784871899146,
"max": 67
},
"TransactionModel::streamSpendOps": {
"time": 118,
"count": 2459,
"avg": 0.04798698657991053,
"max": 5
},
"TransactionModel::streamTxOps": {
"time": 1890,
"count": 2459,
"avg": 0.7686051240341603,
"max": 8
},
"TransactionModel::batchImport": {
"time": 10549,
"count": 2459,
"avg": 4.289955266368443,
"max": 70
},
"BitcoinBlock::processBlock": {
"time": 15448,
"count": 2385,
"avg": 6.4771488469601675,
"max": 72
},
"BitcoinBlock::addBlock": {
"time": 16924,
"count": 2385,
"avg": 7.096016771488469,
"max": 73
},
"BitcoinP2PWorker::processBlock": {
"time": 17005,
"count": 2385,
"avg": 7.129979035639413,
"max": 73
},
"TransactionModel::pruneMempool": {
"time": 236,
"count": 119,
"avg": 1.9831932773109244,
"max": 8
},
"BitcoinP2PWorker::processTransaction": {
"time": 1248,
"count": 74,
"avg": 16.864864864864863,
"max": 37
}
}
@micahriggan I'm not running BTC but BTG, and I don't have this file.
There's still a p2p file somewhere, connecting to BTG. Basically you just need to apply the LoggifyClass decorator to whatever p2p class it has, so we can see the timing on getting the blocks from the node, and the timing on the database operations.
Yeah, I just added it and it stopped syncing entirely; the API call returned {}.
Did it have a compiler error or something? I wouldn't have expected that to happen.
No, but I'm unsure if this is related to the patch... I rolled it back and it's still not syncing, but mongo is doing a lot and so is nodejs... just no syncing messages.
Hmm, you may have pulled in a new index or something? If so, there'd be mongodb logs about an index building.
Yes, maybe. Perhaps we can chat somewhere else to prevent this thread from filling up?
I'll leave it running, but the way I started it I don't think the debug flag is set.
@micahriggan it required some more hacking in package.json, but I got it working:
"BitcoinP2PWorker::getBlock":{ "time":843, "count":73, "avg":11.547945205479452, "max":28 },
If I add a single peer from localhost, it's slower than when I add several nodes, but not every time :/ This localhost node is doing nothing, no ports mapped, so it's dedicated to bitcore... so why would it be slower? Looking at debug, it's collecting the same block from all the nodes... so adding more doesn't make any sense.
"BitcoinP2PWorker::getBlock":{ "time":5847, "count":23, "avg":254.2173913043478, "max":561 },
Also it takes ages at some startups before syncing begins, but I don't see any index messages in mongo.
If getting the blocks is a problem, why isn't the worker fetching them, say, 5 ahead so there is work ready when it's done? Or at least the next one?
I've done an experiment with adding a pre-fetch for the blocks. It's definitely helpful, although the bigger blocks take up a good bit of memory for chains like BCH. But I also didn't realize it could take 250ms; I usually see 3ms or something like that.
That seems like it's likely a major factor for the sync speed for you. Do you see any other large avg times on there?
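For reference, the shape of that prefetch experiment is roughly this (an illustrative sketch, not the actual patch; getBlock and processBlock stand in for bitcore's p2p fetch and database import, and the window depth is the kind of tunable discussed below):

// Keep up to `depth` block downloads in flight while blocks are still
// processed strictly in order.
type Hash = string;

async function syncWithPrefetch(
  hashes: Hash[],
  getBlock: (h: Hash) => Promise<Buffer>,      // assumed p2p fetch helper
  processBlock: (b: Buffer) => Promise<void>,  // assumed DB import helper
  depth = 5
) {
  const inFlight = new Map<Hash, Promise<Buffer>>();
  let next = 0;
  for (let i = 0; i < hashes.length; i++) {
    // top up the window so downloads overlap with processing
    while (next < hashes.length && next - i < depth) {
      const h = hashes[next++];
      inFlight.set(h, getBlock(h));
    }
    const block = await inFlight.get(hashes[i])!;
    inFlight.delete(hashes[i]);
    await processBlock(block);
  }
}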
> @BoGnY if you get 5 blocks/min at 320k then your hardware is the main issue. I am at 380k on my modified test system getting 30+ blocks/min... and on the server that is tuned and at 410k, still getting 11.37 blocks/min. If you are talking about your provider, keep in mind that SSD storage is always oversubscribed and shared by multiple people. You cannot compare this with dedicated hardware. Having a LOT of dedicated SSDs with good hardware RAID helps a little... but the software is rate-limited somewhere.
@kruisdraad I don't mean 5 blocks/min, but 1 block every 5 minutes... so 0.2 blocks/min. On my system the problem isn't SSD performance, but the mongodb instance, which uses 80-90% of the CPU. I upgraded the machine by adding 4 cores. It's just that I'm a student doing research; I can't afford hundreds of dollars every month for enterprise/dedicated hardware 😢 😭
@micahriggan I wanted to leave it running for a day to get better averages, but running it in debug mode makes it unstable, and it ends with Segmentation fault (core dumped) after a few hours. The full stats from yesterday:
{ "CoinModel::onConnect":{ "time":2, "count":1, "avg":2, "max":2 }, "StorageService::start":{ "time":5039, "count":1, "avg":5039, "max":5039 }, "EventService::start":{ "time":3, "count":1, "avg":3, "max":3 }, "BitcoinP2PWorker::setupListeners":{ "time":0, "count":1, "avg":0, "max":0 }, "EventService::wireup":{ "time":11, "count":1, "avg":11, "max":11 }, "ApiService::start":{ "time":0, "count":1, "avg":0, "max":0 }, "SocketService::start":{ "time":11, "count":1, "avg":11, "max":11 }, "SocketService::wireup":{ "time":0, "count":1, "avg":0, "max":0 }, "BitcoinP2PWorker::connect":{ "time":19, "count":1, "avg":19, "max":19 }, "BitcoinP2PWorker::start":{ "time":20, "count":1, "avg":20, "max":20 }, "InternalStateProvider::getLocalTip":{ "time":0, "count":2, "avg":0, "max":0 }, "InternalStateProvider::getLocatorHashes":{ "time":22, "count":1, "avg":22, "max":22 }, "BitcoinP2PWorker::getHeaders":{ "time":24, "count":1, "avg":24, "max":24 }, "BitcoinP2PWorker::isCachedInv":{ "time":1, "count":74, "avg":0.013513513513513514, "max":1 }, "BitcoinP2PWorker::cacheInv":{ "time":0, "count":72, "avg":0, "max":0 }, "BitcoinP2PWorker::getBlock":{ "time":803, "count":70, "avg":11.471428571428572, "max":49 }, "BitcoinBlock::handleReorg":{ "time":114, "count":70, "avg":1.6285714285714286, "max":25 }, "BitcoinBlock::getBlockOp":{ "time":1115, "count":70, "avg":15.928571428571429, "max":126 }, "TransactionModel::tagMintBatch":{ "time":68, "count":72, "avg":0.9444444444444444, "max":54 }, "TransactionModel::streamMintOps":{ "time":7601, "count":72, "avg":105.56944444444444, "max":464 }, "TransactionModel::streamSpendOps":{ "time":932, "count":72, "avg":12.944444444444445, "max":40 }, "TransactionModel::pruneMempool":{ "time":9, "count":70, "avg":0.12857142857142856, "max":6 }, "TransactionModel::streamTxOps":{ "time":5768, "count":71, "avg":81.2394366197183, "max":460 }, "TransactionModel::batchImport":{ "time":115909, "count":71, "avg":1632.5211267605634, "max":4834 }, "BitcoinBlock::processBlock":{ "time":116313, "count":69, "avg":1685.695652173913, "max":4867 }, "BitcoinBlock::addBlock":{ "time":116431, "count":69, "avg":1687.4057971014493, "max":4868 }, "BitcoinP2PWorker::processBlock":{ "time":116431, "count":69, "avg":1687.4057971014493, "max":4868 }, "BitcoinP2PWorker::processTransaction":{ "time":3, "count":2, "avg":1.5, "max":2 } }
The old explorer used 32-48 GB of RAM, so most people expect a lot of RAM usage. Currently the system performs the same with 8 GB even though I have allocated 32 GB in the production VM, so there is more than enough room to do prefetching. And you don't need to prefetch much; say 5 blocks ahead should be more than enough? Or make this a variable so people can tune it on their side (or disable it).
@BoGnY well, in most cases I can get hardware cheaper than VMs, which is not that hard; then again, having my own research infrastructure helps. As said, I (and @micahriggan as well) are thinking of building premade databases up to a recent point in time, which, with some tuning, would definitely help everyone.
@micahriggan any chance of getting this prefetch patch? :)
@micahriggan when you restart the node, in a lot of cases it doesn't start to sync, and this gets worse as the sync progresses and the database grows. I just restarted my test box (had to) and it doesn't want to sync. It's busy doing 'something' but not logging anything about it (even in debug). stracing it shows a lot of:
write(25, "\311\0\0\0i$\0\0\0\0\0\0\324\7\0\0\0\0\0\0bitcore.$cmd"..., 41) = 41
write(25, "\240\0\0\0\2find\0\7\0\0\0events\0\3filter\0001\0\0"..., 160) = 160
write(31, "\312\0\0\0j$\0\0\0\0\0\0\324\7\0\0\0\0\0\0bitcore.$cmd"..., 41) = 41
write(31, "\241\0\0\0\2find\0\7\0\0\0events\0\3filter\0002\0\0"..., 161) = 161
write(28, "\307\0\0\0k$\0\0\0\0\0\0\324\7\0\0\0\0\0\0bitcore.$cmd"..., 41) = 41
write(28, "\236\0\0\0\2find\0\7\0\0\0events\0\3filter\0/\0\0"..., 158) = 158
So it's doing something with events and filters? But why? And why is this so extremely slow?