
Tuning graph-node to speed up subgraph synchronization

Jacob273 opened this issue 3 years ago • 2 comments

What is the expected behavior?

I'd like to speed up the subgraph synchronization process.

What is the current behavior?

At the moment I am syncing two subgraphs, and they're syncing slowly:

  • uniswapV3
  • uniswapV2

Both are syncing at around 50 blocks per minute, i.e. roughly 3,000 blocks per hour and ~72,000 blocks per day. (I measured this by querying the transaction table: I fetched the latest row and its block_number, waited 60 seconds, then fetched the latest row and its block_number again.)

I have approximately:

  • 4,703,996 blocks left to sync for uniswapV2 (ETA ~65 days?)
  • 2,671,345 blocks left to sync for uniswapV3 (ETA ~37 days?)
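For reference, the rate and ETA figures above follow from a bit of shell arithmetic. The two sampled block numbers below are illustrative placeholders, not measured values; substitute your own two samples taken 60 seconds apart:

```shell
# Back-of-the-envelope sync rate and ETA from two samples of the latest
# synced block_number, taken 60 seconds apart.
b1=15000000          # latest block_number at t = 0s   (illustrative)
b2=15000050          # latest block_number at t = 60s  (illustrative)
blocks_left=4703996  # blocks remaining (uniswapV2 figure from above)

rate_per_min=$(( b2 - b1 ))                   # blocks synced per minute
blocks_per_day=$(( rate_per_min * 60 * 24 ))  # extrapolated daily rate
eta_days=$(( blocks_left / blocks_per_day ))  # whole days remaining

echo "${rate_per_min} blocks/min, ~${blocks_per_day}/day, ETA ~${eta_days} days"
```

With the 50 blocks/min rate from this thread, that works out to ~72,000 blocks/day and an ETA of roughly 65 days for uniswapV2.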

I have a dedicated server running on SSDs with plenty of RAM available (~30 GB unused). I'm using Erigon as a local Ethereum node over JSON-RPC on port 8545, with graph-node:v0.25.2. Both Postgres and graph-node run on the same machine via docker-compose.

Is there any way to speed up the synchronization process?

So far I've tried increasing ETHEREUM_BLOCK_BATCH_SIZE from its default of 10 to 500, but that did not speed anything up. I've also seen that docker-compose sets GRAPH_LOG: info by default, which I assume can be switched off (at least until the subgraphs are fully synced)?

I've observed that graph-node consumes very little RAM, around 300 MB.

Jacob273 avatar Jul 21 '22 09:07 Jacob273

I've tried the following (I can't see any difference):

  GRAPH_LOG: error
  ETHEREUM_RPC_MAX_PARALLEL_REQUESTS: 256
  ETHEREUM_BLOCK_BATCH_SIZE: 500
  ETHEREUM_POLLING_INTERVAL: 100
  ETHEREUM_TRACE_STREAM_STEP_SIZE: 500
  GRAPH_ETHEREUM_TARGET_TRIGGERS_PER_BLOCK_RANGE: 1000
  GRAPH_ETHEREUM_MAX_BLOCK_RANGE_SIZE: 8000
  GRAPH_ETHEREUM_MAX_EVENT_ONLY_RANGE: 5000

Is there any article on speeding up the sync process so that the server is pushed to its limits?

Jacob273 avatar Jul 21 '22 11:07 Jacob273

Also interested

oliver-g-alexander avatar Aug 25 '22 04:08 oliver-g-alexander

Hi, have you found any way to speed up graph-node? I have the same problem as you. Thanks

wangqiang-h avatar Oct 01 '22 06:10 wangqiang-h

Also interested

rdonmez avatar Nov 17 '22 14:11 rdonmez

I haven't found any solution to speed up the process other than running more erigon/geth nodes and pointing each subgraph at its own node (e.g. uniswapV2 pointing to erigon1, uniswapV3 pointing to erigon2).
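A minimal sketch of that setup, assuming two Erigon endpoints: run one graph-node instance per provider and deploy each subgraph to its own instance. The hostnames, ports, credentials, and node IDs below are placeholders, not values from this thread:

```shell
# Hypothetical: two graph-node instances, each with its own Erigon RPC,
# so the two subgraphs don't compete for one provider's capacity.
# All URLs, ports, and IDs are placeholders; adjust to your deployment.

# Instance 1 -- deploy uniswapV2 here
graph-node \
  --node-id index_node_1 \
  --postgres-url postgresql://graph:graph@postgres:5432/graph-node \
  --ethereum-rpc mainnet:http://erigon1:8545 \
  --ipfs ipfs:5001 \
  --http-port 8000 --admin-port 8020

# Instance 2 -- deploy uniswapV3 here (separate provider, separate ports)
graph-node \
  --node-id index_node_2 \
  --postgres-url postgresql://graph:graph@postgres:5432/graph-node \
  --ethereum-rpc mainnet:http://erigon2:8545 \
  --ipfs ipfs:5001 \
  --http-port 8100 --admin-port 8120
```

The same idea works in docker-compose by defining two graph-node services whose `ethereum` environment variables point at different Erigon containers.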

Jacob273 avatar Nov 21 '22 10:11 Jacob273

hey @Jacob273 are you still using 0.25.2, or the more recent 0.28.2? (there are some enhancements in more recent versions)

azf20 avatar Nov 21 '22 11:11 azf20

Yes, I've seen several enhancements in the release notes, e.g. in 0.27:

  • Store writes are now carried out in parallel to the rest of the subgraph process
  • GRAPH_STORE_WRITE_QUEUE env variable.

but I couldn't observe any blocks/s improvement after updating from 0.25.2 to 0.27 (though that may be due to improper configuration).
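A quick way to check whether the parallel store writes are helping is to A/B-test the GRAPH_STORE_WRITE_QUEUE variable mentioned above between runs. My understanding of its semantics (an assumption, not confirmed in this thread) is that it bounds the number of queued write batches and that 0 disables queueing, i.e. falls back to synchronous writes:

```shell
# Assumption: GRAPH_STORE_WRITE_QUEUE bounds the store write queue added
# in v0.27, and 0 is assumed to disable it (synchronous writes).
# Toggle between runs and compare blocks/s.
export GRAPH_STORE_WRITE_QUEUE=0   # baseline: writes in-line with processing
# export GRAPH_STORE_WRITE_QUEUE=5 # queued writes, parallel to processing
```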

In our case some of the subgraphs are running on graph-node 0.25.2 and some on 0.27 (I don't really remember why some are still running on 0.25.2).

In our case, the best speeds we've reached were:

  • Sushiswap: [1.9-2.5] blocks per second.
  • UniswapV2: [1-1.5] blocks per second.
  • UniswapV3: [1.2-1.8] blocks per second.

Jacob273 avatar Nov 21 '22 12:11 Jacob273

Also interested

tw7613781 avatar Jan 14 '23 16:01 tw7613781