
Suggestions for saving disk space for your full node


It is becoming more and more difficult to run your own full node due to the ever increasing demand on system resources, especially if running several full nodes and maintaining and updating them almost daily is out of scope for you. Here I want to share our setup and tips for running a full node without delays or prune stops.

Hardware

Really important: if you do not have the proper hardware you will never be able to sync. You need at least:

Storage: 2 TB+ NVMe SSD, PCIe 3.0+, 8k IOPS, 250 MB/s
Memory: 64 GB+
Cores: 8+
Internet: 100/100 Mbps
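If you are not sure whether a disk meets these numbers, a quick random-read benchmark with a tool like fio gives a decent estimate. A minimal sketch (the test file path, size and runtime below are just examples, adjust them to your machine):

```
# 4k random-read benchmark on the disk that will hold chaindata
fio --name=chaindata-randread \
    --filename=/data/fio-test --size=4G \
    --rw=randread --bs=4k --direct=1 \
    --ioengine=libaio --iodepth=64 \
    --runtime=60 --time_based
```

The IOPS figure in the output should comfortably exceed the 8k mentioned above; remove the test file afterwards.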

OS + binaries

The BSC releases binaries for Linux, macOS and Windows. You can choose an OS of your liking; we prefer a Linux based OS, and for a guide on setting it up on Linux you can check: Run full node with geth

Independent of the OS you prefer, make sure that you are always running the latest (stable) version, currently v1.1.7.
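For reference, fetching and checking the Linux binary looks something like this (the URL and asset name below follow the usual layout of the BSC releases page; verify them against the actual release you download):

```
# download the Linux binary from the BSC releases page
wget https://github.com/binance-chain/bsc/releases/download/v1.1.7/geth_linux
chmod +x geth_linux
./geth_linux version   # should report 1.1.7
```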

Performance tuning

--diffsync

I assume most of you are aware of the diffsync protocol. It was rolled out with release v1.1.5 and improves syncing speed by 60%-70%. It can be enabled by adding --diffsync to the start command.
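For illustration, assuming the config.toml and datadir layout from the guide linked above, the flag simply goes on the start command (paths and cache size are examples):

```
# start geth with diffsync enabled
./geth_linux --config ./config.toml \
             --datadir ./node \
             --cache 8000 \
             --diffsync
```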

--datadir.ancient

At this moment geth/chaindata/ancient consumes almost 600 GB, which does not need to be on our precious SSD. If geth is running, stop it and move this dir to another (preferably slower, cheaper) disk. Once the ancient data is moved, start geth with your normal command and add --datadir.ancient DATA_DIR_ANCIENT.
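A sketch of the whole procedure, assuming example paths (/data/node on the SSD, /hdd/bsc-ancient on the slow disk):

```
# stop geth first, then move the ancient store off the SSD
mv /data/node/geth/chaindata/ancient /hdd/bsc-ancient

# restart with the new location
./geth_linux --config ./config.toml \
             --datadir /data/node \
             --datadir.ancient /hdd/bsc-ancient \
             --diffsync
```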

--txlookuplimit & debug.chaindbCompact()

txlookuplimit = number of recent blocks to maintain the transaction index for. I have seen a lot of start commands where txlookuplimit is set to 0, which keeps the transaction index for the entire chain. When I ask why, the main response I get is: well, that is what everyone seems to be using. Ask yourself: how often do I query for older blocks? If the answer is never (or close to it), consider setting this to a value that better matches your use case. For example, if only new transactions are relevant for you, consider setting it to 50,000: at BSC's roughly 3 second block time that keeps about the last two days of blocks indexed, and the index for older ones is removed.
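For example (paths as before, and 50,000 being the use-case dependent choice discussed above):

```
# keep the tx index only for the most recent 50,000 blocks (~2 days at ~3s/block)
./geth_linux --config ./config.toml \
             --datadir /data/node \
             --txlookuplimit 50000
```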

After setting txlookuplimit to some value, I recommend running the debug.chaindbCompact() command in the geth console (geth attach geth.ipc) to compact the chaindb. In our case we saved about 50 GB with this.
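Concretely (the ipc path depends on your datadir):

```
# attach to the running node
./geth_linux attach /data/node/geth.ipc

# then, at the console prompt:
> debug.chaindbCompact()   # blocks until compaction finishes, can take a while
```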

Ending note

At this moment (23-12-2021) our BSC node takes:

~ 1 TB mainnet
~ 600 GB ancient

and has been running stable for almost a month without stopping to prune.

Happy holidays!

panalyticsBsc avatar Dec 23 '21 07:12 panalyticsBsc

Brilliant insights and great suggestions. @panalyticsBsc Thanks a lot for sharing. It will benefit others who are also running full nodes.

forcodedancing avatar Dec 24 '21 02:12 forcodedancing