[BUG] - High CPU/RAM/IO usage for cardano-node
Internal/External: External
Area: Performance of running cardano-node
Summary: High CPU, RAM, and disk usage.
Steps to reproduce:
- Run a Cardano node locally.
- Observe high CPU, RAM, and disk usage.
Expected behavior:
- Lower CPU usage: right now it looks like it uses 2 full cores; it should use around a third of that to be viable on a lower-end setup.
- Lower RAM usage: can it be done in under 8 GB?
- Lower disk usage.
System info:
lshw -short
Class      Description
=====================================
system     PowerEdge T20 (PowerEdge T20)
bus        0VD5HY
memory     64KiB BIOS
processor  Intel(R) Xeon(R) CPU E3-1225 v3 @ 3.20GHz
memory     256KiB L1 cache
memory     1MiB L2 cache
memory     8MiB L3 cache
memory     16GiB System Memory
memory     4GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
memory     4GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
memory     4GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
memory     4GiB DIMM DDR3 Synchronous 1600 MHz (0.6 ns)
bridge     Xeon E3-1200 v3 Processor DRAM Controller
display    Xeon E3-1200 v3 Processor Integrated Graphics Controller
bus        8 Series/C220 Series Chipset Family USB xHCI
uname -a
Linux dev-kepler 6.1.0-7-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.20-1 (2023-03-19) x86_64 GNU/Linux
cardano-cli --version
cardano-cli 8.0.0 - linux-x86_64 - ghc-8.10
git rev 69a117b7be3db0f4ce6d9fc5cd4c16a2a409dcb8
cardano-node --version
cardano-node 8.0.0 - linux-x86_64 - ghc-8.10
git rev 69a117b7be3db0f4ce6d9fc5cd4c16a2a409dcb8
Additional context: The problem started around the Alonzo hard fork and led to shutting the node down for a while.
It looks like it's still syncing... I will close this if it turns out not to be an issue once the sync finishes.
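As an aside, sync progress can be checked with cardano-cli; a minimal sketch, assuming CARDANO_NODE_SOCKET_PATH points at the running node's socket (the path below is a placeholder), and noting that recent versions report a syncProgress percentage in the JSON output:

# Placeholder socket path; adjust to wherever your node writes its socket.
export CARDANO_NODE_SOCKET_PATH="$PWD/node.socket"
cardano-cli query tip --mainnet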
Check the release notes for the minimum system requirements.
Re: Expected behavior. I understand. Are there other alternatives? For example, the node being built in Rust?
The Minimum System Requirements are for production-level deployments. You can change quite a lot about how memory is allocated by adjusting the RTS parameters, and reduce RAM usage that way: for example, by setting the allocation area size used by the GC, enabling compacting collection, and disabling the delayed OS memory return.
This is not recommended for production-level deployments (you will miss slots), but for syncing the chain it's perfectly fine.
Take a look at https://downloads.haskell.org/ghc/latest/docs/users_guide/runtime_control.html for more information.
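As a rough sketch (the flag values and file paths below are illustrative assumptions, not recommended settings), RTS options can be passed on the cardano-node command line between +RTS and -RTS:

# Example only: file paths are placeholders for your own configuration files.
cardano-node run \
  --config mainnet-config.json \
  --topology mainnet-topology.json \
  --database-path db \
  --socket-path node.socket \
  +RTS -A16m -c --disable-delayed-os-memory-return -RTS

Here -A16m sets the allocation area size used by the GC, -c enables compacting collection for the oldest generation, and --disable-delayed-os-memory-return makes the runtime hand freed memory back to the OS promptly; tune the values for your hardware.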
Use preprod/preview if it is not for production; they take far fewer resources 🙂
Being able to set RTS options through the Docker image would actually be great! That would allow tuning the configuration for lower- or higher-tier hardware.
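A hedged sketch of what that could look like today by overriding the container's run arguments; the image name, tag, mount paths, and RTS values are assumptions, and whether a given image's entrypoint forwards the +RTS flags unchanged would need to be verified:

# Illustrative only: image tag, mounts, config paths, and RTS values are placeholders.
docker run \
  -v "$PWD/config:/config" \
  -v "$PWD/db:/data/db" \
  inputoutput/cardano-node:8.0.0 run \
  --config /config/mainnet-config.json \
  --topology /config/mainnet-topology.json \
  --database-path /data/db \
  --socket-path /data/node.socket \
  +RTS -A16m -c -RTS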