# Arm NAS

Arm NAS configuration with ZFS.
Ansible playbook to configure my Arm NASes:
- HL15 with Ampere Altra
- Raspberry Pi 5 SATA NAS
## Hardware

### Primary NAS - 45Drives HL15

The current iteration of the HL15 I'm running contains the following hardware:
- (Motherboard) ASRock Rack ALTRAD8UD-1L2T (specs)
- (Case) 45Homelab HL15 + backplane + PSU
- (PSU) Corsair RM750e
- (RAM) 8x Samsung 16GB 1Rx4 ECC RDIMM M393A2K40DB3-CWE PC25600
- (NVMe) Kioxia XG8 2TB NVMe SSD
- (CPU) Ampere Altra Q32-17
- (SSDs) 4x Samsung 8TB 870 QVO 2.5" SATA
- (HDDs) 6x Seagate EXOS 20TB SATA HDD
- (HBA) Broadcom MegaRAID 9405W-16i
- (Cooler) Noctua NH-D9 AMP-4926 4U
- (Case Fans) 6x Noctua NF-A12x25 PWM
- (Fan Hub) Noctua NA-FH1 8 channel Fan Hub
Some of the above links are affiliate links. I have a series of videos showing how I put this system together:
- Part 1: How efficient can I build the 100% Arm NAS?
- Part 2: Silencing the 100% Arm NAS—while making it FASTER?
### Secondary NAS - Raspberry Pi 5 with SATA HAT

The current iteration of the Raspberry Pi 5 SATA NAS I'm running contains the following hardware:
- (SBC) Raspberry Pi 5
- (HAT) Radxa Penta SATA HAT for Pi 5
- (SSDs) Samsung 870 QVO 8TB SATA SSD
- (microSD) Kingston Industrial 16GB A1
- (Network) Plugable 2.5Gbps USB Ethernet Adapter
- (Power) TMEZON 12V 5A AC adapter
Some of the above links are affiliate links. I have a series of videos showing how I put this system together:
- Part 1: The ULTIMATE Raspberry Pi 5 NAS
- Part 2: Big NAS, Lil NAS
## Preparing the hardware

The HL15 should not require any special prep, besides having Ubuntu installed. The Raspberry Pi 5 is running Debian (Pi OS) and needs its PCIe connection enabled. To do that:

1. Edit the boot config:

   ```
   sudo nano /boot/firmware/config.txt
   ```

2. Add in the following config at the bottom and save the file:

   ```
   dtparam=pciex1
   dtparam=pciex1_gen=3
   ```

3. Reboot.

Confirm the SATA drives are recognized with `lsblk`.
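The config edit above can also be scripted idempotently, which is handy if you reflash the Pi often. A minimal sketch (the `append_once` helper is mine, not part of this repo; the file path is the standard Pi OS Bookworm location):

```shell
#!/bin/sh
# Append a line to a config file only if that exact line is not already
# present, so re-running the script never duplicates entries.
append_once() {
  grep -qxF "$2" "$1" || echo "$2" >> "$1"
}

# Run as root, e.g. `sudo sh enable-pcie.sh`:
# append_once /boot/firmware/config.txt 'dtparam=pciex1'
# append_once /boot/firmware/config.txt 'dtparam=pciex1_gen=3'
```

Because `grep -x` matches whole lines, running the script twice leaves the file unchanged the second time.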
## Running the playbook

Ensure you have Ansible installed, and can SSH into the NAS using `ssh user@nas-ip-or-address` without entering a password, then run:

```
ansible-playbook main.yml
```
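For reference, the playbook targets hosts from an Ansible inventory. A hypothetical `inventory.ini` might look like the following (hostnames are borrowed from elsewhere in this README and the group/user names are illustrative; the repo's actual inventory layout may differ):

```ini
[nas]
nas01.mmoffice.net
nas02.mmoffice.net

[nas:vars]
ansible_user=jgeerling
```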
## Accessing Samba Shares

After the playbook runs, you should be able to access Samba shares, for example the `hddpool/jupiter` share, by connecting to the server at the path:

```
smb://nas01.mmoffice.net/hddpool_jupiter
```
Until issue #2 is resolved, there is one manual step required to add a password for the `jgeerling` user (one time). Log into the server via SSH, run the following command, and enter a password when prompted:

```
sudo smbpasswd -a jgeerling
```

The same thing goes for the Pi, if you want to access its ZFS volume.
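As a rough sketch, the share definition the playbook templates into `/etc/samba/smb.conf` (or a file it includes) for the example share would look something like this; the exact option set is an assumption, not copied from this repo:

```
[hddpool_jupiter]
  path = /hddpool/jupiter
  browseable = yes
  read only = no
  valid users = jgeerling
```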
## Replication / Backups

Backups of the primary NAS (nas01) to the secondary NAS (nas02) are handled using Sanoid (and its included syncoid replication tool).

Sanoid is configured on nas01 to store a set of monthly, daily, and hourly snapshots. Syncoid is run via cron on nas02 to pull snapshots nightly.

Sanoid should prune snapshots on nas01, and Syncoid on nas02.
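A Sanoid policy matching that description might look like the following in `/etc/sanoid/sanoid.conf` on nas01; the retention counts and template name here are illustrative, not the values this playbook actually deploys:

```
# Dataset to snapshot, using a shared retention template
[hddpool/jupiter]
  use_template = production

# Illustrative retention: hourly/daily/monthly snapshots,
# created and pruned automatically by sanoid's timer
[template_production]
  hourly = 24
  daily = 30
  monthly = 3
  autosnap = yes
  autoprune = yes
```

On nas02, a nightly cron entry would then invoke `syncoid` to pull the dataset from nas01.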
You can check on snapshot health with:

- nas01:

  ```
  sudo sanoid --monitor-snapshots && zfs list -t snapshot
  ```

- nas02:

  ```
  zfs list -t snapshot
  ```
For example:

```
jgeerling@nas01:~$ sudo sanoid --monitor-snapshots
OK: all monitored datasets (hddpool/jupiter) have fresh snapshots
```
## Benchmarks

There's a disk benchmarking script included, which allows me to test various performance scenarios on the server.

You can run it by copying it to the server, making it executable, and running it with sudo:

```
chmod +x disk-benchmark.sh
sudo ./disk-benchmark.sh
```
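The included script isn't reproduced here, but a bare-bones sequential-write check in the same spirit can be done with `dd` (the `seq_write` function name and sizes are mine, not from `disk-benchmark.sh`):

```shell
#!/bin/sh
# Rough sequential-write probe: write N MiB of zeros with an fsync at the
# end, print dd's summary line (which includes throughput), then clean up.
# Not a substitute for fio or the repo's disk-benchmark.sh.
seq_write() {
  file="$1"
  size_mb="${2:-256}"
  dd if=/dev/zero of="$file" bs=1M count="$size_mb" conv=fsync 2>&1 | tail -n 1
  rm -f "$file"
}

# Example: seq_write /hddpool/jupiter/bench.tmp 256
```

Writing the test file onto the pool you want to measure (rather than the boot disk) is what makes the number meaningful.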
## License
GPLv3 or later
## Author
Jeff Geerling