Device file paths change between reboots
Hi,
I have 4 NVMe disks in my Ubuntu 24.04 server. After every reboot the NVMe disks get a different identifier, so /dev/nvme1n1p1 can become /dev/nvme0n1p1.
For the disk that hosts the / mount everything works fine, but the 3 data disks get a different ID on every reboot (apparently a known issue in Ubuntu).
In Beszel the graphs are coupled to the name of the disk, so after a reboot the graphs kind of work again after a while, but the history is wrong. My mounts are mapped by UUID.
Is there a solution for this?
```
Filesystem                         Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv  914G   33G  843G   4% /
/dev/nvme1n1p1                     916G  633G  237G  73% /mnt/media
/dev/nvme2n1p1                     916G  108G  763G  13% /mnt/files
/dev/nvme3n1p1                     1.8T  1.2T  605G  66% /mnt/photos
/dev/nvme0n1p2                     2.0G   96M  1.7G   6% /boot
/dev/nvme0n1p1                     1.1G  6.2M  1.1G   1% /boot/efi
```
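For context (not a Beszel-specific fix): udev maintains stable symlinks under /dev/disk/by-uuid, /dev/disk/by-label and /dev/disk/by-id that survive this renumbering, which is why UUID mounts keep working. A throwaway sketch, purely illustrative, of how such a link resolves:

```shell
# Illustrative only: rebuild (in a temp dir) the kind of symlink tree
# udev maintains under /dev/disk, to show why by-label/by-uuid paths
# stay stable while the nvmeN ordering does not.
tree=$(mktemp -d)
mkdir -p "$tree/by-label"
# This boot, udev happened to enumerate the media disk as nvme1n1:
ln -s ../../nvme1n1p1 "$tree/by-label/media"
target=$(readlink "$tree/by-label/media")
echo "$tree/by-label/media -> $target"
# After a reboot the same link might point at nvme0n1p1 instead,
# but the /dev/disk/by-label/media path itself never changes.
rm -rf "$tree"
```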
Good point, I hadn't considered that.
We can use the UUID, but it wouldn't be ideal to display something like "e858d363-aaa8-4a5e-9da1-fbe0ef63af1b Usage" as the chart title.
Maybe we can use the label as the identifier if it's defined?
```
lsblk -o NAME,LABEL,MOUNTPOINT
```
The labels won't stick on 2 of the drives for now. I'll troubleshoot...
Or use the name given in the Docker volume mapping after extra-filesystems?

```yaml
volumes:
  - /mnt/disk1/.beszel:/extra-filesystems/sdb1:ro
```
```
NAME                      LABEL  MOUNTPOINT
nvme3n1
└─nvme3n1p1                      /mnt/photos
nvme2n1
└─nvme2n1p1               files  /mnt/files
nvme1n1
└─nvme1n1p1                      /mnt/media
nvme0n1
├─nvme0n1p1                      /boot/efi
├─nvme0n1p2                      /boot
└─nvme0n1p3
  └─ubuntu--vg-ubuntu--lv        /
```
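For scripting that kind of fallback (label if set, otherwise the kernel name), lsblk's pairs output (`lsblk -P`) is easier to work with than the tree view above. A small sketch against a hand-captured sample; the sample lines are assumptions mirroring the listing above, where only /mnt/files has a label:

```shell
# Sample of `lsblk -P -o NAME,LABEL,MOUNTPOINT` output (pairs format);
# written by hand here to mirror the listing above.
sample='NAME="nvme2n1p1" LABEL="files" MOUNTPOINT="/mnt/files"
NAME="nvme1n1p1" LABEL="" MOUNTPOINT="/mnt/media"'

# Print each mountpoint with its label, falling back to the kernel
# name when no label is set (eval is acceptable on trusted lsblk output).
out=$(printf '%s\n' "$sample" | while IFS= read -r line; do
  eval "$line"
  printf '%s -> %s\n' "$MOUNTPOINT" "${LABEL:-$NAME}"
done)
echo "$out"
# -> /mnt/files -> files
# -> /mnt/media -> nvme1n1p1
```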
Btw, with extra disks both the title and the explanation are displayed. I get why it's there, but it kind of says the same thing twice :-)
```
nvme1n1p1 Usage
Disk usage of nvme1n1p1
```
Copy/pasting my comment from #350:
I think the best option is to use label values straight from the disk if available. Maybe add a manual env var mapping if you don't want to use disk labels for whatever reason.
If you have time, can you run the agent with LOG_LEVEL=debug and check the line containing DEBUG Disk I/O diskstats that's logged on startup?
It should show info for your disks including labels. I just want to make sure those label values come through properly.
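In case it helps, assuming the agent runs via compose, that env var can be set like this (a sketch; adjust to your own service definition):

```yaml
services:
  beszel-agent:
    # ...rest of your existing agent config...
    environment:
      LOG_LEVEL: debug
```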
I just ran the agent with debug logs and it looks like the label is empty for all of my drives, whereas the lsblk command above is showing the correct labels on the disks. Is this expected?
@codingmatty Thank you, fellow Chattanoogan. Small world. Looks like labels are not coming through on the Docker agent. I'll investigate further.
Hey, any updates on this? My issue is pretty similar. I mainly have two drives that like to interchange between sda and sdb on reboot.
That part is mostly remediated by using the mount location, but I also need to apply the "specify where to read IO" bit, which means I need to reference sda and sdb, and (I have yet to test it) that will probably mess up the stats due to the disconnect between the drives and the sd# entries.
Edit: It did, in fact, not survive a reboot. The IO stats for the mounts have indeed swapped.
Would also love to see something more stable here. Disk labels might be nice as well.
E.g. /mnt/SOMEDISKLABEL:/extra-filesystems/SOMEDISKLABEL:ro would remove (at least for me) the need to add a __SomeName thing in addition to the actual disk identifier.
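To make that concrete, a sketch of what that could look like in compose; the label names here (media, files) are just examples taken from the mountpoints earlier in the thread, and the container-side naming follows the /extra-filesystems convention already shown above:

```yaml
services:
  beszel-agent:
    # ...rest of your existing agent config...
    volumes:
      # Host paths are the stable mountpoints; the container-side names
      # are the labels you want to see instead of sda1/sdb1:
      - /mnt/media:/extra-filesystems/media:ro
      - /mnt/files:/extra-filesystems/files:ro
```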