routinator process fails shortly after it starts processing ripe.tal
Hi,
We have been running Routinator for over a year now, but recently our Routinator process fails shortly after it starts processing ripe.tal. This problem was first observed on Sep 7 13:08:44:
Sep 7 13:08:44 ops-rpki02-p routinator[31624]: Fatal: failed to open file /var/lib/routinator/rpki-cache/rrdp/rrdp.ripe.net/21d6592469dbe79feb2922562764fd193170f173229298b9a4443ffb5c282000/tmp/rpki.ripe.net/repository/DEFAULT/40/b796f4-2e88-4eaa-a269-2738bcb43d6d/1/nPHiDOX4JDXiM7wDZ8Fh2D9y37w.roa: No space left on device (os error 28)
Our disks are NOT full (4.5 GB of free space left on /var).
We run Routinator 0.11.2 on RHEL 7.
I think this is kind of the same problem: https://github.com/NLnetLabs/routinator/issues/657
Our first fix was to remove the ripe.tal file from the /var/lib/routinator/tals directory --> Routinator starts and runs just fine. The better fix is to move the ripe.tal file back into the /var/lib/routinator/tals directory and put an extra line in /etc/security/limits.conf:
routinator hard nofile <some high number>
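If Routinator runs as a systemd service, the limit may also need to be raised with LimitNOFILE= in the unit file, since limits.conf is only applied to PAM sessions. Either way, the effective limit of the running process can be verified with something like this (assuming a single routinator process):

cat /proc/$(pidof routinator)/limits | grep 'Max open files'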
Is this a problem in the Routinator software? Or is the RIPE information different from before?
Kind regards, Johan
The issue might be running out of inodes on the disk, not space. I believe that also results in this error message.
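A quick way to check is to compare free space against free inodes on the cache filesystem, for example (assuming /var is the relevant mount point):

df -h /var
df -i /var

If IUse% is at or near 100% while space is still available, the filesystem is out of inodes rather than bytes.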
Yes, Routinator is quite hungry for inodes:
sudo du -sh --inodes /var/lib/routinator/
442K /var/lib/routinator/
They get consumed quickly during startup and released when the process crashes, which makes the problem harder to pinpoint after the fact. I doubled the space/inodes and that solved it. Thanks for pointing me in the right direction.
I’d love to make it need fewer inodes, but I am all out of ideas that don’t turn into basically writing an in-file mini database.
As a workaround improvement, I’ve created #785 so that this error always comes with a hint about a possible lack of inodes.
The system requirements in the docs now mention this as well.
With both the documented system requirements updated and #793, I think it is okay to close this issue for now.