Both pools are nowhere near full. SMART data on all disks is fine. If the problem were caused by the disks, I would expect to see high iowait. However, there...
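For reference, this is roughly how I've been watching for iowait (a minimal sketch; `iostat` comes from the sysstat package and the exact columns vary by version):

```
# Extended per-device stats plus the CPU %iowait column, refreshing every 2 seconds.
iostat -x 2

# One-shot check of the "wa" (iowait) figure.
top -bn1 | grep 'Cpu(s)'
```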
Alright, I'll give that a try. I see #7038 was referenced there, and it looks to me like this may be the same issue.
I've updated the system to 0.8.2. I think I've pinpointed the thing that's triggering the problem (snapfuse in an LXC container spinning, doing lots of I/O, sucking up an entire...
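For anyone else chasing this, roughly how I spotted the culprit (`pidstat` is from sysstat; the process name is just what it is on my system):

```
# Per-process I/O rates every 2 seconds; snapfuse stood out immediately here.
pidstat -d 2

# Confirm the CPU usage of the suspect process.
pidstat -u -p "$(pgrep -n snapfuse)" 2
```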
Alright. Prior to updating to 0.8.2, it took 12-24 hours for this issue to crop up, and it's been 2 days with no sign of it, so I'm inclined to...
Actually, yep, looks like last night it occurred again. Based on my sample size of 1, 0.8.2 seems to have improved it, since it took 4 days to occur rather...
Might it be worth comparing some system configurations?

```
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0 days 00:02:31 with 0 errors on Mon Oct 7 22:35:32 2019...
```
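For comparison's sake, these are the sorts of things I'd collect from each affected system (all standard OS/ZFS tooling; adjust the pool name to yours):

```
zfs version                             # userland + kernel module versions (0.8.x and later)
zpool status -v
zpool get all rpool | grep -v default   # only non-default pool properties
uname -r                                # kernel version
```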
`snapfuse` associated with that snap package (Nextcloud) has been sucking up an entire core and doing the constant I/O since an auto-update of the snap package in question occurred. The...
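If anyone wants to line this up against snapd's refresh history, this is one way (standard snapd commands; output format varies by version):

```
# Recent snapd changes; auto-refreshes appear as timestamped "Auto-refresh" entries.
snap changes

# When the last/next scheduled refresh ran, and the refresh timer in effect.
snap refresh --time
```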
snap packages are distributed as squashfs images. Normally, those are just mounted using the native kernel support as loopback devices, but in my case, I have snap running inside of...
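To illustrate the normal (non-container) case: on a stock host, snaps show up as kernel squashfs loop mounts, something like the below (the snap name and revision are just examples):

```
# Each snap is a squashfs image loop-mounted by the kernel:
$ mount | grep squashfs
/var/lib/snapd/snaps/core_7917.snap on /snap/core/7917 type squashfs (ro,nodev,relatime)

# The same image can be mounted by hand using the native kernel support:
$ sudo mount -t squashfs -o loop,ro /var/lib/snapd/snaps/core_7917.snap /mnt
```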
It looks to me like the only things actually hanging here are `sync` commands. Is it the same for you? I think this is the cause of my load averages...
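For the record, this is roughly how I verified it's only `sync` that's stuck; tasks in uninterruptible sleep (state D) count toward the load average on Linux even though they use no CPU:

```
# List tasks in uninterruptible sleep; the hung sync commands show up here.
ps -eo pid,stat,wchan:32,comm | awk '$2 ~ /^D/'

# For a given stuck PID, the kernel stack shows where it is blocked
# (replace <PID> with one of the PIDs from above):
sudo cat /proc/<PID>/stack
```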
I'm going to replace `sync` with a copy of `true` on the container in question to see if this fixes my load average issues and makes this more manageable.
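Concretely, I mean something like this inside the container; it's a blunt hack (the next coreutils update will put the real `sync` back), and the paths assume a typical Debian/Ubuntu layout:

```
# Keep the real binary around, then shadow sync with true so callers become no-ops.
mv /bin/sync /bin/sync.real
cp /bin/true /bin/sync

# To undo later:
#   mv /bin/sync.real /bin/sync
```

A tidier variant, if the container is Debian-based, would be `dpkg-divert --local --rename --add /bin/sync` followed by `ln -s /bin/true /bin/sync`, which survives package upgrades.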