Natalie Klestrup Röijezon
Stopping my old k3d cluster that was running in the background seems to have resolved the issue, and new clusters create fine again. Curiously, `kind` also failed to create any...
> So the error occurred with a cluster that you created before the update and does not occur with new clusters you created after the update? (that would fit the...
```
$ ulimit -a
real-time non-blocking time  (microseconds, -R) unlimited
core file size              (blocks, -c) unlimited
data seg size               (kbytes, -d) unlimited
scheduling priority                 (-e) 0
file size                   (blocks, -f) ...
```
lsof -a -p (k3s server)
```
COMMAND  PID     USER FD  TYPE DEVICE SIZE/OFF NODE    NAME
k3s      3476174 root cwd DIR  259,5  4096     7874620 /var/lib/rancher/k3s/server
k3s      3476174 root rtd DIR  0,88   ...
```
Restarting an arbitrary agent seems to have fixed the other cluster, supporting the resource exhaustion hypothesis. Afterwards, however, I tried restarting the whole primary cluster, and got weird resource exhaustion...
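Given the `ulimit`/`lsof` digging above, one cheap way to spot-check the resource-exhaustion hypothesis is to compare a process's open file descriptors against the fd limit. This is a hypothetical helper (not from the thread, Linux-only, reads `/proc`), and note that `ulimit -n` here reports the *calling shell's* soft limit, not necessarily the target process's:

```shell
# Hypothetical helper: count a process's open fds and show them next to the
# current shell's soft fd limit (Linux-only; each entry in /proc/<pid>/fd is
# one open descriptor).
fd_usage() {
  pid="$1"
  used=$(ls "/proc/$pid/fd" 2>/dev/null | wc -l | tr -d ' ')
  # caveat: this is the calling shell's limit, not necessarily the target's
  limit=$(ulimit -n)
  echo "pid $pid: $used open fds (soft limit: $limit)"
}

# demo: check the current shell itself
fd_usage $$
```

For the actual limit of another process, `/proc/<pid>/limits` is the authoritative place to look.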
There's also plenty of both space and inodes available on disk:
```
$ df -h /var
Filesystem      Size  Used Avail Use% Mounted on
/dev/nvme0n1p3  914G  658G  215G  76% /
$ ...
```
Looks like some mount shenanigans were afoot: `mount -l | wc -l` returned around 30k entries. Unmounting all of them (by running `cat /proc/mounts | awk '{ print $2 }' ...
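The unmount pipeline above is truncated; presumably the mount targets were fed into `umount`. Here is a dry-run sketch (my reconstruction, not the exact command from the thread) that prints the commands instead of running them:

```shell
# Dry-run reconstruction (hypothetical) of the bulk-unmount step: emit one
# `umount` command per mount target, deepest paths first so child mounts
# come before their parents. Feed it /proc/mounts (as root) for real use.
bulk_umount_dry() {
  awk '{ print $2 }' "$1" | sort -r | sed 's/^/umount /'
}

# demo on a two-line sample in /proc/mounts format
sample=$(mktemp)
printf '%s\n' \
  'overlay /run/k3s/containerd/io.containerd.x overlay rw 0 0' \
  'tmpfs /run tmpfs rw 0 0' > "$sample"
bulk_umount_dry "$sample"
rm -f "$sample"
```

For the real thing you would pipe the output to `sh` as root, and expect some `umount` failures on busy mounts.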
This doesn't seem to be mount-related, after all. Getting the same issue now, despite `mount | wc -l` being stable at 69 entries after having deleted and recreated my main...
The project has moved to GitLab: https://gitlab.com/teozkr/Sbtix/merge_requests/39
Ah, hadn't seen that one. However, that also looks like it prefers to drop messages rather than apply backpressure (https://docs.rs/async-broadcast/latest/async_broadcast/enum.RecvError.html#variant.Overflowed).