Grant Millar
For anyone looking for a solution to this, it might be worth using https://github.com/ktsstudio/mirrors until a fix is pushed.
I'm currently testing disabling the OOM killer for the container. I'm not sure how qBittorrent will handle malloc failing, but I'll reply here with the result.
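Roughly what I'm testing with, as a sketch: `--oom-kill-disable` only makes sense together with a hard memory limit (`-m`), so that malloc fails inside the container instead of the host being starved. The image name and limit here are placeholders, not the actual setup:

```python
def build_docker_run(image, mem_limit="4g"):
    """Build a `docker run` command that exempts the container from the OOM killer.

    Docker expects a memory limit (-m) alongside --oom-kill-disable,
    otherwise the host can be starved of memory. Image and limit are
    placeholder values.
    """
    return [
        "docker", "run", "-d",
        "--oom-kill-disable",  # kernel OOM killer won't kill this container's processes
        "-m", mem_limit,       # hard memory cap; allocations fail instead of triggering OOM kill
        image,
    ]

# Example (not executed here):
# import subprocess
# subprocess.run(build_docker_run("linuxserver/qbittorrent"), check=True)
```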
@CordySmith In docker, cgroups v2 are set using the systemd driver, such that they govern all processes within a systemd slice under one set of parameters. There is no kernel...
OOM kill was disabled and the kernel panic still persists:
```
[74173.465416] usercopy: Kernel memory exposure attempt detected from SLUB object 'zio_buf_comb_16384' (offset 15632, size 17136)!
[74173.465516] ------------[ cut here...
```
Perhaps if IO is overloaded you could just discard the output, that would be better than a hard exit, for example:
```python
def read_log_files(self, units):
    BUFSIZE = 8192
    for unit...
```
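To illustrate what I mean by discarding instead of exiting, here's a self-contained sketch; it's a standalone function over file paths rather than the actual method in the code, and the error handling is simplified:

```python
def read_log_files(paths, bufsize=8192):
    """Read at most `bufsize` bytes from each log file, discarding the rest.

    Instead of failing hard when a journal file is huge, cap how much is
    read per file; anything beyond the cap is simply skipped.
    """
    out = {}
    for path in paths:
        try:
            with open(path, "rb") as fh:
                out[path] = fh.read(bufsize)  # at most bufsize bytes; remainder discarded
        except OSError:
            out[path] = b""  # unreadable file: discard rather than exit
    return out
```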
Sorry, looking at this a bit further, it seems that systemctl might be contributing to the high IO. I applied the above patch so that it's no longer exiting, but...
Thanks, here's the output:
```
root@ubtvnc:/# systemctl default-services
appbox_dbus.service
cron.service
nginx.service
panel.service
php8.1-fpm.service
snapd.aa-prompt-listener.service
snapd.apparmor.service
snapd.autoimport.service
snapd.core-fixup.service
snapd.recovery-chooser-trigger.service
snapd.seeded.service
snapd.service
ssh.service
vnstat.service
cups.service
docker.service
fail2ban.service
plymouth.service
pulseaudio-enable-autospawn.service
rsync.service
unattended-upgrades.service
uuidd.service...
```
Snapd doesn't work in docker as it requires real systemd, so we can add
```python
igno_always = ["network*", "dbus*", "systemd-*", "kdump*", "kmod*", "snapd*"]
```
However some really big journal logs...
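For anyone following along, I'm assuming these entries are matched as shell-style globs (fnmatch), which is how patterns like `snapd*` would catch every snapd unit; the helper name below is mine, not the project's:

```python
from fnmatch import fnmatch

# ignore patterns as shell-style globs
igno_always = ["network*", "dbus*", "systemd-*", "kdump*", "kmod*", "snapd*"]

def ignored(unit, patterns=igno_always):
    """Return True if the unit name matches any ignore pattern."""
    return any(fnmatch(unit, pat) for pat in patterns)
```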
Setting the buffer size to 10MB resolves the issue:
```python
def read_log_files(self, units):
    BUFSIZE = 10485760
```
Are logs ever rotated? I know that on a normal systemd system, systemd-journald rotates the journal files.

EDIT: Looks...
Thanks @gdraheim, so I could use `fallocate` to, in essence, delete the logs from the offset (0) to the length? I could set up a service to do this once per...
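If I understand it right, something like this would deallocate the already-consumed region without changing the file size (Linux-only; the flag constants are from `linux/falloc.h`, called via ctypes since Python has no native wrapper, and the function layout is just my sketch):

```python
import ctypes
import os

# Flags from linux/falloc.h
FALLOC_FL_KEEP_SIZE = 0x01
FALLOC_FL_PUNCH_HOLE = 0x02  # must be used together with KEEP_SIZE

_libc = ctypes.CDLL("libc.so.6", use_errno=True)
_libc.fallocate.argtypes = [ctypes.c_int, ctypes.c_int,
                            ctypes.c_longlong, ctypes.c_longlong]
_libc.fallocate.restype = ctypes.c_int

def punch_hole(path, offset, length):
    """Deallocate [offset, offset+length) in `path`, keeping the file size.

    Reads from the punched range return zeros and the blocks are freed on
    filesystems that support it (ext4, xfs, tmpfs). Returns False if the
    filesystem doesn't support hole punching.
    """
    fd = os.open(path, os.O_WRONLY)
    try:
        ret = _libc.fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                              offset, length)
        return ret == 0
    finally:
        os.close(fd)
```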