WSL stops responding and vmmem spikes CPU to 100%
Version
Microsoft Windows 10 Version 22H2 (OS Build 19045.2486)
WSL Version
- [X] WSL 2
- [ ] WSL 1
Kernel Version
5.10.16
Distro Version
Ubuntu 20.04
Other Software
No response
Repro Steps
- Launch WSL
- Use it normally: mostly SSH sessions, some git and grep operations, and occasionally extracting Android images and copying around 1 GB of files. (I believe this happens regardless of whether I am working with large files, though it may be more frequent in that case; I have had cases where nothing was running and it still locked up.)
I recently upgraded to WSL 1.1.0 per another GitHub issue and this did not resolve the issue.
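For anyone trying to reproduce on demand, a hypothetical load generator approximating the reported workload (bulk file creation and copying) might look like the sketch below. The paths and sizes are illustrative only, scaled down to 64 MB; raise `count` to 1024 for the full ~1 GB.

```shell
# Illustrative reproduction load: create a large file and copy it,
# roughly mimicking "extracting Android images and copying ~1 GB of files".
dd if=/dev/zero of=/tmp/blob.img bs=1M count=64 status=none
cp /tmp/blob.img /tmp/blob-copy.img
ls -l /tmp/blob.img /tmp/blob-copy.img
```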
Expected Behavior
I expect WSL to respond and not use a high percentage of CPU resources. In the rare case where WSL acts up, wsl --shutdown should stop it.
Actual Behavior
I leave WSL running all the time because of my workload. For years, this worked flawlessly. For the last few months, after a day to a few days of uptime and some suspends and resumes, WSL stops responding, uses 80+% CPU, wsl --shutdown doesn't work, and I cannot get it working again without rebooting my PC.
Diagnostic Logs
Thanks for reporting this @xboxfanj.
Let's try to figure out which process is taking those resources. Can you please:
- Update to WSL 1.1.2 via: wsl.exe --update --pre-release
- Reproduce the issue
- Open a debug shell via: wsl.exe --debug-shell
- Inside that shell, run ps aux --sort -pcpu and share the output of that command on this issue
Hi @OneBlue I did just reproduce the issue on 1.1.2, but unfortunately, wsl.exe --debug-shell is currently just freezing whether I run it from PowerShell or cmd. wsl --shutdown also does not do anything.
This time, I did not do any major file operations and was only running a few SSH sessions.
Somewhat related issue today. WSL is still responsive, but vmmem is using 7,828 MB of RAM. This time I was extracting images, a multi-gigabyte operation. Several hours later, the memory usage still has not gone down.
Earlier today it was also using a large amount of CPU, which eventually cleared, but the RAM is still held even though the second output below shows very little allocated inside the guest.
During image dumping: root@(none) [ ~ ]# ps aux --sort -pcpu USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND wslg 1365 81.5 5.5 452056 445576 ? R+ 15:59 0:33 git add . root 4464 0.5 0.0 0 0 ? I 15:49 0:03 [kworker/u16:3- root 17174 0.2 0.0 0 0 ? I 15:45 0:01 [kworker/u16:2- root 1370 0.1 0.0 8212 4820 hvc1 S 16:00 0:00 -bash root 9278 0.1 0.0 0 0 ? I 15:32 0:02 [kworker/u16:1- root 1 0.0 0.0 2300 1448 ? Sl Feb10 0:09 /init root 2 0.0 0.0 0 0 ? S Feb10 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? I< Feb10 0:00 [rcu_gp] root 4 0.0 0.0 0 0 ? I< Feb10 0:00 [rcu_par_gp] root 5 0.0 0.0 0 0 ? I< Feb10 0:00 [slub_flushwq] root 6 0.0 0.0 0 0 ? I< Feb10 0:00 [netns] root 8 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/0:0H-k root 10 0.0 0.0 0 0 ? I< Feb10 0:00 [mm_percpu_wq] root 11 0.0 0.0 0 0 ? S Feb10 0:00 [rcu_tasks_rude root 12 0.0 0.0 0 0 ? S Feb10 0:00 [rcu_tasks_trac root 13 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/0] root 14 0.0 0.0 0 0 ? I Feb10 0:01 [rcu_sched] root 15 0.0 0.0 0 0 ? S Feb10 0:00 [migration/0] root 16 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/0] root 17 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/1] root 18 0.0 0.0 0 0 ? S Feb10 0:00 [migration/1] root 19 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/1] root 20 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/1:0-ev root 21 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/1:0H-e root 22 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/2] root 23 0.0 0.0 0 0 ? S Feb10 0:00 [migration/2] root 24 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/2] root 26 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/2:0H-k root 27 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/3] root 28 0.0 0.0 0 0 ? S Feb10 0:00 [migration/3] root 29 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/3] root 30 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/3:0-ev root 31 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/3:0H-e root 32 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/4] root 33 0.0 0.0 0 0 ? S Feb10 0:00 [migration/4] root 34 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/4] root 35 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/4:0-ev root 36 0.0 0.0 0 0 ? 
I< Feb10 0:00 [kworker/4:0H-k root 37 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/5] root 38 0.0 0.0 0 0 ? S Feb10 0:00 [migration/5] root 39 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/5] root 40 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/5:0-ev root 41 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/5:0H-e root 42 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/6] root 43 0.0 0.0 0 0 ? S Feb10 0:00 [migration/6] root 44 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/6] root 45 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/6:0-ev root 46 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/6:0H-e root 47 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/7] root 48 0.0 0.0 0 0 ? S Feb10 0:00 [migration/7] root 49 0.0 0.0 0 0 ? S Feb10 0:01 [ksoftirqd/7] root 50 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/7:0-ev root 51 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/7:0H-k root 59 0.0 0.0 0 0 ? S Feb10 0:00 [kdevtmpfs] root 60 0.0 0.0 0 0 ? I< Feb10 0:00 [inet_frag_wq] root 62 0.0 0.0 0 0 ? S Feb10 0:00 [oom_reaper] root 63 0.0 0.0 0 0 ? I< Feb10 0:00 [writeback] root 64 0.0 0.0 0 0 ? S Feb10 0:20 [kcompactd0] root 65 0.0 0.0 0 0 ? SN Feb10 0:00 [ksmd] root 66 0.0 0.0 0 0 ? SN Feb10 0:02 [khugepaged] root 72 0.0 0.0 0 0 ? I Feb10 0:01 [kworker/4:1-ev root 99 0.0 0.0 0 0 ? I< Feb10 0:00 [kblockd] root 100 0.0 0.0 0 0 ? I< Feb10 0:00 [blkcg_punt_bio root 101 0.0 0.0 0 0 ? I< Feb10 0:00 [md] root 102 0.0 0.0 0 0 ? I< Feb10 0:00 [hv_vmbus_con] root 103 0.0 0.0 0 0 ? I< Feb10 0:00 [hv_pri_chan] root 104 0.0 0.0 0 0 ? I< Feb10 0:00 [hv_sub_chan] root 105 0.0 0.0 0 0 ? I< Feb10 0:00 [rpciod] root 107 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/5:1H-k root 108 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/u17:0] root 109 0.0 0.0 0 0 ? I< Feb10 0:00 [xprtiod] root 113 0.0 0.0 0 0 ? I Feb10 0:02 [kworker/1:1-mm root 115 0.0 0.0 0 0 ? I Feb10 0:30 [kworker/6:1-ev root 116 0.0 0.0 0 0 ? S Feb10 0:10 [kswapd0] root 117 0.0 0.0 0 0 ? I< Feb10 0:00 [nfsiod] root 118 0.0 0.0 0 0 ? I< Feb10 0:00 [cifsiod] root 119 0.0 0.0 0 0 ? I< Feb10 0:00 [smb3decryptd] root 120 0.0 0.0 0 0 ? 
I< Feb10 0:00 [cifsfileinfopu root 121 0.0 0.0 0 0 ? I< Feb10 0:00 [cifsoplockd] root 122 0.0 0.0 0 0 ? I< Feb10 0:00 [deferredclose] root 123 0.0 0.0 0 0 ? I< Feb10 0:00 [xfsalloc] root 124 0.0 0.0 0 0 ? I< Feb10 0:00 [xfs_mru_cache] root 126 0.0 0.0 0 0 ? I< Feb10 0:00 [nfit] root 127 0.0 0.0 0 0 ? I Feb10 0:01 [kworker/2:1-mm root 128 0.0 0.0 0 0 ? S Feb10 0:00 [khvcd] root 129 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/2:2-ev root 130 0.0 0.0 0 0 ? S Feb10 0:00 [scsi_eh_0] root 131 0.0 0.0 0 0 ? I< Feb10 0:00 [bond0] root 132 0.0 0.0 0 0 ? I< Feb10 0:00 [scsi_tmf_0] root 134 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/6:1H-k root 135 0.0 0.0 0 0 ? I< Feb10 0:00 [vfio-irqfd-cle root 136 0.0 0.0 0 0 ? I< Feb10 0:00 [usbip_event] root 137 0.0 0.0 0 0 ? I< Feb10 0:00 [raid5wq] root 138 0.0 0.0 0 0 ? I< Feb10 0:00 [dm_bufio_cache root 139 0.0 0.0 0 0 ? S Feb10 0:15 [hv_balloon] root 140 0.0 0.0 0 0 ? I< Feb10 0:00 [mld] root 141 0.0 0.0 0 0 ? I< Feb10 0:00 [ipv6_addrconf] root 142 0.0 0.0 0 0 ? I< Feb10 0:00 [ceph-msgr] root 143 0.0 0.0 0 0 ? I Feb10 0:01 [kworker/3:1-mm root 144 0.0 0.0 0 0 ? I Feb10 0:01 [kworker/7:1-mm root 145 0.0 0.0 0 0 ? I< Feb10 0:00 [ext4-rsv-conve root 147 0.0 0.0 10796 4748 hvc1 Ss Feb10 0:00 /bin/login -f root 150 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/1:1H-k root 151 0.0 0.0 0 0 ? I Feb10 0:02 [kworker/5:2-mm root 152 0.0 0.0 2144 1712 ? S Feb10 0:00 gns --socket 7 root 153 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/7:1H-k root 154 0.0 0.0 2564 0 ? Sl Feb10 0:35 localhost --por root 159 0.0 0.0 2284 1516 ? Sl Feb10 0:00 /init chrony 161 0.0 0.0 4300 1604 ? S Feb10 1:58 /sbin/chronyd root 163 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/4:1H-k root 165 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/0:1H-k root 166 0.0 0.0 0 0 ? S Feb10 0:00 [jbd2/sdc-8] root 167 0.0 0.0 0 0 ? I< Feb10 0:00 [ext4-rsv-conve root 169 0.0 0.0 2284 1560 ? Sl Feb10 0:00 /init root 173 0.0 0.0 121444 5620 ? Sl Feb10 0:00 /usr/bin/WSLGd root 178 0.0 0.0 0 0 ? 
I< Feb10 0:00 [kworker/2:1H-k root 179 0.0 0.0 2724 224 ? Sl Feb10 1:37 plan9 --control message+ 185 0.0 0.0 8624 2732 ? S Feb10 0:00 /usr/bin/dbus-d wslg 191 0.0 0.0 8492 132 ? Ss Feb10 0:00 /usr/bin/dbus-d root 192 0.0 0.0 2288 36 ? Ss Feb10 0:00 /init root 193 0.0 0.0 2304 44 ? S Feb10 0:01 /init wslg 194 0.0 0.0 10176 4980 ? Ss Feb10 0:00 -bash root 195 0.0 0.0 2288 32 ? Ss Feb10 0:00 /init root 196 0.0 0.0 2304 44 ? S Feb10 0:17 /init wslg 198 0.0 0.0 10176 4688 ? Ss Feb10 0:00 -bash root 200 0.0 0.0 2288 36 ? Ss Feb10 0:00 /init root 201 0.0 0.0 2304 44 ? S Feb10 0:01 /init wslg 203 0.0 0.0 10176 4920 ? Ss+ Feb10 0:01 -bash root 299 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/3:1H-k root 1286 0.0 0.0 0 0 ? I 15:56 0:00 [kworker/u16:0] root 1378 0.0 0.0 3628 2272 hvc1 R+ 16:00 0:00 ps aux --sort - root 9562 0.0 0.0 0 0 ? I 15:33 0:00 [kworker/0:2-ev wslg 27615 0.0 0.0 10176 3172 ? S+ Feb14 0:00 -bash wslg 27616 0.0 0.0 13536 6272 ? S+ Feb14 0:17 ssh [email protected] root 27699 0.0 0.0 0 0 ? I Feb15 0:02 [kworker/0:0-hv wslg 27731 0.0 0.9 752540 79240 ? Sl 02:34 0:01 /usr/bin/weston wslg 27733 0.0 0.1 234484 8500 ? Sl 02:34 0:00 /usr/bin/pulsea wslg 27737 0.0 0.0 8492 360 ? Ss 02:34 0:00 /usr/bin/dbus-d wslg 27740 0.0 0.1 45032 15552 ? Ss 02:34 0:00 /usr/bin/Xwayla wslg 27744 0.0 0.0 2160 1588 ? S 02:34 0:00 /init /mnt/c/Us root 32345 0.0 0.0 2288 48 ? Ss 03:09 0:00 /init root 32346 0.0 0.0 2304 52 ? S 03:09 0:00 /init wslg 32347 0.0 0.0 10176 5152 ? Ss 03:09 0:00 -bash wslg 32412 0.0 0.0 10176 3248 ? S+ 03:09 0:00 -bash wslg 32413 0.0 0.0 12280 6020 ? S+ 03:09 0:00 ssh [email protected]
Now: root@(none) [ ~ ]# ps aux --sort -pcpu USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND root 1 0.0 0.0 2300 1448 ? Sl Feb10 0:10 /init root 2 0.0 0.0 0 0 ? S Feb10 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? I< Feb10 0:00 [rcu_gp] root 4 0.0 0.0 0 0 ? I< Feb10 0:00 [rcu_par_gp] root 5 0.0 0.0 0 0 ? I< Feb10 0:00 [slub_flushwq] root 6 0.0 0.0 0 0 ? I< Feb10 0:00 [netns] root 8 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/0:0H-k root 10 0.0 0.0 0 0 ? I< Feb10 0:00 [mm_percpu_wq] root 11 0.0 0.0 0 0 ? S Feb10 0:00 [rcu_tasks_rude root 12 0.0 0.0 0 0 ? S Feb10 0:00 [rcu_tasks_trac root 13 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/0] root 14 0.0 0.0 0 0 ? I Feb10 0:01 [rcu_sched] root 15 0.0 0.0 0 0 ? S Feb10 0:00 [migration/0] root 16 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/0] root 17 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/1] root 18 0.0 0.0 0 0 ? S Feb10 0:00 [migration/1] root 19 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/1] root 20 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/1:0-ev root 21 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/1:0H-e root 22 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/2] root 23 0.0 0.0 0 0 ? S Feb10 0:00 [migration/2] root 24 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/2] root 26 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/2:0H-k root 27 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/3] root 28 0.0 0.0 0 0 ? S Feb10 0:00 [migration/3] root 29 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/3] root 30 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/3:0-ev root 31 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/3:0H-e root 32 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/4] root 33 0.0 0.0 0 0 ? S Feb10 0:00 [migration/4] root 34 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/4] root 35 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/4:0-ev root 36 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/4:0H-k root 37 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/5] root 38 0.0 0.0 0 0 ? S Feb10 0:00 [migration/5] root 39 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/5] root 40 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/5:0-ev root 41 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/5:0H-e root 42 0.0 0.0 0 0 ? 
S Feb10 0:00 [cpuhp/6] root 43 0.0 0.0 0 0 ? S Feb10 0:00 [migration/6] root 44 0.0 0.0 0 0 ? S Feb10 0:00 [ksoftirqd/6] root 45 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/6:0-ev root 46 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/6:0H-e root 47 0.0 0.0 0 0 ? S Feb10 0:00 [cpuhp/7] root 48 0.0 0.0 0 0 ? S Feb10 0:00 [migration/7] root 49 0.0 0.0 0 0 ? S Feb10 0:01 [ksoftirqd/7] root 50 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/7:0-ev root 51 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/7:0H-k root 59 0.0 0.0 0 0 ? S Feb10 0:00 [kdevtmpfs] root 60 0.0 0.0 0 0 ? I< Feb10 0:00 [inet_frag_wq] root 62 0.0 0.0 0 0 ? S Feb10 0:00 [oom_reaper] root 63 0.0 0.0 0 0 ? I< Feb10 0:00 [writeback] root 64 0.0 0.0 0 0 ? S Feb10 0:23 [kcompactd0] root 65 0.0 0.0 0 0 ? SN Feb10 0:00 [ksmd] root 66 0.0 0.0 0 0 ? SN Feb10 0:03 [khugepaged] root 72 0.0 0.0 0 0 ? I Feb10 0:02 [kworker/4:1-ev root 99 0.0 0.0 0 0 ? I< Feb10 0:00 [kblockd] root 100 0.0 0.0 0 0 ? I< Feb10 0:00 [blkcg_punt_bio root 101 0.0 0.0 0 0 ? I< Feb10 0:00 [md] root 102 0.0 0.0 0 0 ? I< Feb10 0:00 [hv_vmbus_con] root 103 0.0 0.0 0 0 ? I< Feb10 0:00 [hv_pri_chan] root 104 0.0 0.0 0 0 ? I< Feb10 0:00 [hv_sub_chan] root 105 0.0 0.0 0 0 ? I< Feb10 0:00 [rpciod] root 107 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/5:1H-k root 108 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/u17:0] root 109 0.0 0.0 0 0 ? I< Feb10 0:00 [xprtiod] root 113 0.0 0.0 0 0 ? I Feb10 0:03 [kworker/1:1-ev root 115 0.0 0.0 0 0 ? I Feb10 0:34 [kworker/6:1-ev root 116 0.0 0.0 0 0 ? S Feb10 0:11 [kswapd0] root 117 0.0 0.0 0 0 ? I< Feb10 0:00 [nfsiod] root 118 0.0 0.0 0 0 ? I< Feb10 0:00 [cifsiod] root 119 0.0 0.0 0 0 ? I< Feb10 0:00 [smb3decryptd] root 120 0.0 0.0 0 0 ? I< Feb10 0:00 [cifsfileinfopu root 121 0.0 0.0 0 0 ? I< Feb10 0:00 [cifsoplockd] root 122 0.0 0.0 0 0 ? I< Feb10 0:00 [deferredclose] root 123 0.0 0.0 0 0 ? I< Feb10 0:00 [xfsalloc] root 124 0.0 0.0 0 0 ? I< Feb10 0:00 [xfs_mru_cache] root 126 0.0 0.0 0 0 ? I< Feb10 0:00 [nfit] root 127 0.0 0.0 0 0 ? 
I Feb10 0:02 [kworker/2:1-ev root 128 0.0 0.0 0 0 ? S Feb10 0:00 [khvcd] root 129 0.0 0.0 0 0 ? I Feb10 0:00 [kworker/2:2-ev root 130 0.0 0.0 0 0 ? S Feb10 0:00 [scsi_eh_0] root 131 0.0 0.0 0 0 ? I< Feb10 0:00 [bond0] root 132 0.0 0.0 0 0 ? I< Feb10 0:00 [scsi_tmf_0] root 134 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/6:1H-k root 135 0.0 0.0 0 0 ? I< Feb10 0:00 [vfio-irqfd-cle root 136 0.0 0.0 0 0 ? I< Feb10 0:00 [usbip_event] root 137 0.0 0.0 0 0 ? I< Feb10 0:00 [raid5wq] root 138 0.0 0.0 0 0 ? I< Feb10 0:00 [dm_bufio_cache root 139 0.0 0.0 0 0 ? S Feb10 0:16 [hv_balloon] root 140 0.0 0.0 0 0 ? I< Feb10 0:00 [mld] root 141 0.0 0.0 0 0 ? I< Feb10 0:00 [ipv6_addrconf] root 142 0.0 0.0 0 0 ? I< Feb10 0:00 [ceph-msgr] root 143 0.0 0.0 0 0 ? I Feb10 0:01 [kworker/3:1-mm root 144 0.0 0.0 0 0 ? I Feb10 0:01 [kworker/7:1-mm root 145 0.0 0.0 0 0 ? I< Feb10 0:00 [ext4-rsv-conve root 147 0.0 0.0 10796 2268 hvc1 Ss Feb10 0:00 /bin/login -f root 150 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/1:1H-k root 151 0.0 0.0 0 0 ? I Feb10 0:02 [kworker/5:2-ev root 152 0.0 0.0 2144 1712 ? S Feb10 0:00 gns --socket 7 root 153 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/7:1H-k root 154 0.0 0.0 2564 208 ? Sl Feb10 0:39 localhost --por root 159 0.0 0.0 2284 1508 ? Sl Feb10 0:00 /init chrony 161 0.0 0.0 4300 1316 ? S Feb10 2:09 /sbin/chronyd root 163 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/4:1H-k root 165 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/0:1H-k root 166 0.0 0.0 0 0 ? S Feb10 0:00 [jbd2/sdc-8] root 167 0.0 0.0 0 0 ? I< Feb10 0:00 [ext4-rsv-conve root 169 0.0 0.0 2284 1764 ? Sl Feb10 0:00 /init root 173 0.0 0.0 121444 1576 ? Sl Feb10 0:00 /usr/bin/WSLGd root 178 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/2:1H-k root 179 0.0 0.0 2716 68 ? Sl Feb10 1:37 plan9 --control message+ 185 0.0 0.0 8624 1404 ? S Feb10 0:00 /usr/bin/dbus-d wslg 191 0.0 0.0 8492 8 ? Ss Feb10 0:00 /usr/bin/dbus-d root 192 0.0 0.0 2288 4 ? Ss Feb10 0:00 /init root 193 0.0 0.0 2304 16 ? S Feb10 0:02 /init wslg 194 0.0 0.0 10176 3932 ? 
Ss Feb10 0:00 -bash root 195 0.0 0.0 2288 4 ? Ss Feb10 0:00 /init root 196 0.0 0.0 2304 12 ? S Feb10 0:20 /init wslg 198 0.0 0.0 10176 2940 ? Ss Feb10 0:00 -bash root 200 0.0 0.0 2288 4 ? Ss Feb10 0:00 /init root 201 0.0 0.0 2304 12 ? S Feb10 0:01 /init wslg 203 0.0 0.0 10308 4504 ? Ss+ Feb10 0:01 -bash root 299 0.0 0.0 0 0 ? I< Feb10 0:00 [kworker/3:1H-k root 331 0.0 0.0 7168 2064 ? S Feb16 0:00 dbus-launch --a root 332 0.0 0.0 7116 2504 ? Ss Feb16 0:00 /usr/bin/dbus-d root 513 0.0 0.0 0 0 ? I Feb16 0:00 [kworker/u16:1- root 1370 0.0 0.0 8212 2624 hvc1 S Feb16 0:00 -bash root 1386 0.0 0.0 3628 2300 hvc1 R+ 02:54 0:00 ps aux --sort - root 9562 0.0 0.0 0 0 ? I Feb16 0:01 [kworker/0:2-ev wslg 27615 0.0 0.0 10176 1612 ? S+ Feb14 0:00 -bash wslg 27616 0.0 0.0 14288 6624 ? S+ Feb14 0:23 ssh [email protected] wslg 27731 0.0 0.3 727964 31956 ? Sl Feb16 0:01 /usr/bin/weston wslg 27733 0.0 0.0 234484 1624 ? Sl Feb16 0:00 /usr/bin/pulsea wslg 27737 0.0 0.0 8492 4 ? Ss Feb16 0:00 /usr/bin/dbus-d wslg 27740 0.0 0.0 45476 4320 ? Ss Feb16 0:00 /usr/bin/Xwayla wslg 27744 0.0 0.0 2160 1508 ? S Feb16 0:00 /init /mnt/c/Us wslg 32340 0.0 0.0 10176 2468 ? S+ Feb16 0:00 -bash wslg 32341 0.0 0.0 12280 6316 ? S+ Feb16 0:00 ssh [email protected] root 32345 0.0 0.0 2288 8 ? Ss Feb16 0:00 /init root 32346 0.0 0.0 2304 4 ? S Feb16 0:00 /init wslg 32347 0.0 0.0 10176 4220 ? Ss+ Feb16 0:00 -bash root 32349 0.0 0.0 0 0 ? I Feb16 0:00 [kworker/0:0-hv root 32493 0.0 0.0 0 0 ? I Feb16 0:00 [kworker/u16:3- wslg 32562 0.0 0.0 7168 2100 ? S Feb16 0:00 dbus-launch --a wslg 32563 0.0 0.0 7116 2352 ? Ss Feb16 0:00 /usr/bin/dbus-d
I am experiencing the same symptoms as @xboxfanj on my work computer.
Running just a few Angular watcher build tasks inside tmux in Ubuntu, WSL 2 usually locks up at least once a day. wsl --shutdown hangs, and the only way to recover is to reboot Windows.
Reproduced both in Ubuntu 22.04 and 20.04.
$ wsl --version
WSL version: 1.0.3.0
Kernel version: 5.15.79.1
WSLg version: 1.0.47
MSRDC version: 1.2.3575
Direct3D version: 1.606.4
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.19044.2364
As an additional data point, the problems started when I updated WSL 2 and installed a second Ubuntu instance. I did not record which WSL 2 version I had before the update.
Same high CPU issue today. I am not running much in WSL, but I noticed my fans making a lot of noise, checked Task Manager, and Vmmem was above 80% CPU. The debug shell won't load again and WSL is unresponsive. I agree with @jussih: I accepted a WSL update a few months ago after being prompted in the console, and that does seem to be when this issue started. I also do not know which version I was on before, but if there is history in the registry or a log, I can check.
I updated WSL to the pre-release version 1.1.3.0, and at least so far there have been no lockups. Cautiously optimistic that this might have resolved the issue.
$ wsl --version
WSL version: 1.1.3.0
Kernel version: 5.15.90.1
WSLg version: 1.0.49
MSRDC version: 1.2.3770
Direct3D version: 1.608.2-61064218
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.19044.2364
Got it again just now, but again I cannot launch the WSL debug shell to capture the processes @OneBlue
Happened again today. I did not even touch it this time; it had just been open in the background since yesterday.
I just resumed my computer after rebooting. I heard my fans get loud. WSL is not running at all, but Vmmem is at 80% CPU. I still can't open the debug shell.
I have not experienced any lockups since updating to 1.1.3.0 around 2 weeks ago.
Today I experienced a full lockup for the first time with WSL 1.1.3.0. Entering the room after my lunch break, I found the idling computer blasting its fans at full speed. The lockup is identical to before: neither wsl --shutdown nor wsl --debug-shell does anything but hang indefinitely.
I am experiencing the same lockups, with all wsl commands hanging forever. It may be related to bringing my PC out of hibernate, but it's not an immediate effect if so. The only method I have found of killing wsl is to reboot.
I have experienced this on two different distros (Debian and openSUSE). I first noticed it when enabling systemd support, but have reproduced the issue with and without it since.
Had the lockup yesterday. Right now, Vmmem is at 23.7% CPU and the fans are loud, but WSL is still responsive. The process list below does not add up to anywhere near 23.7% CPU.
root@(none) [ ~ ]# ps aux --sort -pcpu USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND wslg 219 1.8 0.2 52104 21936 ? Rs 00:34 15:53 /usr/bin/Xwayla root 1036 0.7 0.0 8212 4860 hvc1 S 14:42 0:00 -bash root 1 0.0 0.0 2312 1544 ? Sl 00:34 0:00 /init root 2 0.0 0.0 0 0 ? S 00:34 0:00 [kthreadd] root 3 0.0 0.0 0 0 ? I< 00:34 0:00 [rcu_gp] root 4 0.0 0.0 0 0 ? I< 00:34 0:00 [rcu_par_gp] root 5 0.0 0.0 0 0 ? I< 00:34 0:00 [slub_flushwq] root 6 0.0 0.0 0 0 ? I< 00:34 0:00 [netns] root 8 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/0:0H-e root 10 0.0 0.0 0 0 ? I< 00:34 0:00 [mm_percpu_wq] root 11 0.0 0.0 0 0 ? S 00:34 0:00 [rcu_tasks_rude root 12 0.0 0.0 0 0 ? S 00:34 0:00 [rcu_tasks_trac root 13 0.0 0.0 0 0 ? S 00:34 0:00 [ksoftirqd/0] root 14 0.0 0.0 0 0 ? I 00:34 0:00 [rcu_sched] root 15 0.0 0.0 0 0 ? S 00:34 0:00 [migration/0] root 16 0.0 0.0 0 0 ? S 00:34 0:00 [cpuhp/0] root 17 0.0 0.0 0 0 ? S 00:34 0:00 [cpuhp/1] root 18 0.0 0.0 0 0 ? S 00:34 0:00 [migration/1] root 19 0.0 0.0 0 0 ? S 00:34 0:00 [ksoftirqd/1] root 20 0.0 0.0 0 0 ? I 00:34 0:00 [kworker/1:0-ev root 21 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/1:0H-k root 22 0.0 0.0 0 0 ? S 00:34 0:00 [cpuhp/2] root 23 0.0 0.0 0 0 ? S 00:34 0:00 [migration/2] root 24 0.0 0.0 0 0 ? S 00:34 0:00 [ksoftirqd/2] root 26 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/2:0H-e root 27 0.0 0.0 0 0 ? S 00:34 0:00 [cpuhp/3] root 28 0.0 0.0 0 0 ? S 00:34 0:00 [migration/3] root 29 0.0 0.0 0 0 ? S 00:34 0:00 [ksoftirqd/3] root 30 0.0 0.0 0 0 ? I 00:34 0:00 [kworker/3:0-rc root 31 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/3:0H-k root 32 0.0 0.0 0 0 ? S 00:34 0:00 [cpuhp/4] root 33 0.0 0.0 0 0 ? S 00:34 0:00 [migration/4] root 34 0.0 0.0 0 0 ? S 00:34 0:00 [ksoftirqd/4] root 35 0.0 0.0 0 0 ? I 00:34 0:00 [kworker/4:0-ev root 36 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/4:0H-k root 37 0.0 0.0 0 0 ? S 00:34 0:00 [cpuhp/5] root 38 0.0 0.0 0 0 ? S 00:34 0:00 [migration/5] root 39 0.0 0.0 0 0 ? S 00:34 0:00 [ksoftirqd/5] root 40 0.0 0.0 0 0 ? 
I 00:34 0:00 [kworker/5:0-ev root 41 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/5:0H-e root 42 0.0 0.0 0 0 ? S 00:34 0:00 [cpuhp/6] root 43 0.0 0.0 0 0 ? S 00:34 0:00 [migration/6] root 44 0.0 0.0 0 0 ? S 00:34 0:00 [ksoftirqd/6] root 45 0.0 0.0 0 0 ? I 00:34 0:00 [kworker/6:0-ev root 46 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/6:0H-k root 47 0.0 0.0 0 0 ? S 00:34 0:00 [cpuhp/7] root 48 0.0 0.0 0 0 ? S 00:34 0:00 [migration/7] root 49 0.0 0.0 0 0 ? S 00:34 0:00 [ksoftirqd/7] root 50 0.0 0.0 0 0 ? I 00:34 0:00 [kworker/7:0-rc root 51 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/7:0H-k root 59 0.0 0.0 0 0 ? S 00:34 0:00 [kdevtmpfs] root 60 0.0 0.0 0 0 ? I< 00:34 0:00 [inet_frag_wq] root 61 0.0 0.0 0 0 ? I 00:34 0:01 [kworker/0:1-ev root 62 0.0 0.0 0 0 ? S 00:34 0:00 [oom_reaper] root 63 0.0 0.0 0 0 ? I< 00:34 0:00 [writeback] root 64 0.0 0.0 0 0 ? S 00:34 0:00 [kcompactd0] root 65 0.0 0.0 0 0 ? SN 00:34 0:00 [ksmd] root 66 0.0 0.0 0 0 ? SN 00:34 0:00 [khugepaged] root 71 0.0 0.0 0 0 ? I 00:34 0:01 [kworker/2:1-ev root 99 0.0 0.0 0 0 ? I< 00:34 0:00 [kblockd] root 100 0.0 0.0 0 0 ? I< 00:34 0:00 [blkcg_punt_bio root 101 0.0 0.0 0 0 ? I< 00:34 0:00 [md] root 102 0.0 0.0 0 0 ? I< 00:34 0:00 [hv_vmbus_con] root 103 0.0 0.0 0 0 ? I< 00:34 0:00 [hv_pri_chan] root 104 0.0 0.0 0 0 ? I< 00:34 0:00 [hv_sub_chan] root 105 0.0 0.0 0 0 ? I< 00:34 0:00 [rpciod] root 106 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/2:1H-k root 107 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/u17:0] root 108 0.0 0.0 0 0 ? I< 00:34 0:00 [xprtiod] root 113 0.0 0.0 0 0 ? S 00:34 0:00 [kswapd0] root 114 0.0 0.0 0 0 ? I< 00:34 0:00 [nfsiod] root 115 0.0 0.0 0 0 ? I< 00:34 0:00 [cifsiod] root 116 0.0 0.0 0 0 ? I< 00:34 0:00 [smb3decryptd] root 117 0.0 0.0 0 0 ? I< 00:34 0:00 [cifsfileinfopu root 118 0.0 0.0 0 0 ? I< 00:34 0:00 [cifsoplockd] root 119 0.0 0.0 0 0 ? I< 00:34 0:00 [deferredclose] root 120 0.0 0.0 0 0 ? I< 00:34 0:00 [xfsalloc] root 121 0.0 0.0 0 0 ? I< 00:34 0:00 [xfs_mru_cache] root 123 0.0 0.0 0 0 ? 
I 00:34 0:00 [kworker/4:1-mm root 124 0.0 0.0 0 0 ? I< 00:34 0:00 [nfit] root 125 0.0 0.0 0 0 ? S 00:34 0:00 [khvcd] root 126 0.0 0.0 0 0 ? I 00:34 0:00 [kworker/2:2-ev root 127 0.0 0.0 0 0 ? I 00:34 0:00 [kworker/6:1-mm root 128 0.0 0.0 0 0 ? S 00:34 0:00 [scsi_eh_0] root 129 0.0 0.0 0 0 ? I< 00:34 0:00 [bond0] root 130 0.0 0.0 0 0 ? I< 00:34 0:00 [scsi_tmf_0] root 131 0.0 0.0 0 0 ? I< 00:34 0:00 [vfio-irqfd-cle root 132 0.0 0.0 0 0 ? I< 00:34 0:00 [usbip_event] root 133 0.0 0.0 0 0 ? I< 00:34 0:00 [raid5wq] root 134 0.0 0.0 0 0 ? I< 00:34 0:00 [dm_bufio_cache root 135 0.0 0.0 0 0 ? S 00:34 0:00 [hv_balloon] root 136 0.0 0.0 0 0 ? I< 00:34 0:00 [mld] root 137 0.0 0.0 0 0 ? I< 00:34 0:00 [ipv6_addrconf] root 138 0.0 0.0 0 0 ? I< 00:34 0:00 [ceph-msgr] root 139 0.0 0.0 0 0 ? I 00:34 0:00 [kworker/1:1-mm root 140 0.0 0.0 0 0 ? I 00:34 0:00 [kworker/5:1-ev root 141 0.0 0.0 0 0 ? I 00:34 0:00 [kworker/3:1-ev root 142 0.0 0.0 0 0 ? I 00:34 0:00 [kworker/7:1-mm root 144 0.0 0.0 0 0 ? I< 00:34 0:00 [ext4-rsv-conve root 145 0.0 0.0 10796 4776 hvc1 Ss 00:34 0:00 /bin/login -f root 148 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/0:1H-k root 149 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/4:1H-k root 150 0.0 0.0 2156 1876 ? S 00:34 0:00 gns --socket 7 root 151 0.0 0.0 0 0 ? I 00:34 0:00 [kworker/0:2-hv root 153 0.0 0.0 2300 1512 ? Sl 00:34 0:14 localhost --por root 155 0.0 0.0 2296 1608 ? Sl 00:34 0:00 /init chrony 157 0.0 0.0 4300 1996 ? S 00:34 0:04 /sbin/chronyd root 159 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/6:1H-k root 160 0.0 0.0 0 0 ? S 00:34 0:00 [jbd2/sdc-8] root 161 0.0 0.0 0 0 ? I< 00:34 0:00 [ext4-rsv-conve root 164 0.0 0.0 2296 1604 ? Sl 00:34 0:00 /init root 167 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/3:1H-k root 169 0.0 0.1 121444 8432 ? Sl 00:34 0:00 /usr/bin/WSLGd wslg 172 0.0 0.6 774932 52960 ? Sl 00:34 0:00 /usr/bin/weston wslg 177 0.0 0.0 2172 1656 ? S 00:34 0:00 /init /mnt/c/Us message+ 178 0.0 0.0 8624 3876 ? S 00:34 0:00 /usr/bin/dbus-d wslg 179 0.0 0.1 234484 8176 ? 
Sl 00:34 0:00 /usr/bin/pulsea wslg 183 0.0 0.0 8492 360 ? Ss 00:34 0:00 /usr/bin/dbus-d root 184 0.0 0.0 2340 84 ? Sl 00:34 0:00 plan9 --control root 187 0.0 0.0 2300 96 ? Ss 00:34 0:00 /init root 188 0.0 0.0 2316 100 ? S 00:34 0:00 /init wslg 189 0.0 0.0 10176 5016 ? Ss 00:34 0:00 -bash root 190 0.0 0.0 2300 96 ? Ss 00:34 0:00 /init root 191 0.0 0.0 2316 100 ? S 00:34 0:00 /init wslg 192 0.0 0.0 10176 4864 ? Ss 00:34 0:00 -bash root 193 0.0 0.0 2300 96 ? Ss 00:34 0:00 /init root 194 0.0 0.0 2316 100 ? S 00:34 0:00 /init wslg 195 0.0 0.0 10176 5096 ? Ss+ 00:34 0:00 -bash root 196 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/1:1H-k root 359 0.0 0.0 0 0 ? I< 00:34 0:00 [kworker/7:1H-k wslg 415 0.0 0.0 10176 3468 ? S+ 00:35 0:00 -bash wslg 416 0.0 0.0 12396 6572 ? S+ 00:35 0:00 ssh root 469 0.0 0.0 0 0 ? I 00:59 0:00 [kworker/u16:0- root 470 0.0 0.0 0 0 ? I< 01:35 0:00 [kworker/5:1H-k wslg 471 0.0 0.0 10176 3412 ? S+ 01:56 0:00 -bash wslg 472 0.0 0.0 12424 6068 ? S+ 01:56 0:00 ssh root 772 0.0 0.0 0 0 ? I 02:00 0:00 [kworker/u16:1- root 1044 0.0 0.0 3628 2296 hvc1 R+ 14:42 0:00 ps aux --sort -
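One way to quantify the gap between Task Manager's Vmmem figure and the in-guest process list is to sum the %CPU column of `ps aux`; a large shortfall points at kernel-side work or VM overhead that per-process accounting does not capture. A minimal sketch, assuming a standard procps `ps`:

```shell
# Sum the %CPU column across all processes and print the total,
# for comparison against the host-side Vmmem CPU reading.
ps aux --no-headers | awk '{ total += $3 } END { printf "total %%CPU: %.1f\n", total }'
```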
Full lockup again.
I've also been regularly experiencing this for the past week or so. Have updated to the latest pre-release with no effect. Similar symptoms to above with shells not responding and wsl --shutdown hanging. Killing the task with taskkill -IM "wslservice.exe" /F does work though.
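A sketch of that recovery sequence on the Windows host (elevated prompt assumed; comments are illustrative, not an official procedure):

```
taskkill /IM "wslservice.exe" /F
:: forcibly ends the stuck WSL service
wsl --shutdown
:: should now return promptly once the service has been killed
wsl
:: relaunches the default distro on demand
```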
WSL version: 1.1.3.0
Kernel version: 5.15.90.1
WSLg version: 1.0.49
MSRDC version: 1.2.3770
Direct3D version: 1.608.2-61064218
DXCore version: 10.0.25131.1002-220531-1700.rs-onecore-base2-hyp
Windows version: 10.0.19045.2673
@rkm thank you for that command! Restarting every time was really annoying but I never found any commands that had the permissions they needed.
It seems related to WSL 2 networking or to reboots. I observed the following: whenever the computer sleeps, the network card is shut down and the WSL 2 VM is shut down; when it wakes again, WSL 2 restarts because some applications (JetBrains IDEs and VS Code, in my case) access it, and plan9 eats all CPUs.
It doesn't happen every time, but it has happened many times. Running wsl --shutdown and stopping the WSL service lets WSL restart.
> It seems to be related to the network of WSL2 or reboot, I observed the following condition: whenever the computer sleeps, the network card is shut down and the WSL2 VM is shut down, and when it wakes up again, WSL2 reboots due to some applications accessing WSL2 (Jetbrains IDE, VSCode in my case), and plan9 eats all CPUs. It doesn't happen every time, but it has happened many times, using the wsl --shutdown command and shutting down the wsl service can restart wsl
Further evidence: when I change network settings while using the virtual NIC (for example, restarting Wi-Fi while the virtual NIC is bridging Wi-Fi), WSL 2 restarts and the problem above occurs.
> I am experiencing the same lockups, with all wsl commands hanging forever. It may be related to bringing my PC out of hibernate, but it's not an immediate effect if so. The only method I have found of killing wsl is to reboot. I have experienced this on two different distros (Debian and openSUSE). I first noticed it when enabling systemd support, but have reproduced the issue with and without it since.
Seems to happen when I wake windows from hibernate. WSL then grows gradually less responsive until it is using 100% CPU and is completely unresponsive.
Just had the lockup on 1.2.0.
Had the lockup again on 1.2.3.0. It is definitely more likely after long hibernation (many hours). @OneBlue as I mentioned, I am not able to do anything in wsl when this issue occurs.
Same issue again. @OneBlue how can we help you fix this?
Same lockup on 1.2.5.0.
Having the same symptoms as everyone else on my corporate laptop. I am also unable to run the taskkill command noted above due to an "Access is denied" error. I also concur that this seems to happen after I put the laptop to sleep and wake it up again.
@Mutmansky are you running command prompt as an administrator?
No, since it's a corporate managed laptop, I don't typically have admin rights. I can request temporary admin rights, if I need to install something new or whatever, but having to do so every time WSL hangs is not a great solution.
Had the same issue; running PowerShell as admin and taskkill -IM "wslservice.exe" /F solved it.
Looks same as #6982
And #8696
@craigloewen-msft @OneBlue @pmartincic please give us an update on at least one of these issues. I am not sure whether the issues are exactly the same: sometimes WSL stops responding immediately after resuming, and sometimes it takes a while.