snap list multipass-sshfs failed with unexpected signal SIGBUS
This follows a previous problem I had with mounting a folder in a qemu-based instance on MacOS (https://github.com/canonical/multipass/issues/3221). @andrei-toterman helped me out that time and got it running again. The instance was running fine until I restarted my Mac. When I tried to start the instance again, it started okay but failed to mount the external folder, giving the error:
Removing mount "/Users/giles" from 'giles-adm': failed to obtain exit status for remote process 'sudo snap list multipass-sshfs': timeout
Trying to mount the folder again gives the error:
mount failed: Error enabling mount support in 'giles-adm'
Please install the 'multipass-sshfs' snap manually inside the instance.
I can run a shell inside the instance. There are no obvious errors in snapd.service status.
If I run the command sudo snap install multipass-sshfs, as suggested by the mount error, it gives:
fatal error: unexpected signal during runtime execution
[signal SIGBUS: bus error code=0x2 addr=0x562d1bff53fc pc=0x562d1b16be8d]
followed by a stack trace.
The command sudo snap list multipass-sshfs gives the same SIGBUS error.
I've tried restarting snapd.service and snapd.socket, but it doesn't help.
I'm not sure how to proceed from here. Any advice would be appreciated.
To Reproduce
The system was running as normal. I stopped the multipass instance, restarted the Mac, and then started it again, which produced the behaviour above.
Expected behavior
I should be able to mount the folder on the Mac.
Logs
Output of multipassd.log since starting the instance.
[2023-09-14T10:20:38.523] [info] [giles-adm] process state changed to Starting
[2023-09-14T10:20:38.527] [info] [giles-adm] process state changed to Running
[2023-09-14T10:20:38.528] [debug] [qemu-system-x86_64] [4489] started: qemu-system-x86_64 -accel hvf -drive file=/Library/Application Support/com.canonical.multipass/bin/../Resources/qemu/edk2-x86_64-code.fd,if=pflash,format=raw,readonly=on -cpu host -nic vmnet-shared,model=virtio-net-pci,mac=52:54:00:21:23:81 -device virtio-scsi-pci,id=scsi0 -drive file=/var/root/Library/Application Support/multipassd/qemu/vault/instances/giles-adm/ubuntu-18.04-server-cloudimg-amd64.img,if=none,format=qcow2,discard=unmap,id=hda -device scsi-hd,drive=hda,bus=scsi0.0 -smp 1 -m 4096M -qmp stdio -chardev null,id=char0 -serial chardev:char0 -nographic -cdrom /var/root/Library/Application Support/multipassd/qemu/vault/instances/giles-adm/cloud-init-config.iso
[2023-09-14T10:20:38.528] [info] [giles-adm] process started
[2023-09-14T10:20:38.529] [debug] [giles-adm] Waiting for SSH to be up
[2023-09-14T10:20:38.637] [debug] [giles-adm] QMP: {"QMP": {"version": {"qemu": {"micro": 0, "minor": 0, "major": 8}, "package": ""}, "capabilities": ["oob"]}}
[2023-09-14T10:20:38.641] [warning] [giles-adm] Could not open option rom 'kvmvapic.bin': No such file or directory
[2023-09-14T10:20:38.641] [warning] [qemu-system-x86_64]
[2023-09-14T10:20:38.660] [debug] [giles-adm] QMP: {"return": {}}
[2023-09-14T10:20:42.100] [debug] [giles-adm] QMP: {"timestamp": {"seconds": 1694683242, "microseconds": 99962}, "event": "RTC_CHANGE", "data": {"offset": -1, "qom-path": "/machine/unattached/device[10]"}}
[2023-09-14T10:20:43.100] [debug] [giles-adm] QMP: {"timestamp": {"seconds": 1694683242, "microseconds": 220890}, "event": "RTC_CHANGE", "data": {"offset": -1, "qom-path": "/machine/unattached/device[10]"}}
[2023-09-14T10:20:57.703] [debug] [giles-adm] QMP: {"timestamp": {"seconds": 1694683257, "microseconds": 703370}, "event": "NIC_RX_FILTER_CHANGED", "data": {"path": "/machine/unattached/device[21]/virtio-backend"}}
[2023-09-14T10:21:17.669] [debug] [ssh session] Executing 'cat /proc/loadavg | cut -d ' ' -f1-3'
[2023-09-14T10:21:19.550] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:118 read_stream(type = 0, timeout = -1):
[2023-09-14T10:21:19.550] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 15
[2023-09-14T10:21:19.550] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 0
[2023-09-14T10:21:19.550] [debug] [ssh session] Executing 'free -b | grep 'Mem:' | awk '{printf $3}''
[2023-09-14T10:21:19.570] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:118 read_stream(type = 0, timeout = -1):
[2023-09-14T10:21:19.570] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 9
[2023-09-14T10:21:19.570] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 0
[2023-09-14T10:21:19.570] [debug] [ssh session] Executing 'free -b | grep 'Mem:' | awk '{printf $2}''
[2023-09-14T10:21:19.593] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:118 read_stream(type = 0, timeout = -1):
[2023-09-14T10:21:19.593] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 10
[2023-09-14T10:21:19.593] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 0
[2023-09-14T10:21:19.593] [debug] [ssh session] Executing 'df -t ext4 -t vfat --total -B1 --output=used | tail -n 1'
[2023-09-14T10:21:19.604] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:118 read_stream(type = 0, timeout = -1):
[2023-09-14T10:21:19.604] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 11
[2023-09-14T10:21:19.604] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 0
[2023-09-14T10:21:19.604] [debug] [ssh session] Executing 'df -t ext4 -t vfat --total -B1 --output=size | tail -n 1'
[2023-09-14T10:21:19.615] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:118 read_stream(type = 0, timeout = -1):
[2023-09-14T10:21:19.615] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 12
[2023-09-14T10:21:19.615] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 0
[2023-09-14T10:21:19.615] [debug] [ssh session] Executing 'nproc'
[2023-09-14T10:21:19.628] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:118 read_stream(type = 0, timeout = -1):
[2023-09-14T10:21:19.628] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 2
[2023-09-14T10:21:19.628] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 0
[2023-09-14T10:21:19.672] [debug] [ssh session] Executing 'ip -brief -family inet address show scope global'
[2023-09-14T10:21:21.232] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:118 read_stream(type = 0, timeout = -1):
[2023-09-14T10:21:21.232] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 50
[2023-09-14T10:21:21.232] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 0
[2023-09-14T10:21:21.232] [debug] [ssh session] Executing 'cat /etc/os-release | grep 'PRETTY_NAME' | cut -d \" -f2'
[2023-09-14T10:21:21.251] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:118 read_stream(type = 0, timeout = -1):
[2023-09-14T10:21:21.251] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 19
[2023-09-14T10:21:21.251] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 0
[2023-09-14T10:21:24.818] [info] [sshfs-mount-handler] initializing mount /Users/giles => /Users/giles in 'giles-adm'
[2023-09-14T10:21:24.860] [debug] [ssh session] Executing 'which snap'
[2023-09-14T10:21:25.577] [debug] [ssh session] Executing 'sudo snap list multipass-sshfs'
[2023-09-14T10:21:25.665] [debug] [ssh session] Executing '[ -e /snap ]'
[2023-09-14T10:21:25.670] [info] [sshfs-mount-handler] Installing the multipass-sshfs snap in 'giles-adm'
[2023-09-14T10:21:25.670] [debug] [ssh session] Executing 'sudo snap install multipass-sshfs'
[2023-09-14T10:21:25.762] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:118 read_stream(type = 1, timeout = -1):
[2023-09-14T10:21:25.762] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:21:25.762] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:21:25.762] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:21:25.762] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:21:25.762] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:21:25.762] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:21:25.762] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:21:25.762] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:21:25.762] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:21:25.762] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 243
[2023-09-14T10:21:25.762] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 0
[2023-09-14T10:21:25.762] [warning] [sshfs-mount-handler] Failed to install 'multipass-sshfs': fatal error: unexpected signal during runtime execution
[signal SIGBUS: bus error code=0x2 addr=0x559c77ddb3fc pc=0x559c76f51e8d]
runtime stack:
runtime.throw({0x559c775d8476?, 0x7ffe7b51b670?})
/usr/lib/go-1.18/src/runtime/panic.go:992 +0x71
runtime.sigpanic()
/usr/lib/go-1.18/src/runtime/signal_unix.go:802 +0x3a9
runtime.isSystemGoroutine(0xc0000a1380?, 0x0)
/usr/lib/go-1.18/src/runtime/traceback.go:1117 +0x2d
runtime.newproc1(0xc00017e720, 0x559c77ee4a38?, 0x559c7734aeca)
/usr/lib/go-1.18/src/runtime/proc.go:4114 +0x16c
runtime.newproc.func1()
/usr/lib/go-1.18/src/runtime/proc.go:4056 +0x25
runtime.systemstack()
/usr/lib/go-1.18/src/runtime/asm_amd64.s:469 +0x46
goroutine 1 [running, locked to thread]:
runtime.systemstack_switch()
/usr/lib/go-1.18/src/runtime/asm_amd64.s:436 fp=0xc0001316d0 sp=0xc0001316c8 pc=0x559c76f59e80
runtime.newproc(0x0?)
/usr/lib/go-1.18/src/runtime/proc.go:4055 +0x51 fp=0xc000131708 sp=0xc0001316d0 pc=0x559c76f36971
fatal error: unexpected signal during runtime execution
panic during panic
[signal SIGBUS: bus error code=0x2 addr=0x559c77ddb390 pc=0x559c76f4e653]
runtime stack:
runtime.throw({0x559c775d8476?, 0x559c777663f8?})
/usr/lib/go-1.18/src/runtime/panic.go:992 +0x71
runtime.sigpanic()
/usr/lib/go-1.18/src/runtime/signal_unix.go:802 +0x3a9
runtime.gentraceback(0x7ffe7b51b6f8?, 0x0?, 0x8?, 0x0?, 0x0, 0x0, 0x64, 0x0, 0x7ffe7b51b340?, 0x0)
/usr/lib/go-1.18/src/runtime/traceback.go:138 +0x373
runtime.traceback1(0xc0000021a0?, 0x0?, 0x7ffe7b51b4e0?, 0xc0000021a0, 0x559c775b3625?)
/usr/lib/go-1.18/src/runtime/traceback.go:835 +0x1b1
runtime.traceback(...)
/usr/lib/go-1.18/src/runtime/traceback.go:782
runtime.tracebackothers(0x559c77eb02a0)
/usr/lib/go-1.18/src/runtime/traceback.go:1027 +0x92
runtime.dopanic_m(0x559c77eb02a0, 0x1?, 0x1?)
/usr/lib/go-1.18/src/runtime/panic.go:1192 +0x27c
runtime.fatalthrow.func1()
/usr/lib/go-1.18/src/runtime/panic.go:1047 +0x48
runtime.fatalthrow()
/usr/lib/go-1.18/src/runtime/panic.go:1044 +0x50
runtime.throw({0x559c775d8476?, 0x7ffe7b51b670?})
/usr/lib/go-1.18/src/runtime/panic.go:992 +0x71
runtime.sigpanic()
/usr/lib/go-1.18/src/runtime/signal_unix.go:802 +0x3a9
runtime.isSystemGoroutine(0xc0000a1380?, 0x0)
/usr/lib/go-1.18/src/runtime/traceback.go:1117 +0x2d
runtime.newproc1(0xc00017e720, 0x559c77ee4a38?, 0x559c7734aeca)
/usr/lib/go-1.18/src/runtime/proc.go:4114 +0x16c
runtime.newproc.func1()
/usr/lib/go-1.18/src/runtime/proc.go:4056 +0x25
runtime.systemstack()
/usr/lib/go-1.18/src/runtime/asm_amd64.s:469 +0x46
[2023-09-14T10:21:30.408] [debug] [giles-adm] QMP: {"timestamp": {"seconds": 1694683290, "microseconds": 408057}, "event": "RTC_CHANGE", "data": {"offset": 0, "qom-path": "/machine/unattached/device[10]"}}
[2023-09-14T10:24:50.536] [info] [sshfs-mount-handler] initializing mount /Users/giles => /Users/giles in 'giles-adm'
[2023-09-14T10:24:50.572] [debug] [ssh session] Executing 'which snap'
[2023-09-14T10:24:51.217] [debug] [ssh session] Executing 'sudo snap list multipass-sshfs'
[2023-09-14T10:24:51.300] [debug] [ssh session] Executing '[ -e /snap ]'
[2023-09-14T10:24:51.307] [info] [sshfs-mount-handler] Installing the multipass-sshfs snap in 'giles-adm'
[2023-09-14T10:24:51.308] [debug] [ssh session] Executing 'sudo snap install multipass-sshfs'
[2023-09-14T10:24:51.399] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:118 read_stream(type = 1, timeout = -1):
[2023-09-14T10:24:51.399] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:24:51.399] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:24:51.399] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:24:51.399] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:24:51.399] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:24:51.399] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:24:51.399] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:24:51.399] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:24:51.399] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 256
[2023-09-14T10:24:51.399] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 243
[2023-09-14T10:24:51.399] [debug] [ssh process] /Users/cibot/actions-runner/_work/multipass-private/multipass-private/src/ssh/ssh_process.cpp:136 read_stream(): num_bytes = 0
[2023-09-14T10:24:51.399] [warning] [sshfs-mount-handler] Failed to install 'multipass-sshfs': fatal error: unexpected signal during runtime execution
[signal SIGBUS: bus error code=0x2 addr=0x55deb7ca23fc pc=0x55deb6e18e8d]
runtime stack:
runtime.throw({0x55deb749f476?, 0x7ffd98bdd440?})
/usr/lib/go-1.18/src/runtime/panic.go:992 +0x71
runtime.sigpanic()
/usr/lib/go-1.18/src/runtime/signal_unix.go:802 +0x3a9
runtime.isSystemGoroutine(0xc000098ea0?, 0x0)
/usr/lib/go-1.18/src/runtime/traceback.go:1117 +0x2d
runtime.newproc1(0xc000176720, 0x55deb7daba38?, 0x55deb7211eca)
/usr/lib/go-1.18/src/runtime/proc.go:4114 +0x16c
runtime.newproc.func1()
/usr/lib/go-1.18/src/runtime/proc.go:4056 +0x25
runtime.systemstack()
/usr/lib/go-1.18/src/runtime/asm_amd64.s:469 +0x46
goroutine 1 [running, locked to thread]:
runtime.systemstack_switch()
/usr/lib/go-1.18/src/runtime/asm_amd64.s:436 fp=0xc0001296d0 sp=0xc0001296c8 pc=0x55deb6e20e80
runtime.newproc(0x0?)
/usr/lib/go-1.18/src/runtime/proc.go:4055 +0x51 fp=0xc000129708 sp=0xc0001296d0 pc=0x55deb6dfd971
fatal error: unexpected signal during runtime execution
panic during panic
[signal SIGBUS: bus error code=0x2 addr=0x55deb7ca2390 pc=0x55deb6e15653]
runtime stack:
runtime.throw({0x55deb749f476?, 0x55deb762d3f8?})
/usr/lib/go-1.18/src/runtime/panic.go:992 +0x71
runtime.sigpanic()
/usr/lib/go-1.18/src/runtime/signal_unix.go:802 +0x3a9
runtime.gentraceback(0x7ffd98bdd4c8?, 0x0?, 0x8?, 0x0?, 0x0, 0x0, 0x64, 0x0, 0x7ffd98bdd110?, 0x0)
/usr/lib/go-1.18/src/runtime/traceback.go:138 +0x373
runtime.traceback1(0xc0000021a0?, 0x0?, 0x7ffd98bdd2b0?, 0xc0000021a0, 0x55deb747a625?)
/usr/lib/go-1.18/src/runtime/traceback.go:835 +0x1b1
runtime.traceback(...)
/usr/lib/go-1.18/src/runtime/traceback.go:782
runtime.tracebackothers(0x55deb7d772a0)
/usr/lib/go-1.18/src/runtime/traceback.go:1027 +0x92
runtime.dopanic_m(0x55deb7d772a0, 0x1?, 0x1?)
/usr/lib/go-1.18/src/runtime/panic.go:1192 +0x27c
runtime.fatalthrow.func1()
/usr/lib/go-1.18/src/runtime/panic.go:1047 +0x48
runtime.fatalthrow()
/usr/lib/go-1.18/src/runtime/panic.go:1044 +0x50
runtime.throw({0x55deb749f476?, 0x7ffd98bdd440?})
/usr/lib/go-1.18/src/runtime/panic.go:992 +0x71
runtime.sigpanic()
/usr/lib/go-1.18/src/runtime/signal_unix.go:802 +0x3a9
runtime.isSystemGoroutine(0xc000098ea0?, 0x0)
/usr/lib/go-1.18/src/runtime/traceback.go:1117 +0x2d
runtime.newproc1(0xc000176720, 0x55deb7daba38?, 0x55deb7211eca)
/usr/lib/go-1.18/src/runtime/proc.go:4114 +0x16c
runtime.newproc.func1()
/usr/lib/go-1.18/src/runtime/proc.go:4056 +0x25
runtime.systemstack()
/usr/lib/go-1.18/src/runtime/asm_amd64.s:469 +0x46
[2023-09-14T10:32:38.409] [debug] [giles-adm] QMP: {"timestamp": {"seconds": 1694683958, "microseconds": 408926}, "event": "RTC_CHANGE", "data": {"offset": 0, "qom-path": "/machine/unattached/device[10]"}}
Additional info
- OS: MacOS 12.6.9
- multipass version:
multipass    1.12.2+mac
multipassd   1.12.2+mac
- multipass info --all:
Name:           giles-adm
State:          Running
IPv4:           192.168.64.15
Release:        Ubuntu 18.04.5 LTS
Image hash:     01080e9e2908 (Ubuntu 18.04 LTS)
CPU(s):         1
Load:           1.85 1.97 1.66
Disk usage:     8.3GiB out of 9.6GiB
Memory usage:   982.1MiB out of 3.8GiB
Mounts:         --
- multipass get local.driver:
qemu
Hi @gillez, thanks a lot for the detailed report. It looks like the installation of multipass-sshfs cannot succeed, whether run manually or invoked by multipass. It could be a bug in snapd. In any case, based on the discussion in the previous issue, deleting the snapd state file and restarting snapd seemed to solve it. Have you already tried that in this case?
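For reference, the recovery steps discussed in the previous issue can be sketched roughly as follows, run inside the instance. This is only a sketch: /var/lib/snapd/state.json is the usual state location on Ubuntu, deleting it wipes all snapd state, so it should be treated as a last resort (the snippet moves the file aside rather than deleting it outright):

```shell
#!/bin/sh
# Sketch of the "delete snapd state and restart snapd" recovery discussed
# above. Wrapped in a function so nothing destructive runs until it is
# explicitly invoked.
reset_snapd_state() {
    # Stop snapd and its activation socket before touching any state.
    sudo systemctl stop snapd.service snapd.socket

    # Move the state database aside instead of deleting it, so it can be
    # restored if this makes things worse.
    sudo mv /var/lib/snapd/state.json /var/lib/snapd/state.json.bak

    # Restart the socket and service so snapd rebuilds its state.
    sudo systemctl start snapd.socket snapd.service
}
```

If the restart succeeds, snap list should run again without the SIGBUS crash, after which the multipass-sshfs install can be retried.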
Hi @georgeliao, thanks for the reply. Yes, in this case I tried deleting the snapd state file and then restarting snapd.service and snapd.socket, but it didn't make any difference.
@gillez, based on the Go runtime error in the log, it looks like a bug in snapd. Can you show us the snapd version you have by running snap --version? Maybe an upgrade will solve it.
Hi @georgeliao, I am currently using the hyperkit driver, where the image is running fine. I'm not sure whether the version of snap there will differ from the image in the qemu driver; the qemu image was created automatically from the hyperkit one when I changed drivers for the first time. snap --version in the hyperkit instance gives the following output:
snap     2.60.3
snapd    2.60.3
series   16
ubuntu   18.04
kernel   4.15.0-126-generic
When I reach a convenient point I will switch back to the qemu driver and check the snap version there. However, it's always a pain when I switch between the drivers: the image usually fails to load when I go back to hyperkit, and I have to restart the multipass service and, occasionally, the Mac. Therefore I don't want to do it at the moment. I'll update this thread when I can.
Hi @georgeliao, sorry for the delay. I finally have a chance to switch back to the qemu driver to find out the version of snap. Unfortunately, snap --version gives the same fatal error as the other snap commands!
~$ snap --version
fatal error: unexpected signal during runtime execution
[signal SIGBUS: bus error code=0x2 addr=0x55ed5fd383fc pc=0x55ed5eeaee8d]
runtime stack:
runtime.throw({0x55ed5f535476?, 0x7ffd70bb39e0?})
/usr/lib/go-1.18/src/runtime/panic.go:992 +0x71
runtime.sigpanic()
/usr/lib/go-1.18/src/runtime/signal_unix.go:802 +0x3a9
runtime.isSystemGoroutine(0xc0000a51e0?, 0x0)
/usr/lib/go-1.18/src/runtime/traceback.go:1117 +0x2d
runtime.newproc1(0xc00011c720, 0x55ed5fe41a38?, 0x55ed5f2a7eca)
/usr/lib/go-1.18/src/runtime/proc.go:4114 +0x16c
runtime.newproc.func1()
/usr/lib/go-1.18/src/runtime/proc.go:4056 +0x25
runtime.systemstack()
/usr/lib/go-1.18/src/runtime/asm_amd64.s:469 +0x46
goroutine 1 [running, locked to thread]:
runtime.systemstack_switch()
/usr/lib/go-1.18/src/runtime/asm_amd64.s:436 fp=0xc0001336d0 sp=0xc0001336c8 pc=0x55ed5eeb6e80
runtime.newproc(0x0?)
/usr/lib/go-1.18/src/runtime/proc.go:4055 +0x51 fp=0xc000133708 sp=0xc0001336d0 pc=0x55ed5ee93971
fatal error: unexpected signal during runtime execution
panic during panic
[signal SIGBUS: bus error code=0x2 addr=0x55ed5fd38390 pc=0x55ed5eeab653]
runtime stack:
runtime.throw({0x55ed5f535476?, 0x55ed5f6c33f8?})
/usr/lib/go-1.18/src/runtime/panic.go:992 +0x71
runtime.sigpanic()
/usr/lib/go-1.18/src/runtime/signal_unix.go:802 +0x3a9
runtime.gentraceback(0x7ffd70bb3a68?, 0x0?, 0x8?, 0x0?, 0x0, 0x0, 0x64, 0x0, 0x7ffd70bb36b0?, 0x0)
/usr/lib/go-1.18/src/runtime/traceback.go:138 +0x373
runtime.traceback1(0xc0000021a0?, 0x0?, 0x7ffd70bb3850?, 0xc0000021a0, 0x55ed5f510625?)
/usr/lib/go-1.18/src/runtime/traceback.go:835 +0x1b1
runtime.traceback(...)
/usr/lib/go-1.18/src/runtime/traceback.go:782
runtime.tracebackothers(0x55ed5fe0d2a0)
/usr/lib/go-1.18/src/runtime/traceback.go:1027 +0x92
runtime.dopanic_m(0x55ed5fe0d2a0, 0x1?, 0x1?)
/usr/lib/go-1.18/src/runtime/panic.go:1192 +0x27c
runtime.fatalthrow.func1()
/usr/lib/go-1.18/src/runtime/panic.go:1047 +0x48
runtime.fatalthrow()
/usr/lib/go-1.18/src/runtime/panic.go:1044 +0x50
runtime.throw({0x55ed5f535476?, 0x7ffd70bb39e0?})
/usr/lib/go-1.18/src/runtime/panic.go:992 +0x71
runtime.sigpanic()
/usr/lib/go-1.18/src/runtime/signal_unix.go:802 +0x3a9
runtime.isSystemGoroutine(0xc0000a51e0?, 0x0)
/usr/lib/go-1.18/src/runtime/traceback.go:1117 +0x2d
runtime.newproc1(0xc00011c720, 0x55ed5fe41a38?, 0x55ed5f2a7eca)
/usr/lib/go-1.18/src/runtime/proc.go:4114 +0x16c
runtime.newproc.func1()
/usr/lib/go-1.18/src/runtime/proc.go:4056 +0x25
runtime.systemstack()
/usr/lib/go-1.18/src/runtime/asm_amd64.s:469 +0x46
FWIW, I would rather not use the qemu driver. I have nothing but problems with it. It seems to work fine when I'm running one instance; then I'll try to start another one that was working fine a few days before, and it won't start. Or it'll start but I can't mount a drive. Or some other problem. Sometimes a multipass service restart fixes it, sometimes it doesn't. Similarly for a Mac restart. I've had these problems a lot with the qemu driver.
The hyperkit driver may be deprecated and due to be removed in a future release, but at the moment it seems far more stable, although I still have problems with stopping and starting instances. Stopping an instance usually hangs for ages and then crashes with the error [error] [giles-adm] process error occurred Crashed
and I have to restart the multipass service before I can start it again. But this is still far more stable and reliable than the qemu driver!
Sorry to moan. If there's anything I can do to help make the qemu driver more stable I'm happy to help. But for my normal work I'll stick with hyperkit until it's removed.
Don't know if it helps, but just FYI I have three instances on the qemu driver and all of them were migrated automatically from hyperkit. I haven't tried creating a new instance in the qemu driver. Maybe that would be more stable?
Hi @gillez,
That snap failure is definitely a bug in snapd, and something we need to report to the snapd team. However, our team has had a hard time reproducing it.
Sorry to hear about the bad experience you have had with qemu. Since you mentioned that all your instances were migrated, launching a new instance via qemu may be an option.
Thanks @georgeliao. I'm sure the snap bug is difficult to reproduce, although it happens every time for me, and restarting the instance, the service, or the Mac does not fix it. It's obviously just some state it's got into.
I can try backing up the data from my hyperkit instances, then switching to qemu, creating a new instance, and restoring my data. It's just a bit more time-consuming than the migration method, but I'll give it a go. If other people don't generally have problems with the qemu driver on MacOS 12.6, then maybe that will sort it out and new images will be super-reliable.
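The backup-and-recreate plan could be sketched along these lines, run from the Mac. The instance names (giles-adm, giles-new), the 18.04 image, and the archive paths are placeholders for illustration, not a tested recipe:

```shell
#!/bin/sh
# Hypothetical sketch of backing up a hyperkit instance's home directory
# and restoring it into a freshly launched qemu instance. Wrapped in a
# function so nothing runs until it is explicitly invoked.
backup_and_recreate() {
    # Archive the home directory inside the old (hyperkit) instance.
    multipass exec giles-adm -- tar czf /tmp/backup.tar.gz -C /home/ubuntu .

    # Copy the archive out to the Mac.
    multipass transfer giles-adm:/tmp/backup.tar.gz ./backup.tar.gz

    # Switch drivers, then launch a clean instance under qemu.
    multipass set local.driver=qemu
    multipass launch 18.04 --name giles-new

    # Copy the archive into the new instance and unpack it.
    multipass transfer ./backup.tar.gz giles-new:/home/ubuntu/backup.tar.gz
    multipass exec giles-new -- tar xzf /home/ubuntu/backup.tar.gz -C /home/ubuntu
}
```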
BTW, do you have any idea when the hyperkit driver will be removed?
It's obviously just some state it's got into.
Indeed, it is a nasty state the qemu instance got into that caused snapd to behave like this.
I can try backing up the data from my hyperkit instances, then switching to qemu, creating a new instance, and restoring my data. It's just a bit more time-consuming than the migration method.
This will likely work, because the state of the hyperkit instance is still clean.
If other people don't generally have problems with the qemu driver on MacOS 12.6, then maybe that will sort it out and new images will be super-reliable.
That's also true: most users have not experienced the instability you describe.
BTW, do you have any idea when the hyperkit driver will be removed?
I think it will be removed in multipass 1.13; exactly when that will be released is not yet clear.