Launch failed: timed out waiting for initialization to complete
Describe the bug
The multipass instance, even if it has successfully launched, will not exit the waiting step until the timeout error occurs.
To Reproduce
How, and what happened?
- multipass launch --name microstack --cpus 5 --memory 18G --disk 60G
- Wait for the timeout error: launch failed: The following errors occurred: timed out waiting for initialization to complete
- multipass shell microstack
- Shell into the instance successfully.
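For reference, a possible workaround while this is being investigated (a sketch only; it assumes the --timeout option of multipass launch, which takes a value in seconds, is available in this Multipass version):
# Raise the client-side wait so slow initialization can finish before the client gives up
$ multipass launch --name microstack --cpus 5 --memory 18G --disk 60G --timeout 1200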
Expected behavior
What did you expect to happen? To not time out.
Logs
Please provide logs from the daemon, see accessing logs on where to find them on your platform.
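For context, the full daemon log on an Ubuntu snap install can be read from the systemd journal (a sketch; it assumes the snap-packaged daemon, whose service unit is snap.multipass.multipassd):
# Full daemon log (snap-packaged multipassd assumed)
$ journalctl -u snap.multipass.multipassd
# Only the error lines, similar to the grep used below
$ journalctl -u snap.multipass.multipassd | grep -i error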
The following are the grepped error lines (there are quite a lot of logs in total):
Apr 13 23:12:42 nucnucwhosthere multipassd[1017]: Error getting https://codeload.github.com/canonical/multipass-blueprints/zip/refs/heads/main: Host codeload.github.com not found - trying cache.
Apr 13 23:12:42 nucnucwhosthere multipassd[1017]: Error getting https://cloud-images.ubuntu.com/releases/streams/v1/index.json: Host cloud-images.ubuntu.com not found - trying cache.
Apr 13 23:12:42 nucnucwhosthere multipassd[1017]: Error getting https://cloud-images.ubuntu.com/buildd/daily/streams/v1/index.json: Host cloud-images.ubuntu.com not found - trying cache.
Apr 13 23:12:42 nucnucwhosthere multipassd[1017]: Error getting https://cloud-images.ubuntu.com/daily/streams/v1/index.json: Host cloud-images.ubuntu.com not found - trying cache.
Apr 13 23:12:42 nucnucwhosthere multipassd[1017]: Error getting https://cloud-images.ubuntu.com/buildd/daily/streams/v1/com.ubuntu.cloud:daily:download.json: Host cloud-images.ubuntu.com not found - trying cache.
Apr 13 23:22:51 nucnucwhosthere dnsmasq[2616]: error binding DHCP socket to device mpqemubr0
Apr 13 23:24:09 nucnucwhosthere dnsmasq[44004]: error binding DHCP socket to device mpqemubr0
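The dnsmasq lines suggest the mpqemubr0 bridge was not ready when dnsmasq tried to bind to it. A quick check with standard tools (a sketch; nothing Multipass-specific is assumed beyond the mpqemubr0 bridge name already shown in the logs):
# Does the Multipass bridge exist and have an address?
$ ip addr show mpqemubr0
# Is the dnsmasq instance spawned for the bridge still running?
$ ps aux | grep [d]nsmasq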
Additional info
- OS: Ubuntu 22.04.4 LTS
- multipass version:
  multipass   1.13.1
  multipassd  1.13.1
- multipass info:
  Name:           microstack
  State:          Running
  Snapshots:      0
  IPv4:           10.107.183.254
  Release:        Ubuntu 22.04.4 LTS
  Image hash:     304983616fcb (Ubuntu 22.04 LTS)
  CPU(s):         5
  Load:           0.00 0.02 0.01
  Disk usage:     1.8GiB out of 58.1GiB
  Memory usage:   253.8MiB out of 17.6GiB
  Mounts:         --
- multipass get local.driver
Additional context
Add any other context about the problem here.
Might be related to https://github.com/canonical/multipass/issues/3464; multipassd was restarted (sudo snap restart multipass.multipassd) because of launch failed: Remote "" is unknown or unreachable.
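For reference, the restart plus a quick health check (a sketch using standard snap commands; a snap-based install is assumed):
$ sudo snap restart multipass.multipassd
# Confirm the daemon service is active again
$ snap services multipass
# Confirm the client can reach the daemon (prints both versions)
$ multipass version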
Hey, @yanksyoon! Sorry to hear that you're having issues. Could you please provide more logs? No need to grep them. Perhaps you could try to launch with -vvvv to get them. Thanks!
Sorry - right after I closed the issue I was able to reproduce.
$ multipass launch --name microstack --cpus 5 --memory 18G --disk 60G -vvvv
launch failed: Remote "" is unknown or unreachable.
After some time:
❯ multipass launch -vvvv --name microstack --cpus 5 --memory 18G --disk 60G
[2024-04-18T12:56:22.591] [trace] [url downloader] Found https://codeload.github.com/canonical/multipass-blueprints/zip/refs/heads/main in cache: true
[2024-04-18T12:56:22.593] [debug] [blueprint provider] Loading "anbox-cloud-appliance" v1
[2024-04-18T12:56:22.594] [debug] [blueprint provider] Loading "charm-dev" v1
[2024-04-18T12:56:22.595] [debug] [blueprint provider] Loading "docker" v1
[2024-04-18T12:56:22.595] [debug] [blueprint provider] Loading "jellyfin" v1
[2024-04-18T12:56:22.596] [debug] [blueprint provider] Loading "minikube" v1
[2024-04-18T12:56:22.597] [debug] [blueprint provider] Loading "ros-noetic" v1
[2024-04-18T12:56:22.597] [debug] [blueprint provider] Loading "ros2-humble" v1
launch failed: Remote "" is unknown or unreachable.
Thanks, @yanksyoon! But I don't think this is related to the issue you first described. In this 'Remote is unknown' case, the launch fails completely, and it is indeed a known issue. Initially, though, you described a launch command that fails after a timeout while the instance is still launched and operable. Can you reproduce that, or provide some logs from around the time it happened?
Hi @yanksyoon!
Could you please provide the answers that @andrei-toterman asked? We cannot proceed with this until we have more details. If we don't hear back from you soon, we'll close this bug as incomplete. Thanks!
Ah sorry - I can't quite reproduce it at the moment and the logs are no longer available, so I'm closing this. I'll keep the logs next time this happens. Thank you!