kubevirt
Unable to create VM on Kubernetes 1.30 using Kubevirt 1.5.0
What happened: I have deployed KubeVirt 1.5.0 on Kubernetes 1.30. There were no issues during deployment, but as soon as we start a VM it fails. We are using CDI and Ceph RBD for the volume and PVC. This combination works perfectly on Kubernetes 1.28.
What you expected to happen: It is expected to work the same way it works on Kubernetes 1.28.
How to reproduce it (as minimally and precisely as possible): Deploy KubeVirt on Kubernetes 1.30 and start a VM that uses Ceph RBD as the data volume (an example set of manifests is sketched below).
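For illustration, a minimal sketch of the kind of DataVolume and VirtualMachine pair described above; the storage class name (ceph-rbd), image URL, resource names, and sizes are placeholders and are not taken from the original report:

```yaml
# Hypothetical reproduction manifests, assuming a Ceph RBD storage class
# named "ceph-rbd" provisioned by ceph-csi and an importable qcow2 image.
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: oel9-dv
spec:
  source:
    http:
      url: "http://example.com/images/oel9.qcow2"   # placeholder image URL
  storage:
    storageClassName: ceph-rbd                       # assumed Ceph RBD storage class
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 20Gi
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: oel9-vm
spec:
  runStrategy: Always
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          dataVolume:
            name: oel9-dv
```

With manifests along these lines applied, the CDI import and the subsequent VM start are where the failure described above is observed.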
Additional context: Add any other context about the problem here.
Environment:
- KubeVirt version (use virtctl version): 1.5.0
- Kubernetes version (use kubectl version): 1.30
- VM or VMI specifications: oel9
- Cloud provider or hardware configuration: N/A
- OS (e.g. from /etc/os-release): oel9
- Kernel (e.g. uname -a): N/A
- Install tools: N/A
- Others: N/A
Logs:
{"component":"virt-launcher","level":"info","msg":"Collected all requested hook sidecar sockets","pos":"manager.go:88","timestamp":"2025-06-10T14:16:10.379441Z"}
{"component":"virt-launcher","level":"info","msg":"Sorted all collected sidecar sockets per hook point based on their priority and name: map[]","pos":"manager.go:91","timestamp":"2025-06-10T14:16:10.379562Z"}
{"component":"virt-launcher","level":"info","msg":"Connecting to libvirt daemon: qemu:///system","pos":"libvirt.go:547","timestamp":"2025-06-10T14:16:10.380769Z"}
{"component":"virt-launcher","level":"info","msg":"Connecting to libvirt daemon failed: virError(Code=38, Domain=7, Message='Failed to connect socket to '/var/run/libvirt/virtqemud-sock': No such file or directory')","pos":"libvirt.go:555","timestamp":"2025-06-10T14:16:10.381751Z"}
{"component":"virt-launcher","level":"info","msg":"libvirt version: 10.10.0, package: 4.el9 ([email protected], 2025-01-16-13:06:37, )","subcomponent":"libvirt","thread":"40","timestamp":"2025-06-10T14:16:10.422000Z"}
{"component":"virt-launcher","level":"info","msg":"hostname: kubemaster51","subcomponent":"libvirt","thread":"40","timestamp":"2025-06-10T14:16:10.422000Z"}
{"component":"virt-launcher","level":"error","msg":"internal error: Child process (dmidecode -q -t 0,1,2,3,4,11,17) unexpected exit status 1: Can't read memory from /dev/mem","pos":"virCommandWait:2770","subcomponent":"libvirt","thread":"40","timestamp":"2025-06-10T14:16:10.422000Z"}
{"component":"virt-launcher","level":"info","msg":"Connected to libvirt daemon","pos":"libvirt.go:563","timestamp":"2025-06-10T14:16:10.885384Z"}
{"component":"virt-launcher","level":"info","msg":"Registered libvirt event notify callback","pos":"client.go:573","timestamp":"2025-06-10T14:16:10.892958Z"}
{"component":"virt-launcher","level":"info","msg":"Marked as ready","pos":"virt-launcher.go:76","timestamp":"2025-06-10T14:16:10.893349Z"}
{"component":"virt-launcher-monitor","level":"info","msg":"signalling virt-launcher to shut down","pos":"virt-launcher-monitor.go:204","timestamp":"2025-06-10T14:16:13.590059Z"}
{"component":"virt-launcher","level":"info","msg":"Received signal terminated","pos":"virt-launcher.go:474","timestamp":"2025-06-10T14:16:13.590740Z"}
{"component":"virt-launcher","level":"info","msg":"stopping cmd server","pos":"server.go:625","timestamp":"2025-06-10T14:16:13.591095Z"}
{"component":"virt-launcher","level":"info","msg":"cmd server stopped","pos":"server.go:634","timestamp":"2025-06-10T14:16:13.591378Z"}
{"component":"virt-launcher","level":"info","msg":"Exiting...","pos":"virt-launcher.go:513","timestamp":"2025-06-10T14:16:13.591470Z"}
{"component":"virt-launcher-monitor","level":"info","msg":"Reaped pid 24 with status 9","pos":"virt-launcher-monitor.go:202","timestamp":"2025-06-10T14:16:13.592660Z"}
{"component":"virt-launcher-monitor","level":"info","msg":"Reaped pid 23 with status 9","pos":"virt-launcher-monitor.go:202","timestamp":"2025-06-10T14:16:13.593465Z"}
{"component":"virt-launcher-monitor","level":"info","msg":"Reaped Launcher main pid","pos":"virt-launcher-monitor.go:199","timestamp":"2025-06-10T14:16:13.596617Z"}
{"component":"virt-launcher-monitor","level":"info","msg":"Reaped pid 8 with status 0","pos":"virt-launcher-monitor.go:202","timestamp":"2025-06-10T14:16:13.596676Z"}
{"component":"virt-launcher-monitor","level":"info","msg":"virt-launcher-monitor: Exiting...","pos":"virt-launcher-monitor.go:145","timestamp":"2025-06-10T14:16:21.410961Z"}
Hi @AishwaryaPreet07! I'm Dosu and I'm helping the kubevirt team.
This is a known issue with KubeVirt 1.5.0: the error Can't read memory from /dev/mem is caused by a bug in the cellToCell function, which leads to a nil pointer dereference during VM startup. This bug is being addressed in PR #14145 and will be fixed in a future release. Until then, the recommended workaround is to use an earlier version of KubeVirt (such as 1.2.1 or 1.3.1), which does not have this issue [source].
The Failed to connect socket to '/var/run/libvirt/virtqemud-sock': No such file or directory error is a symptom of this underlying problem and indicates that the VM process cannot establish a connection to the libvirt daemon, likely due to the crash caused by the bug above.
There are also recent changes in KubeVirt 1.5.0 regarding device ownership and security context handling, which can cause permission issues with certain storage backends or network plugins. If you are using AppArmor or SELinux, ensure that your node security settings are not interfering with KubeVirt's ability to access required devices [source].
If you need to stay on KubeVirt 1.5.0, you can try the following as a temporary workaround:
- Make sure AppArmor and SELinux are configured to allow QEMU and libvirt to access necessary resources.
- If SELinux is disabled or in permissive mode, consider using a DaemonSet to mount a tmpfs over /sys/fs/selinux to prevent KubeVirt from detecting SELinux [source]; a sketch of such a DaemonSet follows this list.
- For AppArmor on Ubuntu, set kernel.apparmor_restrict_unprivileged_userns=0 to disable restrictions on unprivileged user namespaces [source].
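As a rough illustration of the tmpfs-over-/sys/fs/selinux workaround mentioned above, here is a hypothetical DaemonSet sketch. The namespace, image, and labels are placeholders, it requires privileged pods on the nodes, and it should be validated on a test node before wider use:

```yaml
# Hypothetical sketch: hides /sys/fs/selinux on each node by mounting a tmpfs
# over it from a privileged pod. Image, namespace, and labels are placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: hide-selinux
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: hide-selinux
  template:
    metadata:
      labels:
        app: hide-selinux
    spec:
      containers:
        - name: mount-tmpfs
          image: busybox:1.36            # placeholder image that provides mount(8)
          securityContext:
            privileged: true             # needed to create mounts that reach the host
          command:
            - sh
            - -c
            # Mount a tmpfs over the host's /sys/fs/selinux (propagated back to
            # the host via Bidirectional mount propagation), then stay alive.
            - mount -t tmpfs tmpfs /host-selinux && sleep 2147483647
          volumeMounts:
            - name: selinux
              mountPath: /host-selinux
              mountPropagation: Bidirectional
      volumes:
        - name: selinux
          hostPath:
            path: /sys/fs/selinux
```

The tmpfs mount only lasts until the node reboots or the mount is removed, so treat this strictly as a temporary workaround.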
Otherwise, the most reliable solution is to wait for the next KubeVirt release with the fix, or downgrade to a version that is known to work with your setup.
To reply, just mention @dosu.
@AishwaryaPreet07 Can I ask you to try with version 1.5.2? As Dosu mentioned, there was an issue with 1.5.0. Thanks
If KubeVirt 1.5.0 has an issue, then how come it's running perfectly fine on Kubernetes 1.28?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
/close
@kubevirt-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.