Low throughput or "No frames detected" - Windows 10 inside VM
| Required Info | Details |
|---|---|
| Camera Model | D435i |
| Firmware Version | 5.15.1 |
| Operating System & Version | Win (10) |
| Kernel Version (Linux Only) | . |
| Platform | PC |
| SDK Version | 2.54.2 |
| Language | Python/Realsense Viewer |
| Segment | Robot |
Hey! I have an issue using the Realsense Viewer inside a Windows 10 virtual machine.
The problem: The overall throughput of frames is miserable. The best I can get is [email protected] from the RGB stream, or [email protected] FPS from the depth camera. Even when the streams work, the frame rates fluctuate, and they usually stop working after some time. Most of the time the streams won't even start ("No frames received"). The overall RealSense Viewer performance is laggy and stuttering: it idles at around 30 FPS, but whenever I start a stream it lags for a few seconds and the application's FPS drops to around 5. rs-capture delivers similar frame rates and sometimes won't start at all (probably the "No frames received" case). I ultimately need to use the pyrealsense2 library in my project, and there I see the same behavior ("Frame didn't arrive within ..." or very low frame rates even at low resolutions and FPS settings). I already disabled the GLSL options in the Viewer, with no noticeable change.
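As a stopgap while the root cause is unsolved, the pyrealsense2 timeout (which surfaces as a `RuntimeError` with "Frame didn't arrive within ...") can be tolerated by retrying the poll a few times instead of failing on the first timeout. A minimal sketch; `grab_frames` and the `FlakySource` stand-in are hypothetical names, not part of the SDK — in practice `wait_fn` would be something like `pipeline.wait_for_frames`:

```python
import time

def grab_frames(wait_fn, attempts=5, delay_s=1.0):
    """Call wait_fn repeatedly, tolerating transient timeouts.

    wait_fn is any zero-argument callable that raises RuntimeError on
    timeout (as pyrealsense2's wait_for_frames does) and returns a
    frameset on success.
    """
    last_err = None
    for _ in range(attempts):
        try:
            return wait_fn()
        except RuntimeError as err:
            last_err = err
            time.sleep(delay_s)  # give the virtualized USB stack a moment
    raise RuntimeError(f"no frames after {attempts} attempts") from last_err

class FlakySource:
    """Stand-in for a camera that times out a few times before delivering."""
    def __init__(self, fail_times):
        self.fails_left = fail_times

    def wait(self):
        if self.fails_left > 0:
            self.fails_left -= 1
            raise RuntimeError("Frame didn't arrive within 15000")
        return "frameset"
```

This doesn't fix the underlying VM throughput problem, but for a workload that only needs 20-50 frames in a flexible time window it can turn intermittent timeouts into a working capture loop.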
The setup: It's a Windows 10 VM running on a Windows 10 host (for permission reasons I do need to be in this VM). I can pass the camera through to the VM using the USB3 controller, and it shows up in the RealSense Viewer as USB 3.2. The host machine is not especially powerful but should be sufficient for the task: an i5-4590 with 8 GB RAM, of which I forward 3 of the 4 cores and 6 GB to the VM. I don't need constant frame rates for my application; I have a flexible amount of time to receive around 20-50 frames, but they should be at the best available quality.
Some things I tried and their results:
- using a different Windows 10 x64 VM -> same behavior
- using different host machine with more powerful cores (i5-13600KF) -> same behavior, just the viewer running more smoothly
- using different VM settings (more cores and way more RAM) -> same behavior, just the viewer running more smoothly
- using the Viewer on a (different) host machine -> working as expected
- using different USB ports + cables -> same behavior
- opening the realsense-viewer without the camera connected inside the VM -> no laggy behavior, opening fast
This all made me think it is probably some virtualization bug. But using an Ubuntu VM with the same hardware settings (cores + RAM), it works perfectly, even on the desired host. For final deployment I am forced to use Windows, so that is not an option.
Do you have any idea what causes these issues and how I can solve them?
Hi @lwag-s The link below provides information about similar cases where performance was very low on a Windows VM.
https://support.intelrealsense.com/hc/en-us/community/posts/1500000585541-D435-Depth-No-frames-received
There are very few references regarding using RealSense on a Windows VM. However, a key difference between the Linux version of librealsense and the Windows one is that the Linux one is based on the V4L2 Backend whilst the Windows SDK is based on the Microsoft Media Foundation backend.
It is possible to build the Windows SDK from source code with CMake instead of using the installer program. Setting the build flag FORCE_RSUSB_BACKEND to True in the CMake build settings should build librealsense on Windows based on a WinUSB rewrite of the UVC protocol instead of Media Foundation.
https://github.com/IntelRealSense/librealsense/blob/master/doc/installation_windows.md
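The flag can also be set from the command line instead of the CMake GUI. A sketch of the full sequence, assuming CMake and the Visual Studio build tools are installed (the exact generator name depends on the installed VS version):

```shell
# Clone librealsense and configure a build with the WinUSB (RSUSB) backend
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense
mkdir build && cd build

# FORCE_RSUSB_BACKEND=ON replaces the Media Foundation backend with
# the WinUSB rewrite of the UVC protocol, as described above
cmake .. -G "Visual Studio 16 2019" -A x64 ^
    -DFORCE_RSUSB_BACKEND=ON -DBUILD_EXAMPLES=ON

# Build the Release configuration without opening Visual Studio;
# binaries such as realsense-viewer.exe land in build\Release
cmake --build . --config Release
```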
Hello @MartyG-RealSense ! Thanks for your reply. I did actually look into both the topic on your support page as well as the GitHub issue. Unfortunately, both were unsolved.
I will try to build the SDK from source and will get back to you shortly.
Hello again, I was able to build the SDK from source both with and without the flag set, which came as a surprise because it was my first time building a repo from source.
I ran the RealSense-viewer executable without the flag, and the behavior was the same as with the pre-built packages. With the flag, the RealSense-viewer did start quicker, but it wasn't able to find a connected device. On the starting screen it said "Please connect a RealSense device", then it moved to the main window, still without a connected device.
Process: generate the project with CMake, open it in Visual Studio, set the ALL_BUILD target as the startup project, set the configuration to "Release", build with Ctrl+B, then run "\build\Release\realsense-viewer.exe".
Did I miss anything?
Which VM tool are you using on Windows, please? It needs to be one that can emulate the USB3 controller, otherwise USB devices such as RealSense cameras will not be able to be detected. For this reason, Intel suggest using VMware Workstation Player rather than Oracle VirtualBox.
I am using VMWare Workstation and VMWare Workstation player.
The camera is detected properly in the prebuilt packages and in builds without the flag set:

> I ran the RealSense-viewer executable without the flag, the behavior was the same as the pre-built packages.
Can you confirm please whether you have tried the standalone pre-built .exe version of the RealSense Viewer (Intel.RealSense.Viewer.exe) that can be downloaded and run from the 'Assets' list of the SDK Releases page?
https://github.com/IntelRealSense/librealsense/releases/tag/v2.54.2
Can confirm I did try that. Also tried v2.50.0 for reference, same behavior.
I decided to widen my research to any cases where a VM was slow on Windows - not just with RealSense - by googling for the search term "windows" "vm" "camera" "slow".
The link below, found in the search results for that term, has a suggestion for improving performance on VMware Workstation.
https://www.reddit.com/r/vmware/comments/m0w5x0/vmware_workstation_16_ui_not_vm_very_slow_on/
My VMWare works just fine.
I was thinking about forcing a USB2 connection, but after selecting the corresponding USB controller in VMware the device reconnects and afterwards shows up as an "unknown device" in Device Manager. Also, the RealSense Viewer didn't recognize the device.
Something I considered was routing the device not via VMware but via USB-over-network, or using something like this: https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras I would need to set the server up on the host and connect to it from the VM. Two concerns: first, I would need the server running on the Windows host instead of Raspbian as in the article. Second, the current-generation realsense-viewer does not allow remote connections. I'd love to hear your opinion about this. Thanks!
The ethernet networking tool was removed from the librealsense SDK in version 2.54.1, so a version older than that such as 2.53.1 or 2.51.1 would need to be used.
Using Windows as a remote server would not work right away as the rs-server component can only be compiled on Linux.
https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras#32-building-from-source
It may be possible to compile rs-server on a Linux computer and then use it on Windows, though I cannot recall it having been attempted by RealSense users in past cases.
Intel are planning to introduce a new networking component in an upcoming SDK version, but there is not further information available about it at the time of writing this.
Thanks for the answer. As far as I can tell there is no prebuilt server, correct? I could try starting the server in the aforementioned Ubuntu VM, but I can't build it there.
Besides this, I don't see a reasonable solution for my problem. Do you?
The only pre-built server image available is for the Raspberry Pi 4 computing board.
https://dev.intelrealsense.com/docs/open-source-ethernet-networking-for-intel-realsense-depth-cameras#23-preparing-the-sd-card
As you found that realsense-viewer has no lag when the camera is not connected, this suggests that the Windows virtualization might be struggling with the camera specifically and so any method of streaming the camera in VM might have similar lag, unfortunately.
Is there any way to force a USB2 mode via the UI or the camera? I don't need 15 or 30 FPS; it'd be enough to have a stable, working system. As I mentioned, setting VMware to use USB2 means the camera is not recognized correctly.
If the USB connector is inserted only about three quarters of the way into the USB port instead of all the way then RealSense can be misled into detecting it as a USB2 connection.
Another way for the camera to be detected as USB2 would be to use a USB2 cable or plug the camera into a USB2 hub.
I could force USB 2 using an old extension cable that only supports USB2. Same behavior still.
One thing I noticed which may be a hint to you: whenever the RealSense Viewer works properly, it starts up with a short animation of the RealSense icon (at the screen where "Loading Intel RealSense Viewer v2.54.2" and "RealSense device detected" are shown). When the bug occurs, the animation is not shown. Do you think this could be correlated with my issue?
If you run the text-only rs-hello-realsense example program and it has a good FPS then this could indicate that your VM is having problems with the smooth rendering of graphics.
Using rs-hello-realsense I get the same error as when using pyrealsense2: "Frame didn't arrive within 15000". So I'm assuming it's an I/O problem rather than a graphics one. Are there any logs that can give more information?
You can view a real-time debug log in the RealSense Viewer by clicking on a small upward pointing arrow icon in the bottom corner of the Viewer window to expand open its debug console.
Okay, these logs indicate nothing further. I read you had commented on network streaming tools like GStreamer in combination with RealSense. If I want to run a second, lightweight VM only as a RealSense server, can I just use the prebuilt packages or do I need to build them on the Linux VM as well?
At the time of writing this after the removal of the rs-server tool, the only official RealSense networking interface that remains available is a Python-based one called EtherSense.
https://github.com/IntelRealSense/librealsense/tree/master/wrappers/python/examples/ethernet_client_server
https://dev.intelrealsense.com/docs/depth-camera-over-ethernet-whitepaper
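For orientation, the core idea behind an EtherSense-style Python link is just length-prefixed frames over TCP (the real EtherSense example additionally serializes and compresses depth data). A stripped-down sketch of that framing, assuming the host side has already pulled the frame bytes out of the SDK; all names here are illustrative, not EtherSense's API:

```python
import socket
import struct
import threading

def send_frame(sock, payload: bytes):
    # 4-byte big-endian length header, then the frame bytes
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def _recv_exact(sock, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed mid-frame")
        buf += chunk
    return buf

def recv_frame(sock) -> bytes:
    (length,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, length)

def demo() -> bytes:
    """Round-trip one fake frame over loopback."""
    server = socket.socket()
    server.bind(("127.0.0.1", 0))  # ephemeral port
    server.listen(1)
    port = server.getsockname()[1]

    def serve():
        conn, _ = server.accept()
        send_frame(conn, b"\x00\x01" * 8)  # stand-in for depth data
        conn.close()

    t = threading.Thread(target=serve)
    t.start()
    client = socket.socket()
    client.connect(("127.0.0.1", port))
    frame = recv_frame(client)
    client.close()
    t.join()
    server.close()
    return frame
```

In the scenario discussed here, the sender would run on the host (where the camera behaves) and the receiver inside the VM, sidestepping the virtualized USB path entirely.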
Intel plan to introduce a new networking interface in a future SDK release.
I am not aware of future plans to create an official GStreamer plugin, though unofficial ones have been created by RealSense users, such as https://github.com/WKDSMRT/realsense-gstreamer and the one at https://github.com/IntelRealSense/librealsense/issues/2503#issuecomment-695339593
Intel do not support or recommend use of RealSense cameras with a VM, though RealSense users can choose to attempt to use a VM and I appreciate that a number of RealSense users do so with tools such as VMWare and other VM tools that provide USB3 emulation.
Would it be acceptable for your situation to use Linux on your Windows PC with its Windows Subsystem for Linux (WSL2) feature like the RealSense user at https://github.com/IntelRealSense/librealsense/issues/11401
https://learn.microsoft.com/en-us/windows/wsl/about
I do not have advice to offer about installing and using a VM tool with RealSense unfortunately.
WSL inside the VM is not really reasonable, because it'd mean a VM running inside a VM.
I fear the answer, but is there any option to get access to a pre-release build of that networking-capable SDK? I am working under time constraints and the system needs to be working by the end of this month.
The next SDK release is planned for Q1 2024 (a January to March time window) but I do not have information about which future release the networking interface will be introduced in.
If you only need to stream camera data and not interact with it, another option might be rendering the streams in a browser window to see whether they render more smoothly than in a RealSense application. Two methods for doing so can be found at https://github.com/IntelRealSense/librealsense/issues/6047#issue-580047917 and the link below.
https://www.youtube.com/watch?v=eV5NIPKC_pc
I note though that you believe the lag is an I/O issue and not graphics.
Unfortunately I need to access both RGB and depth information for my project.
The depth2web project at the YouTube link has display buttons for both the depth and color streams of the camera.
The source code for depth2web is here: https://github.com/js6450/depth2web
I found the Depth2Web project and watched the video some days ago. I'm wondering whether it is possible to create a Python wrapper for a depth2web client, so that I can retrieve the RGB frame as well as a frame containing the depth information (i.e. the distances). The colorized depth frame is nice to have but not enough.
The only RealSense web app I know of that is Python code and can display both depth and RGB is Remote-RealSense.
https://github.com/soarwing52/Remote-Realsense
Yes, but that's a web server running inside Python code. I've been thinking about different approaches to solve the problem. I could split my script and exchange data via a database. I could also build a web server myself and somehow transfer the data from the host to the script inside the VM. Sadly, none of these are solutions to the actual problem.
One thing I noticed using older releases of the Viewer: when I boot versions up to 2.51, I get an error at startup:
Invalid Value in rs2_get_option(options: ###, option:Auto Gain Limit): hwmon command 0x80 ( 5 0 0 0 ) failed (response -7=HW not ready)
Can this be a hint?