How to increase CPU/GPU utilization for Gazebo simulation
Environment
- OS Version: Ubuntu 22.04
- Gazebo: binary Garden from the official apt repository (also tested Harmonic and Fortress)
- CPU: 12 threads / GPU: NVIDIA RTX 3060
- Ogre2
- Running on real hardware, on a single-GPU machine
Description
- I run a simulation with the Fuel "collection" world plus the Fuel X1 robot model. Initially I added the diff-drive plugin for keyboard control, and the robot drove with no problems. Next I opened the image viewer for the two cameras in the Gazebo GUI, then subscribed to the two camera topics and the IMU topic with gz to track the data frequency. After subscribing to the cameras, the total RTF started to decrease, so the actual camera rate fell below the update rate set in the SDF file (for example, 5-6 fps instead of 20 fps). I tried other versions, Fortress and Harmonic, and they all behave the same way, except that Fortress runs faster under the same conditions.
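To make the frequency check above concrete: a small, transport-agnostic sketch for tracking the delivered message rate of a topic. Wiring it to an actual gz-transport subscriber callback is assumed and not shown; the class itself just timestamps messages and reports a running Hz figure you can compare against the `update_rate` set in the SDF.

```python
from collections import deque
import time


class RateMonitor:
    """Tracks the average arrival rate (Hz) of the last `window` messages."""

    def __init__(self, window=50):
        self.stamps = deque(maxlen=window)

    def tick(self, now=None):
        # Call this from the subscriber callback for the camera/IMU topic.
        self.stamps.append(time.monotonic() if now is None else now)

    def hz(self):
        # Average rate over the current window; 0.0 until two messages arrive.
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0
```

Calling `monitor.tick()` on every received image and printing `monitor.hz()` once per second gives the same kind of measurement the `gz` CLI was used for here, and makes the RTF-induced drop (20 fps configured vs. 5-6 fps delivered) easy to log.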
- The problem is that, in all versions, and even when I increase the camera fps in the SDF file, CPU load never exceeds about 30 percent across all threads.
- The same thing happens with the GPU. The card has 12 GB of memory, but only 2.3-2.5 GB is used in every scenario. I want CPU/GPU utilization to scale with the demands of the task, for example when increasing sensor fps or adding new algorithms. But that does not happen automatically: load always stays around 30 percent CPU and about 30 percent GPU. I understand that I can remove shadows, reduce collision checking, and otherwise optimize the simulation itself, and that really works, but I don't want to optimize the simulation; I want to use more resources.
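For reference, the per-sensor rate mentioned above is the `<update_rate>` element of the sensor in the SDF file. A minimal illustrative fragment (the sensor name is a placeholder, and the rest of the `<camera>` definition is omitted):

```xml
<!-- Illustrative camera sensor; name is a placeholder -->
<sensor name="front_camera" type="camera">
  <!-- Target rate in Hz; the delivered rate drops when RTF drops -->
  <update_rate>20</update_rate>
  <!-- <camera> ... </camera> definition omitted -->
</sensor>
```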
- So how can this be done? I haven't found much information on this topic, apart from tuning physics parameters. I'm attaching them below.
Have you tried setting real_time_factor to 0?
No, it doesn't help:(
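For context, `real_time_factor` is set in the world SDF's `<physics>` element; a minimal sketch (the name, engine type, and step size are illustrative):

```xml
<!-- Illustrative <physics> block; a real_time_factor of 0 asks the
     simulator to run as fast as possible instead of pacing to wall time -->
<physics name="unthrottled" type="ode">
  <max_step_size>0.001</max_step_size>
  <real_time_factor>0</real_time_factor>
</physics>
```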
Any chance you could share the file you are using, so that we can use it as a benchmark? I think if we ran the perf profiler with debug symbols, we would have a good chance of figuring out where the largest chunk of time is being spent and whether there's anything we can do about it.
I researched the same problem with GPU lidars, and I believe the problem is in gz-transport. People have also suggested trying Ogre instead of Ogre 2: https://github.com/gazebosim/ros_gz/issues/368. Topics go through the network stack (even on the same host), and sending seems to be a synchronous operation. If a topic has no subscribers it won't send any data, so it costs nothing. Maybe reimplementing gz-transport on Zenoh would work better. BTW, topic frequency is tied to RTF, not wall time. As a workaround, you can try reducing the resolution of your cameras.
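Regarding the resolution workaround: the resolution is set in the sensor's `<camera><image>` block in the SDF. Illustrative values (halving width and height roughly quarters the per-frame rendering and transport cost):

```xml
<!-- Illustrative reduced resolution for a camera sensor -->
<camera>
  <image>
    <width>320</width>
    <height>240</height>
    <format>R8G8B8</format>
  </image>
</camera>
```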