GPU acceleration seemingly not used in headless rendering with EGL
Hi,
I am running pyrender in a Docker container with the NVIDIA runtime (base image nvidia/cudagl:10.2-runtime-ubuntu18.04). I can verify that GPU passthrough works by checking the output of nvidia-smi.
Pyrender works fine with PYOPENGL_PLATFORM=egl, and I can produce the expected image. I know that pyrender is at least checking for the presence of the GPU, because if I fail to enable the GPU in the container it exits with an error.
So, functionally, it all seems fine.
However, I don't think the GPU is actually being used. When I run my program (which produces a video using pyrender), the CPU is pegged at 100%, while I would expect execution to be GPU-bound. Moreover, the performance I see is consistent with the speed of software rendering.
I did a deep dive into the code to isolate the problem. I use flags = FLAT. I disabled reading of the depth buffer (it took 40 ms per frame), optimized the caching of the shader program to some extent, etc., until I had to conclude that the bottleneck is the draw call itself (glDrawElementsInstanced). If I understand correctly, this call should not be CPU-bound, as the GPU does most of the work.
So I'm thinking that somehow pyrender is not actually using hardware acceleration.
How can I debug this issue?
One question I have: where can I check that the renderer is actually a hardware renderer? I didn't see an obvious place in platforms/ to get the name of the renderer.
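One way I can think of to check this directly is to query the OpenGL strings once a context is current. Here is a minimal sketch (my own, not part of pyrender's API), assuming PYOPENGL_PLATFORM=egl is set before any OpenGL import; rendering one frame first ensures the EGL context is current on the calling thread:

import os
os.environ['PYOPENGL_PLATFORM'] = 'egl'  # must be set before OpenGL is imported

import numpy as np
import pyrender
from OpenGL.GL import glGetString, GL_RENDERER, GL_VENDOR, GL_VERSION

# A trivial scene with only a camera, just enough to force one render pass.
scene = pyrender.Scene()
scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3.0), pose=np.eye(4))

r = pyrender.OffscreenRenderer(viewport_width=64, viewport_height=64)
r.render(scene)  # makes the EGL context current on this thread

print('GL_VENDOR  :', glGetString(GL_VENDOR).decode())
print('GL_RENDERER:', glGetString(GL_RENDERER).decode())
print('GL_VERSION :', glGetString(GL_VERSION).decode())
r.delete()

If GL_RENDERER reports llvmpipe or softpipe, Mesa's software rasterizer is doing the work, which would explain both the pegged CPU and the idle GPU; a hardware context reports the GPU name instead.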
I am using Python packages:
PyOpenGL (3.1.0)
PyOpenGL-accelerate (3.1.0)
pyrender (0.1.43, /project/pyrender)
Docker base image nvidia/cudagl:10.2-runtime-ubuntu18.04
plus APT packages libglfw3-dev, libgles2-mesa-dev, and freeglut3-dev.
Output of nvidia-smi from the container:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 440.100 Driver Version: 440.100 CUDA Version: 10.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 208... Off | 00000000:65:00.0 On | N/A |
| 35% 35C P8 21W / 260W | 536MiB / 11018MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
Hi, I tested pyrender with PYOPENGL_PLATFORM=egl in Docker but I got the error "ValueError: Invalid device ID(0)". Have you met this problem?
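From reading pyrender's EGL platform code, that ValueError seems to be raised when eglQueryDevicesEXT enumerates fewer devices than the requested index, i.e. when EGL sees no GPU at all inside the container. A quick sketch to count the visible devices, using pyrender's internal pyrender.platforms.egl module (an implementation detail, so the import path may change between versions):

import os
os.environ['PYOPENGL_PLATFORM'] = 'egl'  # must be set before OpenGL is imported

from pyrender.platforms import egl

# One entry per EGL device exposed by the vendor libraries in the container.
devices = egl.query_devices()
print('EGL devices visible:', len(devices))

If the list is empty, the NVIDIA EGL vendor library is probably not visible in the container, e.g. because NVIDIA_DRIVER_CAPABILITIES does not include graphics (see the Dockerfiles below).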
Same issue on my system - CPU usage is close to 100%, GPU usage close to 0%.
@AndreaCensi Can you share your Dockerfile? I have been having trouble just building a Docker image in which EGL works; I've been getting all kinds of weird errors. Thanks!
@KalraA here is what I used:
FROM nvidia/cudagl:11.0-runtime-ubuntu18.04
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,graphics,utility,video
ENV LANG C.UTF-8
ENV LC_ALL C.UTF-8
RUN apt-get update && apt-get install -qq \
curl \
ca-certificates \
sudo \
git \
bzip2 \
libx11-6 \
autoconf \
automake \
build-essential \
cmake \
wget \
libjpeg-dev \
libpng-dev
RUN apt-get install -qq python3 python3-dev python3-pip
RUN ln -s /usr/bin/pip3 /usr/bin/pip
RUN ln -s /usr/bin/python3 /usr/bin/python
RUN pip install -U pip
RUN pip install scikit-build opencv-python matplotlib jupyter notebook scikit-learn
RUN apt-get install -qq libglfw3-dev libgles2-mesa-dev freeglut3-dev
RUN pip install PyOpenGL PyOpenGL_accelerate pyrender
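Note that the image alone is not enough: the container also has to be started with GPU access (e.g. docker run --gpus all ..., or --runtime=nvidia on older Docker versions), otherwise EGL sees no devices at runtime.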
I also tried without NVIDIA's base image by installing libglvnd but I couldn't get that to work yet.
Hi, I have a similar issue. Have you found any solution?
Managed to get this working on Azure ML (it should transfer to any image using a CUDA devel base) with the following:
FROM mcr.microsoft.com/azureml/openmpi4.1.0-cuda11.0.3-cudnn8-ubuntu18.04
# CUDAGL (glvnd) for pyrender
ENV LIBGLVND_VERSION v1.2.0
# runtime from https://gitlab.com/nvidia/container-images/opengl/-/blob/ubuntu18.04/glvnd/runtime/Dockerfile
# make sure the ICD directory exists before writing the vendor json (libglvnd is installed below)
RUN mkdir -p /usr/share/glvnd/egl_vendor.d && \
    echo '{"file_format_version": "1.0.0", "ICD": {"library_path": "libEGL_nvidia.so.0"}}' > /usr/share/glvnd/egl_vendor.d/10_nvidia.json
RUN apt-get update && apt-get install -y --no-install-recommends \
libglvnd0 \
libgl1 \
libglx0 \
libegl1 \
libgles2 && \
rm -rf /var/lib/apt/lists/*
# devel from https://gitlab.com/nvidia/container-images/opengl/-/blob/ubuntu18.04/glvnd/devel/Dockerfile
RUN apt-get update && apt-get install -y --no-install-recommends \
pkg-config \
libglvnd-dev \
libgl1-mesa-dev \
libegl1-mesa-dev \
libgles2-mesa-dev && \
rm -rf /var/lib/apt/lists/*
# Extra stuff needed for pyrender
ENV NVIDIA_DRIVER_CAPABILITIES compute,graphics,utility,video
RUN apt-get update && apt-get install -y freeglut3-dev
For images based on nvidia/cudagl, I think the key additions to get pyrender working are:
ENV NVIDIA_DRIVER_CAPABILITIES compute,graphics,utility,video
RUN apt-get update && apt-get install -y freeglut3-dev
Without freeglut3-dev I get the error:
import pyrender
File "/azureml-envs/azureml_883050db82ce59b0327e2f2458825e3e/lib/python3.8/site-packages/pyrender/__init__.py", line 12, in <module>
from .viewer import Viewer
File "/azureml-envs/azureml_883050db82ce59b0327e2f2458825e3e/lib/python3.8/site-packages/pyrender/viewer.py", line 38, in <module>
class Viewer(pyglet.window.Window):
File "/azureml-envs/azureml_883050db82ce59b0327e2f2458825e3e/lib/python3.8/site-packages/pyglet/__init__.py", line 340, in __getattr__
__import__(import_name)
File "/azureml-envs/azureml_883050db82ce59b0327e2f2458825e3e/lib/python3.8/site-packages/pyglet/window/__init__.py", line 1918, in <module>
gl._create_shadow_window()
File "/azureml-envs/azureml_883050db82ce59b0327e2f2458825e3e/lib/python3.8/site-packages/pyglet/__init__.py", line 340, in __getattr__
__import__(import_name)
File "/azureml-envs/azureml_883050db82ce59b0327e2f2458825e3e/lib/python3.8/site-packages/pyglet/gl/__init__.py", line 95, in <module>
from pyglet.gl.gl import *
File "/azureml-envs/azureml_883050db82ce59b0327e2f2458825e3e/lib/python3.8/site-packages/pyglet/gl/gl.py", line 45, in <module>
from pyglet.gl.lib import link_GL as _link_function
File "/azureml-envs/azureml_883050db82ce59b0327e2f2458825e3e/lib/python3.8/site-packages/pyglet/gl/lib.py", line 149, in <module>
from pyglet.gl.lib_glx import link_GL, link_GLU, link_GLX
File "/azureml-envs/azureml_883050db82ce59b0327e2f2458825e3e/lib/python3.8/site-packages/pyglet/gl/lib_glx.py", line 46, in <module>
glu_lib = pyglet.lib.load_library('GLU')
File "/azureml-envs/azureml_883050db82ce59b0327e2f2458825e3e/lib/python3.8/site-packages/pyglet/lib.py", line 164, in load_library
raise ImportError('Library "%s" not found.' % names[0])
ImportError: Library "GLU" not found.
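(As far as I can tell, the GLU dependency comes from pyglet, which pyrender's Viewer module imports at load time; freeglut3-dev presumably fixes it because it pulls in libglu1-mesa-dev, which provides libGLU.)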
and without ENV NVIDIA_DRIVER_CAPABILITIES compute,graphics,utility,video I get the error:
File "/azureml-envs/azureml_c4a933d7723a8f146a74019621ae4ec8/lib/python3.8/site-packages/pyrender/offscreen.py", line 31, in __init__
self._create()
File "/azureml-envs/azureml_c4a933d7723a8f146a74019621ae4ec8/lib/python3.8/site-packages/pyrender/offscreen.py", line 149, in _create
self._platform.init_context()
File "/azureml-envs/azureml_c4a933d7723a8f146a74019621ae4ec8/lib/python3.8/site-packages/pyrender/platforms/egl.py", line 177, in init_context
assert eglInitialize(self._egl_display, major, minor)
File "/azureml-envs/azureml_c4a933d7723a8f146a74019621ae4ec8/lib/python3.8/site-packages/OpenGL/platform/baseplatform.py", line 402, in __call__
return self( *args, **named )
File "src/errorchecker.pyx", line 58, in OpenGL_accelerate.errorchecker._ErrorChecker.glCheckError
OpenGL.error.GLError: GLError(
err = 12289,
baseOperation = eglInitialize,
cArguments = (
<OpenGL._opaque.EGLDisplay_pointer object at 0x7ff2d63ed440>,
c_long(0),
c_long(0),
),
result = 0
)
The same error occurs without the RUN line that writes /usr/share/glvnd/egl_vendor.d/10_nvidia.json.
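For reference, err = 12289 is 0x3001, i.e. EGL_NOT_INITIALIZED, which is what eglInitialize reports when glvnd cannot find a usable EGL vendor library. A quick sanity check that the ICD file written in the Dockerfile above is actually in place (the path is the one used there):

import pathlib

# The glvnd EGL ICD registered in the Dockerfile above; if it is missing,
# eglInitialize fails with EGL_NOT_INITIALIZED (err = 12289 / 0x3001).
icd = pathlib.Path('/usr/share/glvnd/egl_vendor.d/10_nvidia.json')
print(icd.read_text() if icd.exists() else 'NVIDIA EGL ICD file missing')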
Hello @SLTK1, were you able to fix the ValueError?