Capture frames from the canvas
I'm wondering what the best way is to create a video of the canvas. The offscreen canvas renders to a texture, and we could save that texture as video frames. But what if we're not using the offscreen canvas?
It seems to vary with the canvas used; for example, the wgpu offscreen canvas returns a memory view of the current canvas from self._canvas.draw(), which is how I originally hacked a snapshot method for wgpu-shadertoy.
The JupyterCanvas has self._canvas.snapshot().data, and I didn't dig deep enough into the other canvases. A common method for all canvases would be beneficial.
For video, my idea was to essentially use the snapshot method with precomputed timesteps, which would easily allow users to pick a start, duration and framerate. It would also not be limited to real time. Encoding would then be handled externally, likely with ffmpeg.
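The precomputed-timestep idea could look something like this sketch; `frame_times` is a hypothetical helper (not an existing API), with `start`, `duration` and `fps` as the user-facing parameters:

```python
def frame_times(start, duration, fps):
    """Timestamps at which to take snapshots, independent of real time."""
    n = round(duration * fps)
    return [start + i / fps for i in range(n)]

# e.g. 2 seconds at 30 fps starting at t=5.0 gives 60 timestamps
times = frame_times(5.0, 2.0, 30)
```

Each timestamp would then be fed to the animation before taking a snapshot, so rendering a slow scene offline still produces a smooth video.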
I only looked at pygfx for reference, so there might be something more useful I am not aware of.
Yup, the Jupyter canvas method you mentioned is what we have implemented in fastplotlib. I'll do some digging to figure out how to do this with Qt and glfw.
It is easy to obtain a real-time screenshot of the scene by reading the "ColorTexture" of the "RenderTarget"; off-screen rendering is not necessarily required.
I noticed that the WgpuRenderer class already has a snapshot() method. Would this method solve your issue?
Partially! What's the best way to capture frames to make a video? Right now we basically run it in the main animation loop, i.e. the function we set as the draw callback via canvas.request_draw(draw_function=animate). It looks something like this. Is there a better way, perhaps with async to poll the renderer?
```python
import time
from multiprocessing import Queue

q = Queue()  # multiprocessing queue
last_capture = 0.0

def animation():
    global last_capture
    # capture at ~30 fps so snapshot() doesn't block every draw
    if time.perf_counter() - last_capture > (1 / 30):
        last_capture = time.perf_counter()
        q.put(renderer.snapshot())
    canvas.request_draw(animation)

canvas.request_draw(animation)
```
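The "async" idea could be sketched like this; `snapshot` stands in for `renderer.snapshot` (here a plain callable so the example is self-contained), and whether this cooperates with a real canvas depends on the backend sharing the asyncio event loop:

```python
import asyncio

async def poll_frames(snapshot, frames, fps, n):
    # take n snapshots at roughly fixed intervals, yielding control
    # back to the event loop between captures
    for _ in range(n):
        frames.append(snapshot())
        await asyncio.sleep(1 / fps)

frames = []
asyncio.run(poll_frames(lambda: "frame", frames, fps=240, n=3))
# frames now holds 3 snapshots
```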
What's the best way to capture frames to make a video?
I don't have much practical experience with this. 😅
However, I think getting video frames in the rendering loop may not guarantee a fixed interval, since it depends on the rendering time of each frame.
If you need to obtain video frames at an absolutely fixed frame rate, you may need multi-threading (another thread fetching from the WgpuRenderer object at a fixed rate), but the internal objects of pygfx may not be thread-safe. An alternative approach is therefore to cache the latest rendered frame in the main rendering loop, and have another thread read this cached frame at a fixed rate to generate the sequence of video frames.
Maybe something like this:
```python
import threading
import time
from multiprocessing import Queue

q = Queue()  # multiprocessing queue
latest_frame = None

def animation():
    global latest_frame
    renderer.render(scene, camera)
    latest_frame = renderer.snapshot()  # cache the latest rendered frame
    canvas.request_draw()

def capture():
    # read the cached frame at a fixed rate from a separate thread
    while True:
        if latest_frame is not None:
            q.put(latest_frame)
        time.sleep(1 / 30)

t = threading.Thread(target=capture, daemon=True)
canvas.request_draw(animation)
t.start()
run()
```
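For the external encoding step mentioned earlier, frames pulled from the queue could be piped straight into ffmpeg as raw RGBA video. This is a hedged sketch: `ffmpeg_args` and `encode` are hypothetical helpers, and it assumes each frame is a contiguous RGBA array with a `tobytes()` method (as numpy arrays have):

```python
import subprocess

def ffmpeg_args(width, height, fps, out_path):
    # ffmpeg command reading raw RGBA frames from stdin, writing an mp4
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "rgba",
        "-s", f"{width}x{height}", "-r", str(fps),
        "-i", "-",
        "-pix_fmt", "yuv420p", out_path,
    ]

def encode(frame_queue, width, height, fps=30, out_path="out.mp4"):
    proc = subprocess.Popen(
        ffmpeg_args(width, height, fps, out_path), stdin=subprocess.PIPE
    )
    while True:
        frame = frame_queue.get()
        if frame is None:  # sentinel from the producer to stop encoding
            break
        proc.stdin.write(frame.tobytes())
    proc.stdin.close()
    proc.wait()
```

Running `encode` in its own process (or thread) keeps the encoder from back-pressuring the render loop, since the queue absorbs bursts.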
The renderer.snapshot() works, but note that it samples from the internal texture, so the result may differ from what's shown on screen. If anything, the resolution will be higher.
I think it makes sense to have more sophisticated snapshot functionality. I added a note in #492, because it relates to viewports too. We can leave this issue open to explicitly track this feature.