feat: offscreen rendering
Allows the application to render into an offscreen buffer instead of the onscreen framebuffer, so the rendered content can be saved to a file.
I'm not sure how to expose this feature to the application (i.e. how to expose the `&[u8]` to the `Application` trait), so I'm opening a draft PR to get some ideas.
Currently, this is gated under the `offscreen` feature flag, which renders the first frame of the application to a `framebuffer.png` file.
Some ideas
New method
Create a new method on the `Application` trait, something like `frame(&mut self, buffer: &[u8])`, and call it on the event loop (`application.frame(&pixels)`). The problem would be gating this through the feature, since we need `offscreen` rendering enabled on the backend to be able to read the content of the buffer.
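A rough sketch of what I have in mind, assuming an `offscreen` cargo feature; the method name `frame` and the RGBA8 layout of the buffer are placeholders, not part of the current API:

```rust
// Hypothetical addition to the Application trait (names are placeholders).
pub trait Application {
    // ...existing required methods elided...

    /// Called by the event loop after a frame has been rendered offscreen.
    /// `buffer` holds the raw pixels of the frame (assumed to be RGBA8 here).
    #[cfg(feature = "offscreen")]
    fn frame(&mut self, buffer: &[u8]) {
        // Default: do nothing, so existing applications keep compiling.
        let _ = buffer;
    }
}
```

The event loop would then call `application.frame(&pixels)` right after reading the offscreen buffer back from the GPU.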
New trait
Almost the same as the new method idea, just instead we add it to a new `OffscreenProgram` trait, or later on, for example, a `HeadlessApplication` trait (I'm working on enabling Iced to run headless, and this could be used in conjunction with it to enable server-side rendering).
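Sketched out, the separate trait could be as small as this (the trait name comes from the idea above, and the signature is just a placeholder):

```rust
// Hypothetical opt-in trait: only applications that care about offscreen
// output implement it, so the main Application trait stays untouched.
pub trait OffscreenProgram {
    /// Receives the rendered pixels of a frame (assumed RGBA8) together
    /// with the dimensions of the offscreen buffer.
    fn frame(&mut self, width: u32, height: u32, pixels: &[u8]);
}
```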
Subscription
Not sure if this is feasible, but we could allow the user to use a `frames` subscription.
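From the application side it might look roughly like this; `offscreen::frames` and `Message::Frame` are made up for the sketch:

```rust
// Hypothetical: subscribe to the pixels of every rendered frame.
fn subscription(&self) -> Subscription<Message> {
    // `offscreen::frames()` stands in for whatever producer the runtime
    // would expose; each emitted value is assumed to be the frame's pixels.
    offscreen::frames().map(Message::Frame)
}
```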
Note: While rendering to an offscreen buffer, you won't see anything in the window.
Also: Technically, on OpenGL you can read the framebuffer directly by using `glReadPixels`, but on `wgpu` the same isn't guaranteed; you'd have to request an extension for that.
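For comparison, the OpenGL path really is a single call once the frame has been drawn. A rough sketch with the `gl` crate, assuming a current GL context and an RGBA8 framebuffer of the given size:

```rust
// Read the currently bound framebuffer back into CPU memory with OpenGL.
// Assumes a live GL context and a `width` x `height` RGBA8 framebuffer.
fn read_framebuffer(width: i32, height: i32) -> Vec<u8> {
    let mut pixels = vec![0u8; (width * height * 4) as usize];
    unsafe {
        gl::ReadPixels(
            0,
            0,
            width,
            height,
            gl::RGBA,
            gl::UNSIGNED_BYTE,
            pixels.as_mut_ptr() as *mut std::ffi::c_void,
        );
    }
    // The rows come back bottom-up, so the image usually needs a vertical
    // flip before being saved to disk.
    pixels
}
```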
Is this relevant to your commit? https://github.com/iced-rs/cookbook/tree/main/0.3/render_to_framebuffer_and_save
@13r0ck We use the same wgpu example to make rendering to a framebuffer possible.
@hecrj
I think we should be able to expose this in a somewhat isolated or decoupled way.
Specifically, I'm not convinced using a feature flag to switch between normal and offscreen rendering is the best API to satisfy the most common use case of this feature: taking a screenshot of (part of) the application.
I agree, the feature flag was just a simple way to get started.
I think maybe we could expose this as some kind of method on the new `Window` argument that will be provided to the `update` method in `Application` once #930 is merged, which should be soon!
I like the idea, but the problem here would be that rendering itself has nothing to do with the windowing; the "presentation" is what couples them. For example, in `wgpu` you get a `SwapChain`, which is just a collection of framebuffers managed so that you can easily present their content to the screen.
The problem becomes apparent if you were to create something like a headless mode (which I'm currently working on): there we don't have a `Window` and we don't present to the screen, but we still want to render the content somewhere.
Still, even though we are not exactly using a proper `Window` in headless mode, maybe exposing something like a `Window` could be a good idea. We may need to resize or change the cursor position when in headless mode (e.g. we could position the cursor over a button and check if its style changes?). So, `take_screenshot` could be one of those.
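To make that concrete, such a handle could expose roughly these operations whether or not a real window backs them; all of the names below are illustrative:

```rust
// Hypothetical handle passed to `update`, usable both with a real window
// and in headless mode.
pub struct Window {
    // backend-specific state elided
}

impl Window {
    /// Resize the render target (a real window or an offscreen buffer).
    pub fn resize(&mut self, _width: u32, _height: u32) {
        unimplemented!()
    }

    /// Move the (possibly virtual) cursor, e.g. to hover a button in a test.
    pub fn move_cursor(&mut self, _x: f32, _y: f32) {
        unimplemented!()
    }

    /// Read back the last rendered frame as raw pixels (assumed RGBA8).
    pub fn take_screenshot(&self) -> Vec<u8> {
        unimplemented!()
    }
}
```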
I like this idea a lot.
> Still, even though we are not exactly using a proper `Window` in headless mode, maybe exposing something like a `Window` could be a good idea. We may need to resize or change the cursor position when in headless mode (e.g. we could position the cursor over a button and check if its style changes?). So, `take_screenshot` could be one of those.
Having `window::*` functions as a common interface for rendered content would be great.
With #930 closed via #1019, I was thinking that an API for taking screenshots could be brought up in `window` to look something like `pub fn take_screenshot<Message>(screenshot_msg: impl Fn(Vec<u8>) -> Message) -> Command<Message>`.
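Roughly, an application could then request a screenshot from `update` like this; `window::take_screenshot` is only the proposal above, and the message variants are made up for the sketch:

```rust
// Hypothetical fragment of an Application::update implementation using the
// proposed window::take_screenshot command.
fn update(&mut self, message: Message) -> Command<Message> {
    match message {
        Message::CaptureRequested => {
            // Ask the runtime to read the framebuffer back and hand us the
            // raw pixels wrapped in a message of our choosing.
            window::take_screenshot(Message::Screenshot)
        }
        Message::Screenshot(pixels) => {
            // `pixels` is the raw frame data (e.g. RGBA8), ready to be
            // encoded and written to disk.
            self.last_capture = Some(pixels);
            Command::none()
        }
        _ => Command::none(),
    }
}
```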
I think adding a method to `Application` (e.g. `fn headless(&self) -> bool { false }`) would be a good way to control whether we're in headless mode.
Superseded by #1783.