`InfiniteList` widget
I am writing an app that loads more than 400 PNGs into a scroll view.
However, it seems that VRAM runs out of memory because too many images are loaded at once.
I want to unload images that fall outside the visible range of the scroll view. Is there a way to know which child elements of a scroll view are currently being displayed?
We are currently not performing any culling of rendering primitives. We haven't focused much on performance or memory footprint yet. However, it may be worth adding a layer boundary check here:
https://github.com/hecrj/iced/blob/e6aa25a1032f583ced7f6f02806991ffb4cde140/wgpu/src/renderer.rs#L246-L252
It should be a matter of checking rectangle intersection. Once images are culled, the renderer cache will automatically deallocate memory of images that fall out of bounds. I will see if I can add that in a bit!
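For illustration, a minimal sketch of that intersection check could look like the following; `Rectangle`, `Primitive`, and `cull` here are simplified stand-ins, not the actual `iced_wgpu` types:

```rust
// A minimal sketch of rectangle-intersection culling. `Rectangle` and
// `Primitive` are simplified stand-ins for illustration only.
#[derive(Debug, Clone, Copy)]
struct Rectangle {
    x: f32,
    y: f32,
    width: f32,
    height: f32,
}

impl Rectangle {
    /// Whether the two rectangles overlap at all.
    fn intersects(&self, other: &Rectangle) -> bool {
        self.x < other.x + other.width
            && other.x < self.x + self.width
            && self.y < other.y + other.height
            && other.y < self.y + self.height
    }
}

struct Primitive {
    bounds: Rectangle,
    // ... the actual drawing data would live here
}

/// Keeps only the primitives that are at least partially inside the layer.
fn cull<'a>(layer_bounds: &Rectangle, primitives: &'a [Primitive]) -> Vec<&'a Primitive> {
    primitives
        .iter()
        .filter(|primitive| layer_bounds.intersects(&primitive.bounds))
        .collect()
}
```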
However, the deallocation strategy is very simplistic and I have the feeling this will not be enough for a smooth scrolling experience, as the renderer currently loads images using blocking I/O. We should fix that eventually.
There is also the fact that we are issuing a draw call per image, instead of using a texture atlas to batch the work (there are some initial efforts in #154).
That's wonderful!
Thanks for supporting this particular use case so early in the project.
Maybe that logic could be handled in the scroll view itself: have the view instantiate only the widgets that are actually being shown. I know Qt does it that way.
@Maldela Yes, there are different levels where we can perform optimizations.
The renderer one I mentioned is widget-agnostic and culls primitives directly. Therefore, it's more general. It should benefit custom widgets too.
Checking the boundaries of the children in the `Scrollable` implementation to avoid generating unnecessary primitives altogether would be the next step. However, we cannot rely only on that. A simple, efficient implementation will only be able to check the direct children of the `Scrollable` and, given widgets can be nested infinitely, there could be many out-of-bounds primitives left after culling (think of a `Scrollable` with a huge `Column` as a single child).
As an alternative, we could provide `Widget::draw` with a clip region. `Scrollable` would create a new region and feed it to its children. This way, culling could be performed efficiently at any nesting level.
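To illustrate the idea, here is a hypothetical sketch of a clip region being passed down through `draw`; the `Widget` trait, `draw` signature, and `Scrollable` below are simplified stand-ins, not the current iced API:

```rust
// A hypothetical sketch of culling via a clip region passed to `draw`.
// Nothing here matches the real iced API; it only illustrates the idea.
#[derive(Debug, Clone, Copy)]
struct Rectangle {
    x: f32,
    y: f32,
    width: f32,
    height: f32,
}

impl Rectangle {
    /// The overlapping region of two rectangles, if any.
    fn intersection(&self, other: &Rectangle) -> Option<Rectangle> {
        let x = self.x.max(other.x);
        let y = self.y.max(other.y);
        let right = (self.x + self.width).min(other.x + other.width);
        let bottom = (self.y + self.height).min(other.y + other.height);

        if right > x && bottom > y {
            Some(Rectangle { x, y, width: right - x, height: bottom - y })
        } else {
            None
        }
    }
}

trait Widget {
    fn bounds(&self) -> Rectangle;

    /// `clip` is the region that is actually visible on screen.
    fn draw(&self, clip: &Rectangle);
}

struct Scrollable {
    bounds: Rectangle,
    children: Vec<Box<dyn Widget>>,
}

impl Widget for Scrollable {
    fn bounds(&self) -> Rectangle {
        self.bounds
    }

    fn draw(&self, clip: &Rectangle) {
        // Narrow the clip region to the visible part of the scrollable and
        // feed it to the children; out-of-bounds subtrees are skipped
        // entirely, no matter how deeply they are nested.
        if let Some(clip) = self.bounds.intersection(clip) {
            for child in &self.children {
                if child.bounds().intersection(&clip).is_some() {
                    child.draw(&clip);
                }
            }
        }
    }
}
```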
In any case, the renderer optimization should take care of the extreme cases for now.
I have a `Scrollable` with >1k `Button` / `Text` elements. CPU usage is very high when the cursor is moving inside the app, I assume from having to `push` each element on every redraw? Worth noting I have subscriptions enabled to capture keyboard events. If I filter down to a reasonable number of elements, everything is snappy and CPU usage is very low.

I'd think I wouldn't want to `push` 1k records in the first place, only however many elements are "in view". Any chance of exposing helper methods to calculate how many elements are "in view" and what the current offset is? That way I could just `push` what's needed on each redraw.
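For illustration, with a fixed row height the math would be roughly the sketch below; it assumes I track the scroll offset and viewport height myself, since they aren't exposed yet:

```rust
// A rough sketch of computing which rows are in view, assuming a fixed row
// height. `scroll_offset` and `viewport_height` would have to be tracked by
// the application itself for now.
use std::ops::Range;

/// Half-open range of item indices that overlap the viewport.
fn visible_range(
    item_count: usize,
    row_height: f32,
    scroll_offset: f32,
    viewport_height: f32,
) -> Range<usize> {
    let first = (scroll_offset / row_height).floor() as usize;
    let last = ((scroll_offset + viewport_height) / row_height).ceil() as usize;

    first.min(item_count)..last.min(item_count)
}
```

Only the items in that range would then get `push`ed into the `Column` on each `view` call.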
@tarkah For this kind of use case, there are plans to create an alternative to `Scrollable` with a retained/lazy API (i.e. owning its contents / producing only the visible elements). We have an ongoing discussion in the Zulip server about building this `InfiniteList` widget.
EDIT: That application looks cool! :smile:
Thanks! Perfect, I'll follow progress there.
@hecrj Would that `InfiniteList` widget be similar to table widgets in libraries like Qt or UIKit for iOS, where the responsibility of dequeuing renderable rows is put on the implementor? It's a pretty common pattern that I've seen in UI frameworks. Basically, there is a set of closures that one provides to the table view to keep track of everything. Whenever the set of items changes, the user calculates the total height of the table and sets the scrollable content height and width if need be; then an `item_height` closure is called once per item, which should return the current height for the item at that index. This gives the table view an idea of where each child row's y-position starts and ends, while allowing each cell to have a different height if need be. It also lets the table infer which row bounds are in view at any given time, since it always knows its own scroll offset and dimensions.

Then, whenever the items change, the view is resized, or the view is scrolled, a `dequeuing` closure is called once per item that the table view expects to be in bounds. This closure's job is to match which type of cell to render and assign data to it so that it's updated. Finally, the view draws all of those pre-instantiated elements, starting at the offset of the first element that's just off-screen.
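As a purely illustrative sketch of that closure-based pattern (none of these types or names exist in iced), it could look something like this:

```rust
// A hypothetical sketch of a closure-driven table view, illustrating the
// `item_height` / dequeuing pattern described above.
struct TableView<Cell> {
    item_count: usize,
    /// Height of the item at a given index (rows may differ in height).
    item_height: Box<dyn Fn(usize) -> f32>,
    /// Produces (or updates) a cell for a given index; only called for rows
    /// the table expects to be in view.
    dequeue: Box<dyn Fn(usize) -> Cell>,
}

impl<Cell> TableView<Cell> {
    /// Dequeues exactly the rows whose vertical extent overlaps the viewport.
    fn visible_cells(&self, scroll_offset: f32, viewport_height: f32) -> Vec<Cell> {
        let mut y = 0.0;
        let mut cells = Vec::new();

        for index in 0..self.item_count {
            let height = (self.item_height)(index);

            let starts_above_bottom = y < scroll_offset + viewport_height;
            let ends_below_top = y + height > scroll_offset;

            if starts_above_bottom && ends_below_top {
                cells.push((self.dequeue)(index));
            }

            y += height;
        }

        cells
    }
}
```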
From a 3D graphics perspective, you won't ever need to resize your vertex buffers except when the table view grows. Eventually it'll stop growing, since that element will typically fit within the window's view, except for crazy situations when tables need to be nested inside of tables... That's probably on them to figure out though lol. I'm not familiar with Vulkan/wgpu, but I'm guessing GLSL is used under the hood all the same, so you should be able to pass all viewable elements' data as a UBO and have a vertex attribute associating each quad with the index of its table-cell struct in the UBO.
Sorry for the crazy wordiness. This looks like an incredibly exciting project. The Rust community needs more UI crates like this :)