SVG backend
I followed this weird idea I had to the point I can (mostly) render the demo window.
I'm hesitant about whether this is quite brilliant or plain stupid and would love some feedback on the idea.
It roughly works by:
- `textureCreate` dumps an image file (tga converted to png with the system command line `magick` for now)
- `drawClippedTriangles` simply emits svg tags that are written to disk during `backend.end`:
  - triangles are `<polygon>`
  - clipr is a `<clipPath>`
  - the texture is a `<pattern>` tag with a link to the previously dumped image, using a `<filter>` tag for merging with vertex colors.
- It just tracks each call to both functions above with a counter to give unique ids to elements, and has an upper bound on the number of frames rendered.
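To make the mapping concrete, here is a hand-written sketch of what one textured, clipped triangle could look like in the emitted svg (all ids, coordinates and file names are made up for illustration; the `<filter>` used for blending vertex colors is omitted):

```xml
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="200">
  <defs>
    <!-- clipr becomes a clipPath -->
    <clipPath id="clip0">
      <rect x="0" y="0" width="200" height="100"/>
    </clipPath>
    <!-- the texture becomes a pattern linking to the dumped image -->
    <pattern id="tex0" patternUnits="userSpaceOnUse" width="64" height="64">
      <image href="texture_0.png" width="64" height="64"/>
    </pattern>
  </defs>
  <!-- one triangle from drawClippedTriangles -->
  <polygon points="10,10 190,10 100,150"
           fill="url(#tex0)" clip-path="url(#clip0)"/>
</svg>
```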
I started this as an idea to maybe generate backend-independent pictures, and as a learning exercise. The code is super straightforward; the tricky part was just finding the right svg equivalents for the graphics primitives (and, for me, understanding said primitives, but that is the learning part).
As (potentially) expected, it's very inefficient and produces quite big files quite fast, so I don't know about image generation. But I found that for debugging the rendering process it's nicer than expected. It's very simple to highlight elements. You can open the resulting svg in inkscape for close inspection, or in a web browser with auto-reload and fiddle with the source, which is verbose but still quite readable. You get an immediate overview of the rendered textures for each frame in the file browser.
Here are a few nice visuals (screenshots omitted):
- Inspect rendered triangles (color dots indicate order: gold = t1, magenta = t2, cyan = t3)
- Inspect uv coordinates on texture
- Highlight non-ccw triangles (here mostly empty ones)
- Rendering process overview as files
But yeah, it probably makes sense only for testing a few small elements. The LayoutText screenshot above is a 1.2 MB svg file that already makes my laptop suffer. If we were to integrate this backend, we would maybe provide an `examples/svg-mini.zig` or similar that just renders a button, and by default prevent the other examples from building with this backend.
If anyone wants to test, the code is in https://github.com/david-vanderson/dvui/tree/svg_backend and `zig build svg-mini` should get you started (but you will need imagemagick installed on your system).
That is a really cool approach to cross platform rendering!
I've got some ideas about avoiding having to store the frames as files. You could keep everything in memory, storing `dvui.Texture`s in a HashMap/ArrayList and just appending the texture content to the end of the "rendered" svg, linking to it with `<use>`. `textureCreate` could use data urls in an `<image>` to avoid having to create files. The only limitation of this would be `textureRead`, which couldn't get real pixels from the in-memory svgs, but I think it's rare to read textures for any reason other than saving an image, which wouldn't be possible/supported here. With a way to turn off rendering for the "setup frames", since it's not that fast, it could replace the testing backend with new svg snapshot capabilities!
I'm thinking that for the testing use case just exporting the svg is probably the best option, so there is no need to depend on imagemagick. I think users could view the svgs or transform the image themselves.
This is awesome! So many interesting possibilities. It'll be a bit before I can dig into it, but very excited!
@david-vanderson ok cool. If you find this fun I will keep investigating. There is no rush; since it's only a backend it's easy to merge anytime later.
There is only one call that needs to happen earlier: `backend.end()` must be called before freeing the memory for the frame, otherwise I end up using references to the textures after free.
What I would propose is to just merge that one change into main, and change the doc-comment of `backend.end()` to something simple like
/// Called during `dvui.Window.end` before freeing any memory for the current frame.
The rest we can explore on the dedicated branch as long as we see fit.
Sounds reasonable?
@RedPhoenixQ: The "store textures during the frame and dump into a data url" thing is not really a problem. That would work for sure. But you would typically embed a base64 encoded .png file, so you still need to produce the png bit stream. I used `.tga` as a file format because it's almost a raw format (easy to dump), and imagemagick because it transforms my hand-made tga file into a proper png file (and I had it at hand). The issue with the tga format is that it's not recognized as an "embeddable" svg raster. At least not in the external link form (I tested that), and I doubt it would be different for a base64 embedded version (I did not test that) because ultimately it's the same for the svg reader. (I tested my svgs with chromium, inkscape and eye-of-gnome; rendering is not always consistent when you are doing sneaky stuff.)
Maybe there is some MIME type for raw data that is recognized decently by mainstream svg readers, but doing so would be super wasteful in storage anyway, and directly emitting a png bit stream from each texture... well, how hard could it be? ;-) (No, but I already saw some zig implementations around; I expect this to be more annoying than difficult to do.)
Then, when we have the png bit stream, totally with you, embedding is the way to go.
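For reference, an embedded texture could then look something like this (hypothetical id and sizes; the base64 payload is truncated, not a real png):

```xml
<pattern id="tex0" patternUnits="userSpaceOnUse" width="64" height="64">
  <!-- png bit stream embedded as a base64 data url instead of a file link -->
  <image width="64" height="64"
         href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..."/>
</pattern>
```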
That being said, I have not yet dug into `textureRead`/`renderTarget`, so I might be missing something. Also please correct me if you know stuff about image formats that I don't.
Note that the main issue is not really the texture, but the fact that:
- You have to define a different `<pattern>`/`<filter>` for each triangle. I did not find any way to manipulate the texture's position/color directly at the usage location.
- There are many `<polygon>` tags.
So it's bloated because svg is very storage inefficient. But I mean, the calculator example frame is in the 200 kB range and the styling one around 450 kB. This is svg only, not accounting for textures, which are resolution dependent, contrary to the svg. But at an 800x600 screen size, the texture for the text of both examples mentioned here is below 3 kB as png.
Of course, we could optimize, say by emitting one `<rect>` instead of two `<polygon>`s for text, typically. But this is not trivial, and we lose the nice property of being a 1:1 rendering of dvui, as the backend would make some choices/calculations that can introduce bugs, rounding, small displacements, etc.
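As an illustration of that trade-off, a glyph quad that dvui sends as two triangles could in principle collapse into one rect (made-up coordinates and ids):

```xml
<!-- what the 1:1 backend emits: two triangles forming a quad -->
<polygon points="10,10 18,10 10,22" fill="url(#glyph0)"/>
<polygon points="18,10 18,22 10,22" fill="url(#glyph0)"/>

<!-- a possible optimization: one equivalent rect, but the backend
     now has to detect the quad and compute its bounds itself -->
<rect x="10" y="10" width="8" height="12" fill="url(#glyph0)"/>
```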
Maybe there is another way. But for rendering small gui elements, for docs or unit-test purposes, I think it would be reasonable. Regarding speed, I don't know how it would compare with sdl's screenshot.
> That would work for sure. But you would typically embed a base64 encoded .png file, so you still need to produce the png bit stream. I used `.tga` as a file format because it's almost a raw format (easy to dump), and `imagemagick` because it transforms my hand-made tga file into a proper png file (and I had it at hand).
Good point. You can use `dvui.c.stbi_write_png_to_mem` to get a png stream directly from the pixels array, to store the raster images without any extra dependencies.
> Note that the main issue is not really the texture, but the fact that:
>
> - You have to define a different `<pattern>`/`<filter>` for each triangle. I did not find any way to manipulate the texture's position/color directly at the usage location.
> - There are many `<polygon>` tags.
This is a good proof of concept as it is, even with very large file sizes.
In https://github.com/david-vanderson/dvui/issues/98 it was proposed that the backends get more context about what's being rendered, by having separate backend functions for text, image rendering and maybe basic shapes like rects with rounded corners. Having access to this within the backends would make it so the most common use of a texture, displaying an image in full, could use the `<image>` tag directly in this backend. The other backends could transform everything to triangles for now, like the PR suggested. Basically the backend would be responsible for creating the shapes, either by manually making triangles or, in this case, by using the tags and attributes that are available.
Giving backends more control does mean that there is a higher risk of visual differences between them, but a much higher ceiling for performance for web, and possibly for the other backends as well.
> Sounds reasonable?
Yes - please merge that one line change - thanks!
> Good point. You can use `dvui.c.stbi_write_png_to_mem` to get a png stream directly from the pixels array, to store the raster images without any extra dependencies.
There is `dvui.pngEncode()`, which wraps `stbi_write_png_to_mem` and adds a resolution block to the png image. That accounts for different dpi screens. I'm not sure how that interacts with svg, but you probably want it.
Uuh. Didn't realise we already have png export. This will be an easy one then, thanks.
Yeah, I had #98 in mind as well. That would make it easier and more efficient indeed. And nothing prevents us from doing both: `svg_intermediate` for decent perf and `svg_triangle` for low-level debug.
The only risk I see is having too many exciting ideas and not executing them well 😅
This is already great - I pushed a commit to stop sending empty triangles to the backends for the space codepoint. Thanks!
Excellent. Currently re-working the thing for better performance (and toying around, tbh). Will share here if there are relevant updates.
FYI: I made some progress and rebased my svg backend on main.
I was able to embed the texture in base64, and compressed the file size quite a bit by writing better svg, re-using common attributes in `<g>` and de-duplicating filters.
The demo window is still ~3 MB, but a simpler layout like the "styling" example is more in the 250 kB range. The speed is not great but much better than before, with ~300 ms in a Debug build for the demo window.
(yes, this is an svg you can download and test, to see whether your svg viewer cries or not 😄)
(edit: for some reason it doesn't render well in github when you click on the image, but it does work fine in the preview and if you download it, at least for me)
I have a lot of compile-time options for debug (maybe a few too many to be honest; might be tricky to maintain) but it's fun to play with. You can do so with `zig build svg-demo` on the relevant branch.
I still have one bug with texture rendering in intermediate frames, but I believe this is due to dvui destroying textures and still passing them to `drawClippedTriangles` afterwards. I will open a dedicated issue for this.
Nice progress!
I tested it and got some weird padding on the demo window frame. I haven't looked into what could cause it, but it seems that something isn't consistent between platforms yet (I'm on Windows, if that matters). The frame number says "no 1", which also isn't the same.
@RedPhoenixQ: thanks for testing. I believe this is somehow an animation. Try generating a third frame for the demo. Apparently I did not commit this, but I should have, to make sure it settles.
I don't really understand how the animations work yet, so I'm not sure about it, but interestingly I get the exact same padding on the first frame.
So one point for reproducibility at least 😅