Possibility of gpu-side pixel buffer
I am currently working on implementing cellular automata and similar things while learning Rust and wgpu. Simulation on the CPU is already working for me, and pixels was a big help since I didn't need to give any consideration whatsoever to the graphics side. So big thanks for this awesome crate! But the next step is doing the computation on the GPU to enable large simulations like these.
At the moment it's necessary to keep separate buffers on the GPU for the computation, map those buffers so the CPU can read them, and use the results to update the pixel buffer of pixels, which then copies the pixel buffer back to the GPU-side texture on rendering.
Directly updating the texture is not possible either, since the render_with method (and by extension the render method) overwrites the texture with the CPU-side pixel buffer.
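To make the round trip concrete, this is roughly what it looks like on my side today (only a sketch: it assumes the callback-style buffer mapping of newer wgpu releases, and `staging`, `device`, and `pixels` stand in for my own setup):

```rust
// Map the compute output so the CPU can read it (assumes `staging` was created
// with wgpu::BufferUsages::MAP_READ | wgpu::BufferUsages::COPY_DST).
let slice = staging.slice(..);
slice.map_async(wgpu::MapMode::Read, |result| {
    result.expect("failed to map staging buffer");
});
device.poll(wgpu::Maintain::Wait); // block until the mapping has completed

{
    // Copy the simulation output into pixels' CPU-side frame...
    let data = slice.get_mapped_range();
    pixels.get_frame().copy_from_slice(&data);
}
staging.unmap();

// ...which render() then immediately uploads right back into the GPU texture.
pixels.render()?;
```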
It would be awesome if it were possible either to specify a GPU-side pixel buffer which then gets used by pixels, or to request that pixels itself use a GPU-side pixel buffer and expose it to the user.
This would allow users to use pixels for what it does best, rendering pixel-perfect graphics, while using the most appropriate buffer location for the specific application.
As an alternative, pixels could also expose a method which renders the texture without first copying the pixel buffer to it. In that case it would be convenient to also be able to query the necessary TextureView, so that the copy_buffer_to_texture method of wgpu's CommandEncoder can be used.
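For illustration, this is the kind of call I have in mind (struct and field names follow newer wgpu releases and differ slightly in older ones; `encoder`, `gpu_buffer`, `texture`, `padded_bytes_per_row`, `width`, and `height` are placeholders from my own code):

```rust
encoder.copy_buffer_to_texture(
    wgpu::ImageCopyBuffer {
        buffer: &gpu_buffer,
        layout: wgpu::ImageDataLayout {
            offset: 0,
            // Rows must be padded to wgpu::COPY_BYTES_PER_ROW_ALIGNMENT (256 bytes).
            bytes_per_row: Some(padded_bytes_per_row),
            rows_per_image: Some(height),
        },
    },
    wgpu::ImageCopyTexture {
        texture: &texture,
        mip_level: 0,
        origin: wgpu::Origin3d::ZERO,
        aspect: wgpu::TextureAspect::All,
    },
    wgpu::Extent3d {
        width,
        height,
        depth_or_array_layers: 1,
    },
);
```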
Thank you for your work!
This is something I need to look into more closely. I have another side project that I was not planning on using pixels for. But it seems like a similar use case, where I want a dumb pixel buffer/texture and don't want to worry about how it gets to the display, but I also want the flexibility to render to the texture using the GPU.
The biggest issue, AFAIK, is that the texture usage is currently hardcoded: https://github.com/parasyte/pixels/blob/23da739650bca0d195fdf622d81426b2c2f9f657/src/builder.rs#L304
E.g. one could use the STORAGE flag so a compute shader has access to write to it directly, without copying through a separate buffer. (Although that buffer and copy may be necessary for reasons I do not understand?)
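As a rough sketch of what a configurable usage could look like (flag names follow newer wgpu releases, where these are TextureUsages::TEXTURE_BINDING / STORAGE_BINDING rather than TextureUsage::SAMPLED / STORAGE; everything besides the usage flags is illustrative and not what pixels actually passes today):

```rust
let texture = device.create_texture(&wgpu::TextureDescriptor {
    label: Some("pixel buffer texture"),
    size: wgpu::Extent3d { width, height, depth_or_array_layers: 1 },
    mip_level_count: 1,
    sample_count: 1,
    dimension: wgpu::TextureDimension::D2,
    format: wgpu::TextureFormat::Rgba8Unorm,
    usage: wgpu::TextureUsages::TEXTURE_BINDING   // sampled by the scaling renderer
        | wgpu::TextureUsages::COPY_DST           // still accepts write_texture uploads
        | wgpu::TextureUsages::STORAGE_BINDING,   // writable directly from a compute shader
    view_formats: &[],
});
```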
The other question about skipping or ignoring the write_texture call seems like it should be architected as a separate Pixels type; one with a CPU-accessible pixel buffer and one without. If they are not separate, I feel it would open the door to using the API incorrectly. Things like trying to access an inaccessible pixel buffer from the CPU side could get ugly. A separate type wouldn't have those concerns, because a GPU-only Pixels type could just choose not to provide methods like get_frame().
I also think having a separate type would probably be the most straightforward way to expose the functionality to the user. The Pixels type is advertised as a simple pixel buffer, but changing from internally using a Vec to a GPU-side texture would fundamentally change how it works. I guess much of the functionality could be shared internally anyway.
Could this maybe be solved by making Pixels generic over the storage used (maybe internally using an enum)? Then the implementation could be split for the methods where it matters. The type using CPU storage could still be the default, and the GPU version would be constructible with the builder.
My current thought is using a trait to define the common functionality, and concrete structs for the implementation-specific parts. Pretty standard stuff, no frills.
Did you have a specific reason to use copy_buffer_to_texture? If not, I think a trait would be superior to an enum in this case.
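Something along these lines is what I'm picturing; all of the names below are hypothetical, just to illustrate the split:

```rust
// Common behavior (surface handling, scaling, presenting) lives behind a trait;
// each backend only exposes what actually makes sense for it.
trait PixelsSurface {
    /// Record whatever commands are needed to get the pixel data onto the screen.
    fn render(&mut self) -> Result<(), pixels::Error>;
    fn resize_surface(&mut self, width: u32, height: u32);
}

/// Today's behavior: a CPU-accessible Vec<u8> that is uploaded with write_texture.
struct CpuPixels {
    frame: Vec<u8>,
    // ... device, queue, texture, scaling renderer, etc.
}

impl CpuPixels {
    /// Only the CPU-backed type offers a frame to mutate.
    fn get_frame(&mut self) -> &mut [u8] {
        &mut self.frame
    }
}

/// GPU-only variant: the texture is created with STORAGE usage and written by the
/// user's own compute or render passes, so there is no get_frame() at all.
struct GpuPixels {
    // ... device, queue, texture (with STORAGE usage), scaling renderer, etc.
}
```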
I certainly have a tendency to overcomplicate things 😅. I'm hyped this seems to be moving forward!
Did you have a specific reason to use copy_buffer_to_texture? If not, I think a trait would be superior to an enum in this case.

Not really. I could also directly set the color in the texture without any copying.
Hey! I had a similar use case where I just used the glium crate and compute shaders to do things. The only problem was that I found GPU shader development to be very tedious due to the lack of proper profiling and debugging (I don't have an NVIDIA GPU, so that complicates things). I was wondering whether I should switch back to using pixels again, and while checking out the repo for new things I found this. So have there been any updates on this?
Nothing new to report here. Even if this was addressed, it would not make the debugging experience with shaders any better. It would just bring that experience to pixels.
Yep. Very true. Thanks for the response! Love your project ^^