Support for NonReadable, NonWritable
I couldn't find any existing issues tracking support for specifying these decorations (or other decorations). Would it make sense for these to be exposed as `spirv` attributes, or as another `const` parameter of `Image`, for example? With the latter approach, correct usage of variables with these decorations could be enforced, but it might be complicated slightly by the fact that in SPIR-V the decorations apply to the variable, not the type itself. I'm also not sure how that approach would work with buffers.
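For concreteness, a rough sketch of the two shapes being discussed; both the `non_writable` attribute argument and the extra `Image!` flag below are hypothetical syntax that rust-gpu does not accept today:

```rust
use spirv_std::{spirv, Image};

// Hypothetical syntax only, for illustration; neither form exists in rust-gpu.
#[spirv(fragment)]
pub fn main_fs(
    // Option A (hypothetical): expose the decoration as a `#[spirv(...)]`
    // attribute argument that lowers to `OpDecorate ... NonWritable` on the
    // interface variable.
    #[spirv(descriptor_set = 0, binding = 0, non_writable)]
    image_a: &Image!(2D, format = rgba8, sampled = false),
    // Option B (hypothetical): expose it as another const parameter of the
    // `Image!` type, so incorrect writes could be rejected by the type system,
    // even though the SPIR-V decoration itself lives on the variable.
    #[spirv(descriptor_set = 0, binding = 1)]
    image_b: &Image!(2D, format = rgba8, sampled = false, non_writable),
) {
    // ...
}
```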
It also does not currently appear possible to work around this with inline assembly, at least for input arguments, as the global `OpVariable` identifiers are not visible from `asm!{}`.
Correct me if I'm wrong, but given the impression that rust-gpu does not aim to simply wrap SPIR-V and provide a rusty backend, I'd strongly prefer the respective `non_writable`/`non_readable` annotation to be applied automatically based on argument mutability. Allowing both the SPIR-V annotation and mutability would be redundant and error-prone.
@apriori The trouble with that is that argument mutability has only two states: read-only and read-write. There is no state corresponding to write-only (`NonReadable`), so some form of extra annotation is needed for at least that case.
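For reference, this is roughly how the two reference kinds map onto storage-buffer bindings in rust-gpu today (the names, binding numbers, and exact `spirv_std` imports here are illustrative and version-dependent):

```rust
use spirv_std::glam::UVec3;
use spirv_std::spirv;

#[spirv(compute(threads(64)))]
pub fn main_cs(
    #[spirv(global_invocation_id)] id: UVec3,
    // `&[f32]`: read-only from Rust's point of view, the natural candidate
    // for an automatic NonWritable decoration.
    #[spirv(storage_buffer, descriptor_set = 0, binding = 0)] input: &[f32],
    // `&mut [f32]`: read-write. There is no third reference kind meaning
    // "write-only", which is why NonReadable needs its own annotation.
    #[spirv(storage_buffer, descriptor_set = 0, binding = 1)] output: &mut [f32],
) {
    let i = id.x as usize;
    output[i] = input[i];
}
```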
> Correct me if I'm wrong, but given the impression that rust-gpu does not aim to simply wrap SPIR-V and provide a rusty backend, I'd strongly prefer the respective `non_writable`/`non_readable` annotation to be applied automatically based on argument mutability. Allowing both the SPIR-V annotation and mutability would be redundant and error-prone.
There is no reason not to mark `&T` as NonWritable, and spirv-tools can validate these attributes. They could be inferred from usage via reflection, or the user could apply an attribute themselves and have it checked. shaderc validates both of these attributes, so it's natural to expect rust-gpu to do the same. While NonReadable doesn't map onto Rust's type system, it is useful for a runtime when optimizing barriers (since consecutive writes without intervening reads don't need a barrier). It could also ensure that uninitialized buffers are not read. This is all possible externally, but it would be nicer to do it at compile time than potentially at runtime.
Has there been any movement on this, or potential for some sort of interim workaround?
I've been experimenting with porting bevy's PBR shaders from WGSL to Rust, and (barring any potential complications with `RuntimeArray<T>` inside structs) being able to bind the engine's read-only storage buffers on the shader side appears to be the last blocker to feature parity.
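For context, a sketch of the kind of binding being described, with made-up stand-ins for bevy's actual PBR types (and assuming a struct with a trailing `RuntimeArray<T>` field is accepted as a storage-buffer pointee):

```rust
use spirv_std::glam::Vec4;
use spirv_std::{spirv, RuntimeArray};

// Illustrative stand-in for one of bevy's PBR storage-buffer layouts; the
// names and fields here are made up.
pub struct PointLight {
    pub position_radius: Vec4,
    pub color_inverse_square_range: Vec4,
}

pub struct PointLights {
    // Trailing runtime-sized member, like `array<PointLight>` in WGSL.
    pub data: RuntimeArray<PointLight>,
}

#[spirv(fragment)]
pub fn main_fs(
    // The engine binds this buffer read-only, but on the Rust side the best
    // that can be expressed is `&T`, hence the need for NonWritable.
    #[spirv(storage_buffer, descriptor_set = 0, binding = 0)] lights: &PointLights,
    output: &mut Vec4,
) {
    // `RuntimeArray::index` is unsafe because the length is not checked.
    let light = unsafe { lights.data.index(0) };
    *output = light.color_inverse_square_range;
}
```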
> There is no reason not to mark `&T` as NonWritable.
Took a stab at that; let me know if it all fits expectations:
- https://github.com/EmbarkStudios/rust-gpu/pull/1011
> It could also ensure that uninitialized buffers are not read.
`MaybeUninit` is the standard way to do that in Rust, and it should mostly work now (since #1006).
However, `&mut [MaybeUninit<T>]` still allows reading elements that have been written, and generally speaking there's nothing you can do to the pointee type of `&mut` that would completely disallow reads; you would have to wrap the `&mut` and provide custom APIs that cannot accidentally perform reads.
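A minimal sketch of that wrapping idea in plain Rust; the `WriteOnly` name and API are made up for illustration:

```rust
use core::mem::MaybeUninit;

// Hypothetical wrapper: owns an exclusive borrow but only exposes writes,
// so reads cannot happen through it by accident.
pub struct WriteOnly<'a, T> {
    slot: &'a mut MaybeUninit<T>,
}

impl<'a, T> WriteOnly<'a, T> {
    pub fn new(slot: &'a mut MaybeUninit<T>) -> Self {
        Self { slot }
    }

    // The only operation offered: store a value. Nothing ever hands back
    // `&T` or `&mut T`, so the pointee stays unreadable through this API.
    pub fn write(&mut self, value: T) {
        self.slot.write(value);
    }
}
```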
The only viable idea I've heard for how to actually apply `NonReadable` correctly came from @Exsolutus (IIRC): infer it in the same analysis that allows `qptr` to regenerate types. For more on `qptr`, see:
- https://github.com/EmbarkStudios/spirt/pull/24
(That approach would also remove the need for #1011, and/or could be used in tandem with it by checking decorations where they already exist: a `qptr.load` from a `NonReadable` interface variable would clearly be invalid.)