Making buffer mapping part of the public API?
Also see:
- https://github.com/gpuweb/gpuweb/issues/605
- https://github.com/gpuweb/gpuweb/issues/897
The WebGPU API for synchronizing data between a GPUBuffer and the CPU makes use of "mapping". The user requests that the buffer be mapped, then reads from / writes to this memory via getMappedRange. When done, unmap should be called.
I find it challenging to come up with a Pythonic API for this. I think that any solution we implement should make it impossible for the user to access the memory after it is unmapped. (I mean impossible, unless the user deliberately uses ffi or something to do so.)
One can invalidate a memoryview object by calling its release() method. But if another memoryview or numpy array is mapped to the same memory, it continues to work. Similarly, resizing the underlying data object is blocked while there are views mapped to it.
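A small self-contained sketch of this behavior, using a plain bytearray as the underlying data:

```py
buf = bytearray(8)
m1 = memoryview(buf)
m2 = memoryview(buf)  # a second view on the same memory

m1.release()
# m1 is now invalid: any access raises ValueError
try:
    m1[0]
except ValueError:
    pass

# ... but m2 still works, and still exposes the same memory
m2[0] = 42

# Resizing the bytearray is blocked while m2 exists
try:
    buf.extend(b"\x00")
except BufferError:
    pass
```

So release() only invalidates that one view; it gives no handle on other views of the same memory.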
Some options:
Stick to read_data and write_data
What we have now.
Chunked writing / reading
I think this would cover most use-cases, without the need to expose the mapping stuff:
```py
def write_chunks(sequence, offset=0, size=0):
    # With sequence an iterable (or even a generator) providing tuples (offset, data)
    ...
```
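A hedged sketch of how this could work, expressed in terms of the existing write_data. The FakeBuffer class here is a hypothetical stand-in for a real GPUBuffer, used only to make the example runnable:

```py
class FakeBuffer:
    """Hypothetical stand-in for a GPUBuffer, backed by a bytearray."""

    def __init__(self, nbytes):
        self._mem = bytearray(nbytes)

    def write_data(self, data, offset=0):
        # Copy the given bytes into the buffer at the given offset
        self._mem[offset:offset + len(data)] = data

    def write_chunks(self, sequence):
        # sequence is an iterable (or even a generator) of (offset, data) tuples
        for offset, data in sequence:
            self.write_data(data, offset)


buffer = FakeBuffer(8)
buffer.write_chunks([(0, b"ab"), (4, b"cd")])
```

Because the sequence can be a generator, the chunks never need to exist in memory all at once.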
Mapping, but via a custom class so that we can restrict access
```py
import ctypes


class BufferMapping:

    def __init__(self, mem):
        self._never_touch_this_mem = mem  # a memoryview
        self._ismapped = True  # The buffer will set this to False when it's unmapped

    def cast(self, format, shape=None):
        if not self._ismapped:
            raise RuntimeError("Cannot use a buffer mapping after it's unmapped.")
        # Note: memoryview.cast() only accepts shape as a keyword argument
        if shape is None:
            self._never_touch_this_mem = self._never_touch_this_mem.cast(format)
        else:
            self._never_touch_this_mem = self._never_touch_this_mem.cast(format, shape=shape)
        return self

    def __getitem__(self, index):
        if not self._ismapped:
            raise RuntimeError("Cannot use a buffer mapping after it's unmapped.")
        res = self._never_touch_this_mem.__getitem__(index)
        if isinstance(res, memoryview):
            raise IndexError("Cannot get a sub-view")  # or also wrap in a BufferMapping?
        return res

    def __setitem__(self, index, value):
        if not self._ismapped:
            raise RuntimeError("Cannot use a buffer mapping after it's unmapped.")
        self._never_touch_this_mem.__setitem__(index, value)

    def to_memoryview(self):
        # Make a copy
        new_obj = (ctypes.c_uint8 * self._never_touch_this_mem.nbytes)()
        new_mem = memoryview(new_obj)
        new_mem[:] = self._never_touch_this_mem
        return new_mem
```
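To show the intended guard behavior, here is a trimmed, self-contained version of the class (just the access checks) together with a usage sketch; the `_ismapped = False` assignment simulates what the buffer would do on unmap:

```py
class BufferMapping:
    """Trimmed version of the class above, just enough to show the guard."""

    def __init__(self, mem):
        self._never_touch_this_mem = mem
        self._ismapped = True

    def _check(self):
        if not self._ismapped:
            raise RuntimeError("Cannot use a buffer mapping after it's unmapped.")

    def __getitem__(self, index):
        self._check()
        return self._never_touch_this_mem[index]

    def __setitem__(self, index, value):
        self._check()
        self._never_touch_this_mem[index] = value


m = BufferMapping(memoryview(bytearray(4)))
m[0] = 1  # works while mapped
m._ismapped = False  # what the buffer would do on unmap
# any access to m now raises RuntimeError
```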
The thing is ... when would you use this? To map the data and then set data elements one by one? That would be slow because of the overhead that we introduce. In batches, then? Well, in that case you could just call write_data(subdata, offset) a few times ...
The use-cases where a mapped API has an advantage (in Python) seem flaky, and the API is much more complex. Therefore we don't currently expose this API in wgpu-py.
However ... I could be missing a use-case. And I could be missing a possibly elegant solution.
In the current API (per #156) we have buffer.map_read() and buffer.map_write() for somewhat lower-level IO, and queue.read_buffer, write_buffer, read_texture and write_texture for more convenience (these use a temporary buffer behind the scenes).