FileHelpers: Add FileDescriptor.read(filling buffer:)
Because read(into:) and write(_:) return the number of bytes read or written, which may be smaller than desired, users will often want to call these functions in a loop until they have read or written all the bytes they require. Such loops require keeping track of an index and amount to repeated toil in every application.
Swift System already provides the extensions writeAll(_:) and writeAll(toAbsoluteOffset:_:), which operate on a sequence of bytes and write all the bytes in the sequence.
This patch adds an analogous helper function for reading, read(filling buffer:), which takes an UnsafeMutableRawBufferPointer and reads until the buffer is full, or until EOF is reached.
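For illustration, this is a sketch of the while-loop dance that callers currently have to write by hand (the free-function name `readFully` is an assumption for this sketch, not the proposed API):

```swift
import SystemPackage

// Sketch: read into `buffer` until it is full or EOF is reached,
// returning the number of bytes actually read.
func readFully(_ fd: FileDescriptor, into buffer: UnsafeMutableRawBufferPointer) throws -> Int {
    var total = 0
    while total < buffer.count {
        // Re-slice the buffer past the bytes already read.
        let remaining = UnsafeMutableRawBufferPointer(rebasing: buffer[total...])
        let n = try fd.read(into: remaining)
        if n == 0 { break }  // EOF before the buffer was full
        total += n
    }
    return total
}
```

The proposed helper folds exactly this bookkeeping into the FileDescriptor extension so callers no longer repeat it.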
Yep, sadly we cannot remove the existing, throwing entry point.
We could introduce a new non-throwing one while keeping the original around, but unfortunately it would need to have a more recent availability declaration, and we could only call it after an if #available check. Still, if the throwing variant is causing performance issues (does it?), then it may be a good idea to do that.
Makes sense. Well, this was in favour of my adding readAll(into:) and readAll(fromAbsoluteOffset:into:) to mirror the existing writeAll(_:) and writeAll(toAbsoluteOffset:_:) helpers.
Maybe I'll repurpose this PR for that since it provides a real motivation for the non-throwing, internal _read.
We couldn't do readAll at the time. With opaque return types with associated type bounds we should soon be able to do a read all that returns some Collection<UInt8>. When we get RawArray or some universal appendable untyped view, we can do readAll(into:).
I wonder if there's anything similar to readAll(into:) we could do that would make it easier for other types like ByteBuffer to wrap.
Do I assume that it's not appealing to have a wrapper that still deals with UnsafeMutableRawBufferPointer, one that just handles the while loop over the underlying read(into:)?
> Do I assume that it's not appealing to have a wrapper that still deals with UnsafeMutableRawBufferPointer, one that just handles the while loop over the underlying read(into:)?
read(into:) wouldn't be able to resize the UMRBP. We could have something that attempts to fill it and returns how much it filled and whether there might be more input left.
> read(into:) wouldn't be able to resize the UMRBP. We could have something that attempts to fill it and returns how much it filled and whether there might be more input left.
Ah, right, I meant one that would read until the UMRBP was filled. Clearly readAll is the wrong name here. It's "fill this buffer with read bytes, please". I don't know how it should be spelled exactly; maybe read(filling buffer:)? It's an analogue of the fact that we have a helper on the write path which does the while-loop-index dance, but nothing on the read path.
read(filling:) makes sense to me if it will avoid the loop dance for use cases that want to read up to a fixed length from a file. E.g. they're doing a segmented or buffering scheme anyways.
@lorentey, thoughts?
@milseman wrote:

> read(filling:) makes sense to me if it will avoid the loop dance for use cases that want to read up to a fixed length from a file. E.g. they're doing a segmented or buffering scheme anyways.
>
> @lorentey, thoughts?
OK, well I repurposed this PR for read(filling buffer:) and made sure to reinstate the (not actually) throwing internal _read.
@swift-ci please test
I'm fine with the read(filling:) name! (This operation is the reading equivalent to writeAll, so I would have been fine with readAll(_:) as well, for symmetry.)
I don't think this package needs to provide an operation that reads the entire file into memory returning a freshly allocated, dynamically sized, owning buffer. (That would be a new operation, independent of writeAll.)
I've used libraries where the fill-this-buffer-by-reading-bytes operation would signal an error if it runs out of bytes, and I've used ones where it would return a partial buffer. Generally I found the erroring variant more convenient, as in the other case I'd usually just end up having to signal an error myself. (This supposes that the regular partial read is available as a separate operation, as is the case with System.) So from a usability standpoint I'd prefer this threw an error on EOF.
Does that mean the use site needs to first call stat to see how big the file is in order to not over-allocate the buffer? Also, would it back out the partial fill or does that still happen as an effect?
Callers would need to manually check for it, and in the partial case they will typically end up throwing anyway.
What's your use case like? Is the idea that someone gave you a hard-request for exactly some number of bytes? Would you want to write the bytes you could and throw?
@simonjbeaumont, what's the immediate intended use of this? read will get every byte requested from a normal file, but I'm not as clear on what happens with fifos or sockets.
@milseman Is there a good reason writeAll is implemented using an opaque function?
Opaque is the default, and it's not as clear to me the benefits of making it not-opaque just to inline the loop into user code. I'm not sure if it would improve non-blocking I/O, though.
> Does that mean the use site needs to first call stat to see how big the file is in order to not over-allocate the buffer? Also, would it back out the partial fill or does that still happen as an effect?
No! The use site will typically want to read exactly $n$ bytes, for example, to deserialize a four-byte integer value of some endianness. If there aren't enough bytes, then the file is truncated/malformed and the caller will want to report an error.
It's not the best use of system resources to use unbuffered I/O like this (i.e., one syscall per primitive value read or written), but given that we provide writeAll, it seems silly not to also have the equivalent operation in the opposite direction.
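As a concrete sketch of the deserialization use case above (assuming the read(filling:) helper this PR proposes, and a hypothetical caller-side function name), reading an exact four-byte big-endian integer looks like:

```swift
import SystemPackage

// Hypothetical usage sketch: read exactly four bytes and decode them as a
// big-endian UInt32. read(filling:) is expected to throw if EOF arrives
// before the buffer is full, so a truncated file surfaces as an error.
func readBigEndianUInt32(from fd: FileDescriptor) throws -> UInt32 {
    var raw: UInt32 = 0
    try withUnsafeMutableBytes(of: &raw) { buffer in
        _ = try fd.read(filling: buffer)
    }
    return UInt32(bigEndian: raw)
}
```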
> What's your use case like? Is the idea that someone gave you a hard-request for exactly some number of bytes? Would you want to write the bytes you could and throw?
If it's okay not to fill up the buffer, then the regular read is the right operation for the job.
> Opaque is the default, and it's not as clear to me the benefits of making it not-opaque just to inline the loop into user code. I'm not sure if it would improve non-blocking I/O, though.
I'm asking because these utility functions can be expressed on top of existing public API, and moving them behind a resilience boundary forces them to come with more recent availability on Darwin.
These would probably be great use cases for the new @_backDeploy attribute; meanwhile, @_alwaysEmitIntoClient would probably be fine for them. I don't see a need to ever touch these implementations in the future.
> given that we provide writeAll, it seems silly not to also have the equivalent operation in the opposite direction.
Yes, but read goes in the opposite direction and we do not have a universal appendable untyped storage object. We could have a readAll that returns a RawArray when that's a thing and we could have one that takes something appendable as inout. writeAll doesn't have this problem because it reads from its parameter.
> If it's okay not to fill up the buffer, then the regular read is the right operation for the job.
Yes, for normal files that is guaranteed. I wasn't clear on what happens if it's a fifo. Would it just block and wait for more bytes?
> > given that we provide writeAll, it seems silly not to also have the equivalent operation in the opposite direction.
>
> Yes, but read goes in the opposite direction and we do not have a universal appendable untyped storage object. We could have a readAll that returns a RawArray when that's a thing, and we could have one that takes something appendable as inout. writeAll doesn't have this problem because it reads from its parameter.
The equivalent operation to writeAll isn't an operation that reads every remaining byte in a file descriptor into a dynamically sized buffer. I do not think System needs to provide such an operation.
writeAll writes exactly $n$ bytes to a file from a provided buffer, where $n$ is a value that is known at the time the call is made.
The read-side equivalent to that is to read exactly $n$ bytes from a file into a provided buffer, where $n$ is a value that is known at the time the call is made.
> > If it's okay not to fill up the buffer, then the regular read is the right operation for the job.
>
> Yes, for normal files that is guaranteed. I wasn't clear on what happens if it's a fifo. Would it just block and wait for more bytes?
I don't understand this question. What is guaranteed?
The exactly-fill-this-fixed-size-buffers-with-bytes-or-throw-an-error operation would call read in a loop until either (1) the buffer has been successfully filled, or (2) the EOF condition is reached or (3) read reports an error. I expect the operation would return normally in case (1), and throw an error in cases (2) and (3).
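The loop described above can be sketched as follows (the method name and the specific Errno chosen for the EOF case are assumptions for illustration, not necessarily what the PR ships):

```swift
import SystemPackage

// Sketch of the exactly-fill-or-throw semantics: call read(into:) in a loop
// until the buffer is full (1), EOF is hit (2), or read throws (3).
extension FileDescriptor {
    func fillByReading(_ buffer: UnsafeMutableRawBufferPointer) throws {
        var total = 0
        while total < buffer.count {
            // Rebase past the bytes already read so the next read appends.
            let remaining = UnsafeMutableRawBufferPointer(rebasing: buffer[total...])
            let n = try read(into: remaining)  // (3) read errors propagate
            if n == 0 {
                // (2) EOF before the buffer was filled: signal an error.
                throw Errno.ioError
            }
            total += n
        }
        // (1) buffer successfully filled: return normally.
    }
}
```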