Add builtin functions to consume a limited amount of input
Currently, the builtin function library lacks a low-level read function, analogous to Python's sys.stdin.readline() (or, I guess, input()) or Go's bufio.NewReader(os.Stdin).ReadString('\n').
Here's an example where I might want that behavior (meanwhile, I can't think of any way to accomplish this without a read function, though that may be a limitation of my imagination):
echo "Hi, what's your name?"
name = (read)
echo "Okay, what's your age?"
age = (read)
echo "Ha ha ha, hey "$name", you're an ancient "$age"-year old fossil!"
It seems to me, after skimming the builtin function documentation, that there is no such builtin read function yet. If I am right, I think having something like this would be immensely helpful for writing processing scripts in Elvish.
A more realistic use case: I load a CSV from some online API and want to convert it into JSON. The API's CSV output always has a header row, which I want to process separately from the body rows. So I would need to consume and process the first row with a read call before consuming and processing the rest with an all call.
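To make the header/body split concrete, here is a minimal Python sketch of the pattern being asked for (Python rather than Elvish, since the builtin in question doesn't exist yet; the function name and sample data are hypothetical). The first row is consumed separately, like the proposed read call, and the remaining rows are consumed in bulk, like all:

```python
import csv
import io
import json

def csv_to_json(stream):
    reader = csv.reader(stream)
    header = next(reader)  # consume only the header row first (the "read" step)
    # then consume and process everything that remains (the "all" step)
    rows = [dict(zip(header, row)) for row in reader]
    return json.dumps(rows)

# Usage, with an in-memory stream standing in for the API response:
print(csv_to_json(io.StringIO("name,age\nAda,36\nAlan,41\n")))
# → [{"name": "Ada", "age": "36"}, {"name": "Alan", "age": "41"}]
```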
This would be very useful. The workaround I use at the moment is resp = (head -n1 < /dev/tty), which works but is not very pretty (it also forces reading from the tty, so it wouldn't automatically work when reading from a pipe).
head -n1 works for pipe inputs as well.
For value inputs, take $n works; however, it consumes all the remaining values as well.
Having dedicated builtin commands for taking byte and value inputs is indeed useful.
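For comparison, here is a small Python sketch of the value-stream behavior such a builtin could have, using itertools.islice over a generator as a stand-in for an Elvish value stream (this is an analogy, not the Elvish implementation): the consumer stops after n items and leaves the rest available on the stream.

```python
from itertools import islice

def values():
    # a generator standing in for a pipeline's value stream
    for i in range(5):
        yield i

it = values()
first_two = list(islice(it, 2))  # like a "take 2" that stops consuming
rest = list(it)                  # the remaining values are still there
print(first_two)  # → [0, 1]
print(rest)       # → [2, 3, 4]
```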
FWIW, with the introduction of read-upto in #831, the following can be used to read a line of input:
resp = (read-upto "\n")
Or the following to remove the EOL at the end:
resp = (read-upto "\n")[:-1]
As someone who has just begun exploring Elvish, I feel this is a pretty glaring omission. One can iterate over an input value stream using "each", but there is seemingly no other provision for processing a sequence of inputs unless you capture the whole thing in a variable first. In that case you are no longer using the "pipeline" paradigm to process the data, and the loop code no longer controls how much of the data you actually consume.
It should also be noted that "e:head -n 1" is not a solution even for byte streams, as the program makes no guarantees about how much data it will consume from the underlying file, only about how much it will yield to the caller. (If the program uses buffered reads, it may read more than a line into its buffer and then write out data from the buffer until it has produced the requested number of lines. A second call to "e:head" on the same file descriptor would not necessarily start reading from the next line.) Similarly, "take 1" doesn't work, since it consumes everything.
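The over-consumption problem described above can be demonstrated with a short Python sketch (Python's io.BufferedReader behaves like the buffered reads in head; the throwaway file is just for illustration). The caller is handed exactly one line, yet the underlying file descriptor has advanced far past it:

```python
import io
import os
import tempfile

# Create a throwaway three-line file to read from.
with tempfile.NamedTemporaryFile("w", delete=False) as f:
    f.write("line1\nline2\nline3\n")
    path = f.name

fd = os.open(path, os.O_RDONLY)
buffered = io.BufferedReader(io.FileIO(fd, closefd=False))
first = buffered.readline()          # the caller gets just one line...
pos = os.lseek(fd, 0, os.SEEK_CUR)   # ...but the fd has moved further
print(first)  # → b'line1\n'
print(pos)    # → 18: the buffer swallowed all three lines, not just the first
os.close(fd)
os.remove(path)
```

A second reader picking up the same fd would find nothing left to read, which is exactly why chaining "e:head" calls cannot substitute for a real read builtin.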