Improve promise returned by the `.pipe()` methods
Problem
The following currently results in an unhandled promise rejection:
```js
await execa('node', ['--invalidFlag', 'script.js'])
  .pipe(execa('cat'));
```
That's because `.pipe()` returns `execa('cat')`, so only the second child process is awaited, not the first one.
Solution
`.pipe()` should await both child processes instead of only the second one.
The return value can still be the second child process, which is the current behavior.
The returned promise would still resolve with the second child process' result, which is also the current behavior.
Additionally, it would now set `result.pipedFrom` or `error.pipedFrom` with the first child process' result/error.
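For example, here is a hedged sketch of the proposed behavior, reusing the snippet from the Problem section: the first process' failure would now reject the awaited promise instead of being unhandled.

```js
import {execa} from 'execa';

try {
  await execa('node', ['--invalidFlag', 'script.js'])
    .pipe(execa('cat'));
} catch (error) {
  // With the proposed behavior, the first process' failure propagates
  // to the awaited `.pipe()` promise instead of being an unhandled rejection
  console.error(error.exitCode);
}
```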
Details
```js
await execa('echo', ['foobar'])
  .pipe('tr', ['o', 'O'])
```
Would return:
```js
{
  stdout: 'fOObar',
  exitCode: 0,
  failed: false,
  // ...
  pipedFrom: {
    stdout: 'foobar',
    exitCode: 0,
    failed: false,
    // ...
    // `pipedFrom` is recursive, when piping multiple processes
  },
}
```
When the second process fails, `error.pipedFrom` would contain the first process' result and output, which could be very useful for debugging.
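For instance, here is a sketch of what debugging that could look like under the proposed behavior (`grep` with no match is just an arbitrary way to make the second process fail):

```js
try {
  await execa('echo', ['foobar'])
    .pipe(execa('grep', ['missing']));
} catch (error) {
  // `error` describes the second process, which failed (no match, exit code 1)
  console.error(error.failed); // true
  // `error.pipedFrom` would still expose the first process' result for debugging
  console.log(error.pipedFrom.stdout); // 'foobar'
}
```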
When the first process fails, its error would be propagated as is, since this is what actually failed.
What do you think?
:+1: Looks like a good idea to me. I cannot think of any downsides.
I have been working on this feature, and it appears that `.pipe()` should not return the second child process. Instead, it should only resolve/reject with the second child process' result.
Taking the following example: `const returnValue = source.pipe(destination)`, the reasons are the following:
- It might be confusing to users whether `returnValue` is `source` or `destination`, without looking up the documentation. `destination` makes more sense. But some users might expect `source` since "fluent" interfaces usually return the same value along the chain.
- It might be confusing to users whether `returnValue` methods apply to both `source` and `destination`. For example, one might expect that `returnValue.kill()` would terminate both `source` and `destination`.
- While `returnValue` and `destination` would resolve to the same value, they would need to use different promises. That's because `returnValue`'s promise should await both `source` and `destination`. This results in an odd situation, where `returnValue` is the same child process as `destination`, and resolves to the same value as `destination`, but uses a different underlying promise. Not only is this confusing, but it is also tricky to implement. Since child processes cannot be cloned, the only way I could think of was to use a `Proxy` (see the sketch after this list). This works, but comes with its own quirks.
- When the user passes an invalid `destination` argument, we would need to reject the call using a dummy child process. This is what we already do with `execa()` when the process fails spawning, but this is a hack. The reason we do this is so that users can expect a child process to always be returned, even when there is an error.
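For illustration, here is a minimal sketch of what that `Proxy` workaround could look like (hypothetical names, not actual execa internals):

```js
// Hypothetical sketch, not actual execa code: `returnValue` behaves like
// `destination`, except that awaiting it awaits both child processes.
const proxyPipeReturnValue = (source, destination) => {
  // Awaits both child processes, but resolves with `destination`'s result
  const combinedPromise = Promise.all([source, destination])
    .then(([, destinationResult]) => destinationResult);

  return new Proxy(destination, {
    get(target, property) {
      // Redirect the promise interface to the combined promise
      if (property === 'then' || property === 'catch' || property === 'finally') {
        return combinedPromise[property].bind(combinedPromise);
      }

      // Bind methods such as `.kill()` to the real child process:
      // one of the quirks this approach has to work around
      const value = Reflect.get(target, property);
      return typeof value === 'function' ? value.bind(target) : value;
    },
  });
};
```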
All of the above is solved by requiring users not to do this:
```js
const destination = execa(...).pipe(execa(...)).pipe(execa(...))
const result = await destination
```
But this instead:
```js
const source = execa(...)
const middle = execa(...)
const destination = execa(...)
const result = await source.pipe(middle).pipe(destination)
```
This is more explicit, clearer and avoids the problems described above.
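Assuming the `pipedFrom` shape proposed above, `result` would then still expose the whole chain:

```js
console.log(result.stdout)                     // `destination`'s output
console.log(result.pipedFrom.stdout)           // `middle`'s output
console.log(result.pipedFrom.pipedFrom.stdout) // `source`'s output
```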
In the majority of cases though, users only need the result, not the child process instance itself. With the new `std*` options, there is now almost no good reason to use the child process directly, except for calling `.kill()` or using IPC. So most of the time, they will do the following, which will still work:
```js
const result = await execa(...).pipe(execa(...)).pipe(execa(...))
```