Deadlock if synchronous subprocess fills pipe
If a subprocess writes a large amount of data to stdout, both the parent and the subprocess deadlock because the pipe blocks:
#include <stdlib.h>
#include <stdio.h>

#include "subprocess.h"

int main() {
  /* dd writes 65 KiB of zeroes to stdout */
  const char *command_line[] = {"dd", "if=/dev/zero", "bs=1k", "count=65", NULL};
  struct subprocess_s process;
  int result = subprocess_create(command_line, subprocess_option_search_user_path, &process);
  if (result) {
    fprintf(stderr, "Failed to create subprocess: %d\n", result);
    return 1;
  }
  int proc_return;
  result = subprocess_join(&process, &proc_return);
  if (result) {
    fprintf(stderr, "Failed to join subprocess\n");
    return 1;
  }
  printf("Subprocess returned %d\n", proc_return);
  result = subprocess_destroy(&process);
  if (result) {
    fprintf(stderr, "Failed to destroy subprocess\n");
    return 1;
  }
  return 0;
}
Notice that the subprocess dd writes 65 KiB to stdout, which exceeds Linux's default pipe buffer size of 64 KiB, so the child blocks writing to the pipe. But since subprocess_read_stdout must be used after joining, the parent cannot make progress either: it can neither drain the pipe nor wait for the child to finish.
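(For reference, the 64 KiB figure can be checked with the Linux-specific F_GETPIPE_SZ fcntl. A minimal sketch, independent of subprocess.h:)

#define _GNU_SOURCE /* F_GETPIPE_SZ is Linux-specific */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
  int fds[2];
  if (pipe(fds) != 0) {
    perror("pipe");
    return 1;
  }
  /* prints 65536 on a default-configured Linux kernel */
  printf("pipe capacity: %d bytes\n", fcntl(fds[0], F_GETPIPE_SZ));
  close(fds[0]);
  close(fds[1]);
  return 0;
}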
I can't think of a better solution than to advise you to use async. I don't really want to start spawning threads behind your back to handle this kind of thing (and I don't know of another way to fix this in general!).
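Roughly what I have in mind with async (a sketch, untested; run_and_drain is just an illustrative name — the point is subprocess_option_enable_async plus polling subprocess_read_stdout until it returns 0):

#include "subprocess.h"

int run_and_drain(const char *const command_line[], int *proc_return) {
  struct subprocess_s process;
  char buffer[4096];
  /* enable_async lets us read stdout while the child is still running */
  if (subprocess_create(command_line,
                        subprocess_option_search_user_path |
                            subprocess_option_enable_async,
                        &process)) {
    return -1;
  }
  /* drain stdout until EOF so the child never blocks on a full pipe */
  while (subprocess_read_stdout(&process, buffer, sizeof(buffer)) != 0) {
    /* output discarded here; inspect buffer if you actually need it */
  }
  if (subprocess_join(&process, proc_return)) {
    return -1;
  }
  return subprocess_destroy(&process);
}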
In my case I'm not actually using the output, so it might be good to allow ignoring stdout/stderr.
Actually, did I misunderstand something? It seems like I can read from stdout before I join. At least it works on Linux.
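For example, this runs cleanly for me (a sketch adapted from the repro above; I'm assuming it's valid to fread the FILE * from subprocess_stdout before calling subprocess_join, which is what appears to work on Linux):

#include <stdio.h>
#include "subprocess.h"

int main() {
  const char *command_line[] = {"dd", "if=/dev/zero", "bs=1k", "count=65", NULL};
  struct subprocess_s process;
  char buffer[4096];
  int proc_return;
  if (subprocess_create(command_line, subprocess_option_search_user_path, &process)) {
    return 1;
  }
  /* drain the pipe *before* joining; fread hits EOF once the child
     exits and its end of the pipe is closed */
  FILE *out = subprocess_stdout(&process);
  while (fread(buffer, 1, sizeof(buffer), out) > 0) {
    /* output discarded */
  }
  if (subprocess_join(&process, &proc_return)) {
    return 1;
  }
  printf("Subprocess returned %d\n", proc_return);
  return subprocess_destroy(&process);
}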
I might be able to add an option to ignore stdout/stderr aye.
You should be able to read before join IIRC; it's just that if there isn't enough data to read, it could block forever.
I thought it should return EOF?