dockerode
exec Stream Doesn't Emit End Event
I want to execute a command in an already running container using "exec". For some reason, the stream returned by "exec.start()" is not emitting the "end" event. So, I have no way of knowing when the command completes and the Node process doesn't end.
Example code is below. Am I doing something wrong, or is this a bug?
```js
const Docker = require("dockerode");
const docker = new Docker();
const PassThrough = require("stream").PassThrough;

(async () => {
  try {
    // Create/start container, if necessary.
    let container = docker.getContainer("test");
    let info;
    try {
      info = await container.inspect();
    }
    catch (error) {
      if (error.statusCode !== 404)
        throw error;
      container = await docker.createContainer({
        Image: "ubuntu",
        Tty: true,
        name: "test"
      });
    }
    info = await container.inspect();
    if (info.State.Running !== true)
      await container.start();

    // Execute 'cat' with data piped to stdin.
    const exec = await container.exec({
      AttachStdin: true,
      AttachStdout: true,
      AttachStderr: true,
      Tty: false,
      Cmd: ["cat"]
    });
    const stream = await exec.start({
      hijack: true,
      stdin: true
    });
    const input = new PassThrough();
    input.write("test one\ntest two\n");
    input.pipe(stream.output);
    docker.modem.demuxStream(stream.output, process.stdout, process.stderr);
    stream.output.on("end", async () => { // Event is not emitted.
      console.log(await exec.inspect());
    });
  }
  catch (error) {
    console.error(error);
  }
})();
```
Ended up doing this, which gave me the expected result.
```js
async function runExec(container, options) {
  await new Promise((resolve, reject) => container.exec(options, function (err, exec) {
    if (err) {
      reject(err)
      return
    }
    exec.start({ I: true, T: true }, function (err, stream) {
      if (err) {
        reject(err)
        return
      }
      stream.on('end', function () {
        resolve()
      })
      docker.modem.demuxStream(stream, process.stdout, process.stderr)
      exec.inspect(function (err, data) {
        if (err) {
          return
        }
        console.log(data)
      });
    })
  }))
}

// ...

await runExec(container, {
  Cmd: ['/bin/sh', '-c', `.......`],
  AttachStdin: true,
  AttachStdout: true,
  AttachStderr: true,
})
```
@DavidRusso I'm having the same issue and the solution proposed by @jankoritak didn't work for me. Did you find a solution?
I never found a 'real' solution. I worked around it by polling for completion: calling exec.inspect() via setInterval() and checking the 'Running' flag.
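That polling approach can be sketched as a small generic helper. The names here are hypothetical, not part of dockerode's API; in practice you would pass something like `() => exec.inspect()` as the inspect function:

```javascript
// Hypothetical helper: resolve once an exec's inspect() reports it stopped.
// `execInspect` is any async function returning at least { Running: boolean },
// e.g. () => exec.inspect() with a dockerode Exec instance.
function waitForExecExit(execInspect, intervalMs = 1000) {
  return new Promise((resolve, reject) => {
    const timer = setInterval(async () => {
      try {
        const info = await execInspect();
        if (!info.Running) {
          clearInterval(timer);
          resolve(info); // info.ExitCode etc. are available here
        }
      } catch (err) {
        clearInterval(timer);
        reject(err);
      }
    }, intervalMs);
  });
}
```

This trades the missing 'end' event for a small polling delay, but it works regardless of whether the named-pipe close ever reaches Node.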
I am also running into this issue. The 'end' event does not fire in my case even though the command finishes.
I'm seeing the same thing on Windows but not on Mac, where I do get the end event. Which platforms are you all seeing this issue?
I can confirm that this bug exists on Windows, or at least it did when I first commented about it here. I haven't tested on macOS.
FYI: It looks like it might be the same as, or similar to, a long-outstanding issue in docker-modem. As that issue suggests, the problem no longer repros for me when switching to the TCP endpoint. (I can't use that workaround in practice, but perhaps it at least hints at the underlying issue.)
Has this problem been solved on Windows?
The following function never ends on Windows:
```js
async function exec(service, cmd) {
  const docker = new Docker();
  const containers = await docker.listContainers();
  const containerInfo = containers.find(
    (c) => c.State == "running" && c.Names.find((n) => n.includes(service))
  );
  const container = await docker.getContainer(containerInfo.Id);
  const exec = await container.exec({
    Cmd: cmd,
    AttachStdout: true,
    AttachStderr: true,
  });
  const stream = await exec.start({});
  const finish = new Promise((resolve) => stream.on("end", resolve));
  docker.modem.demuxStream(stream, process.stdout, process.stderr);
  await finish;
}
```
A workaround, as @DavidRusso suggested:
```js
async function exec(service, cmd) {
  const docker = new Docker();
  const containers = await docker.listContainers();
  const containerInfo = containers.find(
    (c) => c.State == "running" && c.Names.find((n) => n.includes(service))
  );
  const container = await docker.getContainer(containerInfo.Id);
  const exec = await container.exec({
    Cmd: cmd,
    AttachStdout: true,
    AttachStderr: true,
  });
  const stream = await exec.start({});
  const finish = new Promise((resolve) => {
    // stream.on("end", resolve)
    // workaround: poll exec.inspect() instead of waiting for 'end'
    const timer = setInterval(async () => {
      const r = await exec.inspect();
      if (!r.Running) {
        clearInterval(timer);
        stream.destroy();
        resolve();
      }
    }, 1e3);
  });
  docker.modem.demuxStream(stream, process.stdout, process.stderr);
  await finish;
}
```
If this bug gets fixed, please let me know. Thanks, guys.
I basically understand the underlying bug here (but am still working on figuring out what the right fix is).
Docker heavily uses duplex sockets: both sides of the socket can write, and each side receives what the other writes. The two sides of the socket can be closed independently (e.g., the server can close its write side to tell the client there is nothing more to read, while the client is still allowed to keep writing).
On Windows, we need a compatibility layer for Duplex sockets over Named Pipes. Docker uses this library: microsoft/go-winio
My knowledge of Named Pipes (the underlying Windows OS primitive) is a bit sketchy…but as far as I can tell, they don’t really have a built-in notion of one-sided Close? And I think WinIO works around this by implementing it as a 0-length message, and treating that as a “close” message. See this comment:
go-winio/pipe.go at d68e55cd0b80e6e3cac42e945f1cff6ddb257090 · microsoft/go-winio
But because this is a fake convention, I’m not sure how common it is for other platforms / frameworks to respect this convention.
I don’t think NodeJS’s implementation of Named Pipes will respect this. In particular, the NodeJS standard library tends to wrap everything in byte streams, and doesn't really give you access to the underlying messages. So NodeJS happily consumes the 0-length message and expects the socket is still open.
Possible options to fix this:
- Change Docker to close the whole duplex socket when one side dies, instead of leaving it half-open.
- Write a little NodeJS C module for dockerode/docker-modem that matches go-winio's semantics around Named Pipes.
- Something else???
Filed a more low-level repro case here: https://github.com/microsoft/go-winio/issues/257
Docker Desktop 4.12 has a compatibility shim that should fix this: https://docs.docker.com/desktop/release-notes/#bug-fixes-and-minor-changes
Works for me now:
```ts
const runExec = async function (
  container: Docker.Container,
  command: string[]
) {
  const exec = await container.exec({
    Cmd: command,
    AttachStdout: true,
    AttachStderr: true,
    User: 'www-data',
  })
  return new Promise((resolve, reject) => {
    exec.start({}, (err, stream) => {
      if (err || !stream) {
        reject(err)
        return
      }
      stream.setEncoding('utf-8')
      stream.on('data', console.log)
      stream.on('end', resolve)
    })
  })
}

await runExec(container, ['php', 'occ', 'config:system:set', 'enforce_theme', '--value', 'light'])
```
```
$ docker -v
Docker version 20.10.18, build b40c2f6b5d
```
I use Windows, but there has been no issue so far.