docker-py

How to get a writable IO socket when executing the exec_start method

newthis opened this issue 5 years ago • 5 comments

Hi, guys. Below is my code snippet:

import docker

cid = "cec5e731c0ac"
cmd_info = "/bin/bash"
client = docker.APIClient(base_url="tcp://127.0.0.1:2377")
eid = client.exec_create(cid, cmd=cmd_info, privileged=True)
soc_ret = client.exec_start(exec_id=eid, socket=True)
soc_ret.write(b'echo hello world')
client.close()

However, the error io.UnsupportedOperation: File or stream is not writable. occurred when executing the code snippet. So I want to know if there is a way to get a writable socket from the exec_start method.

newthis avatar Jan 05 '21 01:01 newthis

import docker

client = docker.from_env()
# With socket=True the second element of the result is the connection socket
# (renamed here from `socket` to `sock` to avoid shadowing the stdlib module).
_, sock = client.containers.get("2075").exec_run("sh", stdin=True, socket=True)
print(sock)
# sock is a read-only socket.SocketIO; writes go through its private
# underlying socket.socket
sock._sock.sendall(b"ls -la\n")
try:
    # Each non-TTY frame begins with an 8-byte stream-multiplexing header
    header = sock._sock.recv(docker.constants.STREAM_HEADER_SIZE_BYTES)
    print(header)

    buffer_size = 4096  # 4 KiB
    data = b''
    while True:
        part = sock._sock.recv(buffer_size)
        data += part
        if len(part) < buffer_size:
            # either 0 or end of data
            break
    print(data.decode("utf8"))
except Exception:
    pass
sock._sock.send(b"exit\n")
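As an aside (not from the thread itself): the "unknown bytes" read before the payload above are Docker's stream-multiplexing header. When no TTY is allocated, each frame on the exec socket starts with 8 bytes identifying the stream and the payload size. A minimal, self-contained sketch of parsing one such frame:

```python
import struct

# Docker multiplexes stdout/stderr over one connection when no TTY is
# allocated. Each frame: [stream_id (1 byte), 3 zero bytes,
# payload size (big-endian uint32)], followed by `size` payload bytes.
# stream_id: 0 = stdin, 1 = stdout, 2 = stderr.
def parse_frame(buf: bytes) -> tuple[int, bytes]:
    stream_id, size = struct.unpack(">BxxxI", buf[:8])
    return stream_id, buf[8:8 + size]

# Example: a stdout frame carrying b"hello\n".
frame = struct.pack(">BxxxI", 1, 6) + b"hello\n"
stream_id, payload = parse_frame(frame)
print(stream_id, payload)  # 1 b'hello\n'
```

This is the same framing that `docker.utils.socket.frames_iter_no_tty` decodes for you.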

ghost avatar Sep 10 '21 10:09 ghost

Accessing an undocumented, untyped, internal member of the SocketIO object is not a good solution.

When called with socket=True, the DockerClient will send an HTTP POST request to the daemon that requests to upgrade the connection to a TCP connection. If the request succeeds, the underlying SocketIO belonging to the received requests.Response object is then returned. This works because the underlying socket.socket has been upgraded to a complete TCP socket which the server (daemon) keeps open rather than closing after the request's data is recv'd. However, the requests library assumes that the underlying SocketIO for a requests.Response is read-only (after all, it is a response that is typically only read from).
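The read-only behavior can be reproduced without Docker at all. The following sketch uses a local socketpair to stand in for the upgraded connection: a file object created with mode "rb" (as requests does) refuses writes, while re-wrapping the same raw socket with "rwb" does not:

```python
import io
import socket

# Stand-in for the upgraded TCP connection: requests wraps its socket
# read-only, equivalent to makefile("rb") here.
a, b = socket.socketpair()
read_only = a.makefile("rb")
try:
    read_only.write(b"echo hello\n")
except io.UnsupportedOperation as exc:
    print(exc)  # same class of error the issue reports

# Re-wrapping the same raw socket with mode "rwb" and no buffering yields
# a writable SocketIO over the very same connection.
rw = a.makefile("rwb", buffering=0)
rw.write(b"echo hello\n")
print(b.recv(64))  # b'echo hello\n'
```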

It appears that the writability of the returned SocketIO should be determined by the value of the stdin argument (writable if stdin=True). An implementation might look like:

def exec_run(self, cmd, stdout=True, stderr=True, stdin=False, tty=False,
                 privileged=False, user='', detach=False, stream=False,
                 socket=False, environment=None, workdir=None, demux=False):
        """
        Run a command inside this container. Similar to
        ``docker exec``.

        Args:
            cmd (str or list): Command to be executed
            stdout (bool): Attach to stdout. Default: ``True``
            stderr (bool): Attach to stderr. Default: ``True``
            stdin (bool): Attach to stdin. Default: ``False``
            tty (bool): Allocate a pseudo-TTY. Default: False
            privileged (bool): Run as privileged.
            user (str): User to execute command as. Default: root
            detach (bool): If true, detach from the exec command.
                Default: False
            stream (bool): Stream response data. Default: False
            socket (bool): Return the connection socket to allow custom
                read/write operations. Default: False
            environment (dict or list): A dictionary or a list of strings in
                the following format ``["PASSWORD=xxx"]`` or
                ``{"PASSWORD": "xxx"}``.
            workdir (str): Path to working directory for this exec session
            demux (bool): Return stdout and stderr separately

        Returns:
            (ExecResult): A tuple of (exit_code, output)
                exit_code: (int):
                    Exit code for the executed command or ``None`` if
                    either ``stream`` or ``socket`` is ``True``.
                output: (generator, bytes, or tuple):
                    If ``stream=True``, a generator yielding response chunks.
                    If ``socket=True``, a socket object for the connection.
                    If ``demux=True``, a tuple of two bytes: stdout and stderr.
                    A bytestring containing response data otherwise.

        Raises:
            :py:class:`docker.errors.APIError`
                If the server returns an error.
        """
        resp = self.client.api.exec_create(
            self.id, cmd, stdout=stdout, stderr=stderr, stdin=stdin, tty=tty,
            privileged=privileged, user=user, environment=environment,
            workdir=workdir,
        )
        exec_output = self.client.api.exec_start(
            resp['Id'], detach=detach, tty=tty, stream=stream, socket=socket,
            demux=demux
        )
        if stream:
            return ExecResult(None, exec_output)
        
        if socket:
            # NOTE: the `socket` parameter shadows the stdlib socket module,
            # so the module must be imported under an alias, e.g.
            # `import socket as socket_module` at the top of the file.
            if isinstance(exec_output, socket_module.socket):
                # For some reason, _get_raw_response_socket actually returns
                # the raw socket.socket object for HTTPS connections, as
                # opposed to the socket.SocketIO returned for regular HTTP
                # connections. It might be worth normalizing this to always
                # return a socket.SocketIO?
                return ExecResult(None, exec_output)
            else:
                # Create and return a new SocketIO that is writable if stdin=True
                mode: str = 'rwb' if stdin else 'rb'
                return ExecResult(
                    None, exec_output._sock.makefile(mode=mode, buffering=0)
                )

        return ExecResult(
            self.client.api.exec_inspect(resp['Id'])['ExitCode'],
            exec_output
        )

xintenseapple avatar Jul 31 '24 20:07 xintenseapple

This is still an issue one year on. Any update on this? Is Docker abandoning the Docker Python SDK?

allfro avatar Apr 12 '25 21:04 allfro

I'm looking for an example of how to write to stdin for exec_run but cannot find anything in the docs or tests. The incomplete type annotations aren't helpful either.

mickvangelderen avatar Jul 22 '25 06:07 mickvangelderen

After looking into how the Docker API works and how the Python Docker SDK client is implemented, and after trying to implement the exec functionality with asyncio and h11 for HTTP protocol parsing, I've come up with something that lets me write to stdin without a crazy amount of custom code.

import socket
from contextlib import contextmanager
from typing import Any, Callable, Type, TypeVar

import docker
import docker.utils.socket

T = TypeVar("T")


def expect_type(value: Any, type_: Type[T]) -> T:
    if isinstance(value, type_):
        return value
    raise TypeError(f"Expected value of type {type_!r} but got {value!r} of type {type(value)!r}")


@contextmanager
def cleanup(f: Callable[[], None]):
    """
    Runs the provided cleanup function after the with block finishes (not swallowing `Exception`s raised during cleanup)
    or when an `Exception` is raised (swallowing any `Exception`s raised during cleanup).
    """
    try:
        yield
    except Exception:
        try:
            f()
        except Exception:
            pass
        raise
    f()


def collect_output(sock: socket.socket) -> tuple[bytes, bytes]:
    stdout = bytearray()
    stderr = bytearray()
    for id, chunk in docker.utils.socket.frames_iter_no_tty(sock):  # type: ignore
        assert isinstance(id, int)
        assert isinstance(chunk, bytes)
        if id == 1:
            stdout.extend(chunk)
        elif id == 2:
            stderr.extend(chunk)
        else:
            raise ValueError(f"unexpected stream id: {id!r}")
    return bytes(stdout), bytes(stderr)


def test_exec():
    client = docker.from_env()
    image = client.images.pull("alpine")
    container = client.containers.create(image, command=["tail", "-f", "/dev/null"], auto_remove=True, detach=True)
    container.start()

    # Kill the container instead of stopping it. Stop waits for the process to complete which it never does on
    # purpose (tail -f /dev/null).
    with cleanup(lambda: container.kill()):  # type: ignore
        container_id = expect_type(container.id, str)
        exec = client.api.exec_create(container_id, ["cat"], stdin=True)  # type: ignore
        exec_id = expect_type(exec["Id"], str)
        sock_io = expect_type(client.api.exec_start(exec_id, stream=True, socket=True, demux=True), socket.SocketIO)  # type: ignore
        # By accessing the inner socket.socket we enable ourselves to close the write half. I could not find a stable
        # API to unwrap socket.SocketIO into its inner socket.socket.
        sock = expect_type(sock_io._sock, socket.socket)  # type: ignore

        input_data = "Hello, world!\nThis is test data\n"

        with cleanup(lambda: sock.close()):
            sock.sendall(input_data.encode())
            sock.shutdown(socket.SHUT_WR)
            stdout, stderr = collect_output(sock)

        assert stdout.decode() == input_data
        assert stderr.decode() == ""

        inspect = client.api.exec_inspect(exec_id)  # type: ignore
        exit_code = expect_type(inspect["ExitCode"], int)  # type: ignore
        assert exit_code == 0

Because the code assumes exec_start returns a value of type socket.SocketIO, and then accesses the private _sock field, I'm not sure whether it will work on all platforms and for all types of connections.
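The half-close trick in the test above (shutdown(SHUT_WR) to signal EOF to `cat` while still reading the reply) works on any stream socket; here is a self-contained sketch using a socketpair in place of the Docker exec connection:

```python
import socket

a, b = socket.socketpair()
a.sendall(b"Hello, world!\n")
# Close only the write half: the peer sees EOF after draining the data,
# while our read half stays open for the reply.
a.shutdown(socket.SHUT_WR)

received = bytearray()
while True:
    chunk = b.recv(4096)
    if not chunk:  # EOF caused by the half-close
        break
    received.extend(chunk)
print(bytes(received))  # b'Hello, world!\n'
```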

mickvangelderen avatar Jul 24 '25 05:07 mickvangelderen