
Support running BinderHub on K8s without Docker


Proposed change

Docker has been removed from several K8s distributions. In addition, there have been requests to run BinderHub on more restricted K8s distributions such as OpenShift: https://discourse.jupyter.org/t/unable-to-attach-or-mount-volumes-unmounted-volumes-dockersocket-host/14950

Alternative options

Do nothing, though in the future we may need to modify the deployment instructions to ensure Docker is available on the K8s hosts.

Who would use this feature?

Someone who wants to run BinderHub on K8s without Docker. Someone who wants to run BinderHub with reduced privileges.

(Optional): Suggest a solution

There are several non-Docker container builders available, including:

  • podman https://podman.io/
  • buildah (used by Podman for building images) https://buildah.io/
  • img https://github.com/genuinetools/img

repo2podman already works (https://github.com/manics/repo2podman) and it shouldn't be too hard to swap in one of the other builders.

In theory it should be possible to run these without full privileges, with limited added capabilities, e.g.

  • https://www.redhat.com/sysadmin/podman-inside-container
  • https://blog.jessfraz.com/post/building-container-images-securely-on-kubernetes/
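
For illustration only, a container securityContext along the lines those articles describe might look something like the sketch below; the exact capability set, user, and seccomp settings depend on the host (cgroups version, fuse-overlayfs availability, etc.), and none of this is wired into BinderHub today.

securityContext:
  privileged: false
  runAsUser: 1000               # rootless Podman; the image must ship subuid/subgid mappings
  capabilities:
    drop: ["ALL"]
    add: ["SETUID", "SETGID"]   # for user-namespace setup via newuidmap/newgidmap
  seccompProfile:
    type: Unconfined            # or a profile tailored to Podman's syscalls
# fuse-overlayfs also needs /dev/fuse exposed to the container, e.g. via a device plugin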

So far I've managed to get a proof-of-concept podman builder running using full privileges, supported by https://github.com/jupyterhub/binderhub/pull/1512 on AWS EKS:

image:
  name: docker.io/manics/binderhub-dev
  tag: 2022-07-25-20-00

registry:
  url: docker.io
  username: <username>
  password: <password>

service:
  type: ClusterIP

config:
  BinderHub:
    base_url: /binder/
    build_capabilities:
      - privileged
    build_docker_host: ""
    build_image: "ghcr.io/manics/repo2podman:main"
    hub_url: /jupyter/
    hub_url_local: http://hub:8081/jupyter/
    image_prefix: <username>/binder-
    auth_enabled: false
    use_registry: true
  Application:
    log_level: DEBUG

extraConfig:
  0-repo2podman: |
    from binderhub.build import Build
    class Repo2PodmanBuild(Build):
        def get_r2d_cmd_options(self):
            return ["--engine=podman"] + super().get_r2d_cmd_options()
    c.BinderHub.build_class = Repo2PodmanBuild

jupyterhub:
  hub:
    baseUrl: /jupyter
    networkPolicy:
      enabled: false
  proxy:
    service:
      type: ClusterIP
    chp:
      networkPolicy:
        enabled: false
  scheduling:
    userScheduler:
      enabled: false
  ingress:
    enabled: true
    pathSuffix: "*"
    pathType: ImplementationSpecific
    # https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/group.name: binder
      alb.ingress.kubernetes.io/target-type: ip
      alb.ingress.kubernetes.io/scheme: internet-facing


ingress:
  enabled: true
  pathSuffix: "binder/*"
  pathType: ImplementationSpecific
  # https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/guide/ingress/annotations/
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: binder
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/scheme: internet-facing

There are several limitations:

  • Still requires a privileged container
  • No caching since it's not connecting to an external Docker daemon (probably need a host volume mount for the container store)
  • The Docker registry is playing up; not sure if that's related or something else

manics avatar Jul 25 '22 21:07 manics

Hi,

Thanks for getting this started!

While waiting for this to be fully working, one alternative could be to have a cluster whose nodes use different runtimes (or just have Docker available), so that the Docker-requiring pods can be isolated on one or more dedicated nodes. If I have read the charts correctly, this could be achieved by setting the config.BinderHub.build_node_selector value.
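
As a rough sketch of the Helm values that would involve (the node label is purely illustrative; any label applied to the Docker-capable nodes would work):

config:
  BinderHub:
    # schedule build pods only on nodes that have a Docker daemon available
    build_node_selector:
      example.com/docker-build: "true"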

Is this something that should be documented?

sgaist avatar Sep 02 '22 14:09 sgaist

I did some additional tests and realized that there is no need for a heterogeneous cluster. One can either have Docker installed on the build nodes if the Unix socket is used, or use the dind deployment.

For the push part, manics/repo2podman/pull/32 is a starting point.

The next thing to do is to mount the Docker credentials in an appropriate folder and point Podman to it (depending on whether the pod runs as a user other than root). This can be done in a similar fashion to what is done now; however, the REGISTRY_AUTH_FILE environment variable may need to be set for the build container.
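
As a sketch of that last step, assuming a hypothetical secret name and mount path (this is a plain pod-spec fragment, not an existing chart option):

containers:
  - name: build
    image: ghcr.io/manics/repo2podman:main
    env:
      # point Podman/Buildah at the mounted registry credentials
      - name: REGISTRY_AUTH_FILE
        value: /var/run/registry-auth/auth.json
    volumeMounts:
      - name: registry-auth
        mountPath: /var/run/registry-auth
        readOnly: true
volumes:
  - name: registry-auth
    secret:
      secretName: binder-push-secret       # hypothetical dockerconfigjson secret
      items:
        - key: .dockerconfigjson
          path: auth.json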

sgaist avatar Sep 13 '22 16:09 sgaist

Following up on yesterday's conversation with @sgaist after https://github.com/jupyterhub/team-compass/issues/554 (please correct me if I've said anything incorrect or missed anything!)

  • If we use the podman system service command to run Podman as a daemon, it provides a Docker-compatible API, which means repo2docker should just work; there's no need to use repo2podman
  • The implementation in https://github.com/jupyterhub/binderhub/pull/1531 is very close to what I've also come up with, and is probably the quickest way to add Podman support. It follows the Docker-in-Docker approach of running Podman build pods in a DaemonSet, with a host-mounted socket and a host-mounted container cache directory (see the DaemonSet sketch after this list)
  • We need to check that image cleaning still works (in theory it should, since the Podman socket is compatible with the Docker socket)
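
For concreteness, a rough sketch of what such a DaemonSet could look like (image name, paths and labels are illustrative; this is not the actual implementation in the PR above):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: podman-api
spec:
  selector:
    matchLabels:
      app: podman-api
  template:
    metadata:
      labels:
        app: podman-api
    spec:
      containers:
        - name: podman
          image: quay.io/podman/stable
          # expose a Docker-compatible API on a host-mounted socket
          command: ["podman", "system", "service", "--time=0", "unix:///run/podman-host/podman.sock"]
          securityContext:
            privileged: true   # still a limitation, see the nice-to-haves below
          volumeMounts:
            - name: podman-sock
              mountPath: /run/podman-host
            - name: container-store
              mountPath: /var/lib/containers
      volumes:
        - name: podman-sock
          hostPath:
            path: /run/podman-host
            type: DirectoryOrCreate
        - name: container-store
          hostPath:
            path: /var/lib/podman-containers
            type: DirectoryOrCreate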

Nice to haves

  • Move away from the reliance on the host volume cache, for instance by using a PVC. Since we're using a DaemonSet, all build pods would mount the same PVC; whether Podman can handle multiple build processes running off the same cache directory is unknown.
  • Don't run as a privileged pod. It should be possible to run Podman as an unprivileged pod with limited additional container capabilities, though this depends on the underlying host's cgroups configuration.
  • Support a self-contained build pod that requires minimal host support:
    • Replace the host volume container cache directory with e.g. a PVC per pod, perhaps using a StatefulSet?
    • Replace the host volume socket with a service that exposes the Podman API. Whereas Docker can listen on a TLS-protected socket with client and server certificates, Podman does not support this: it can only listen on a Unix socket or an unencrypted HTTP endpoint, so in the short term this requires running a TLS proxy in the Podman pod to front the Podman service (see the sidecar sketch after this list).
  • Support non-daemon builders, such as https://github.com/genuinetools/img. It's already possible to run Podman without a daemon using the repo2podman container with some additional privileges, but there is no shared build cache. One way around this is to mount a build cache volume into the build pod, but that only works if the builder can handle multiple build processes simultaneously using the same cache.
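
As an illustration of the TLS-proxy idea above (image, port and certificate paths are all hypothetical), a sidecar in the Podman pod could terminate TLS in front of the Unix socket, e.g. with socat:

containers:
  - name: tls-proxy
    image: alpine/socat
    # terminate TLS with client-certificate verification, then forward to the local Podman socket
    args:
      - "OPENSSL-LISTEN:2376,reuseaddr,fork,cert=/certs/server.pem,cafile=/certs/ca.pem,verify=1"
      - "UNIX-CONNECT:/run/podman/podman.sock"
    volumeMounts:
      - name: podman-sock
        mountPath: /run/podman
      - name: tls-certs
        mountPath: /certs
        readOnly: true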

The Podman-in-Kubernetes approach is the quickest solution. The nice-to-haves require significantly more investigation and work, so they may be best left for a future PR, unless we come up with a good plan now for potentially re-architecting BinderHub.

manics avatar Sep 21 '22 13:09 manics

I tested the image cleaner and, from the looks of it, it is working. The script does not do its disk check in a Docker-specific way, so the fact that it watches the Podman storage folder rather than the Docker equivalent has no effect on its operation.

However, there is one thing we should perhaps add to the documentation somewhere: unless the cleaner is connected to the host Docker daemon and the node uses cri-dockerd (k8s >= 1.24), it cannot be relied upon to lower disk pressure on the Kubernetes image storage.

sgaist avatar Sep 23 '22 08:09 sgaist

Thank you very much for your suggestion. I just added an extraConfig that overrides DockerRegistry.get_image_manifest with the additional header needed to fetch image manifests from the internal OpenShift/OKD registry, and now BinderHub works on my OpenShift/OKD instances:

  ....
  use_registry: true
  image_prefix: default-route-openshift-image-registry.example.com/<namespace>/binderhub-
  DockerRegistry:
    url: https://default-route-openshift-image-registry.example.com
    token_url: https://default-route-openshift-image-registry.example.com/openshift/token?account=serviceaccount
    username: serviceaccount
    password: <default_builder_serviceaccount_token>

extraConfig:
  0-repo2podman: |
    from binderhub.build import Build
    class Repo2PodmanBuild(Build):
        def get_r2d_cmd_options(self):
            return ["--engine=podman"] + super().get_r2d_cmd_options()
    c.BinderHub.build_class = Repo2PodmanBuild

  1-openshift-registry: |
    import base64
    import json
    import os
    from urllib.parse import urlparse

    from tornado import httpclient
    from tornado.httputil import url_concat
    from traitlets import Dict, Unicode, default
    from traitlets.config import LoggingConfigurable
    from binderhub.registry import DockerRegistry
    class DockerRegistryOKD(DockerRegistry):
      async def get_image_manifest(self, image, tag):
        client = httpclient.AsyncHTTPClient()
        url = f"{self.url}/v2/{image}/manifests/{tag}"
        # first, get a token to perform the manifest request
        if self.token_url:
            auth_req = httpclient.HTTPRequest(
                url_concat(
                    self.token_url,
                    {
                        "scope": f"repository:{image}:pull",
                        "service": "container_registry",
                    },
                ),
                auth_username=self.username,
                auth_password=self.password,
            )
            auth_resp = await client.fetch(auth_req)
            response_body = json.loads(auth_resp.body.decode("utf-8", "replace"))

            if "token" in response_body.keys():
                token = response_body["token"]
            elif "access_token" in response_body.keys():
                token = response_body["access_token"]
                
            # On OKD/OpenShift an additional "Accept: application/vnd.oci.image.manifest.v1+json" header is needed
            req = httpclient.HTTPRequest(
                url,
                headers={"Authorization": f"Bearer {token}","Accept": "application/vnd.oci.image.manifest.v1+json"},
            )
        else:
            # Use basic HTTP auth (htpasswd)
            req = httpclient.HTTPRequest(
                url,
                auth_username=self.username,
                auth_password=self.password,
            )

        try:
            resp = await client.fetch(req)
        except httpclient.HTTPError as e:
            if e.code == 404:
                # 404 means it doesn't exist
                return None
            else:
                raise
        else:
            return json.loads(resp.body.decode("utf-8"))
    c.BinderHub.registry_class = DockerRegistryOKD

depouill avatar Oct 25 '22 08:10 depouill

Most of this was done in https://github.com/jupyterhub/binderhub/pull/1531! There are a few follow-ups, but the key requirement (running without Docker) is done!

manics avatar Dec 15 '22 16:12 manics