
Unix socket admin endpoint doesn't accept "localhost" as host name.

Open NiklasBeierl opened this issue 11 months ago • 10 comments

Hey everyone,

the documentation states:

origins configures the list of origins that are allowed to connect to the endpoint. A default is intelligently chosen: if the listen address is loopback (e.g. localhost or a loopback IP, or a unix socket) then the allowed origins are localhost, ::1 and 127.0.0.1, joined with the listen address port (so localhost:2019 is a valid origin).

However, it seems that localhost as the hostname gets rejected:

/etc/caddy # caddy --version
v2.9.1 h1:OEYiZ7DbCzAWVb6TNEkjRcSCRGHVoZsJinoDR/n9oaY=
/etc/caddy # curl --version
curl 8.12.0 (x86_64-alpine-linux-musl) libcurl/8.12.0 OpenSSL/3.3.2 zlib/1.3.1 brotli/1.1.0 zstd/1.5.6 c-ares/1.33.1 libidn2/2.3.7 libpsl/0.21.5 nghttp2/1.62.1
Release-Date: 2025-02-05
Protocols: dict file ftp ftps gopher gophers http https imap imaps ipfs ipns mqtt pop3 pop3s rtsp smb smbs smtp smtps telnet tftp ws wss
Features: alt-svc AsynchDNS brotli HSTS HTTP2 HTTPS-proxy IDN IPv6 Largefile libz NTLM PSL SSL threadsafe TLS-SRP UnixSockets zstd
/etc/caddy # echo $CADDY_ADMIN
unix//var/run/caddy.sock
/etc/caddy # caddy reload
2025/02/09 19:22:57.702 INFO    using adjacent Caddyfile
2025/02/09 19:22:57.703 INFO    adapted config to JSON  {"adapter": "caddyfile"}
/etc/caddy # curl --unix-socket /var/run/caddy.sock http://127.0.0.1/reverse_proxy/upstreams
[]
/etc/caddy # curl --unix-socket /var/run/caddy.sock http://localhost/reverse_proxy/upstreams
{"error":"host not allowed: localhost"}

I can also observe this behaviour with caddy 2.8.4.
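
A possible workaround for now, assuming the check only keys on the request's Host header (curl derives Host from the URL, and -H can override it), would be to force an allowed value; this is expected to behave like the 127.0.0.1 request above:

/etc/caddy # curl --unix-socket /var/run/caddy.sock -H "Host: 127.0.0.1" http://localhost/reverse_proxy/upstreams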

NiklasBeierl avatar Feb 09 '25 19:02 NiklasBeierl

How can we reproduce this? (What's your config?)

mholt avatar Feb 10 '25 16:02 mholt

Ah, sorry, I meant to include that in the copied shell excerpt, but apparently I omitted a few lines.

You can set CADDY_ADMIN in the environment or use the admin directive in the Caddyfile. The minimal viable config I came up with is this:

{ 
  admin unix//var/run/caddy.sock
}

http://localhost:2000 {
  file_server {
    root .
  }
}
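
With that config, the behaviour can be reproduced directly against the socket (paths as above; /config/ is the admin API endpoint returning the current configuration). The first request should succeed, while the second should return the "host not allowed" error:

$ caddy run --config Caddyfile &
$ curl --unix-socket /var/run/caddy.sock http://127.0.0.1/config/
$ curl --unix-socket /var/run/caddy.sock http://localhost/config/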

NiklasBeierl avatar Feb 10 '25 20:02 NiklasBeierl

I found the reason why it happens. You can check this comment: https://github.com/Geun-Oh/caddy/blob/0d7c63920daecec510202c42816c883fd2dbe047/admin.go#L316-L343.

If it's okay to access the admin endpoint with localhost, maybe I can work on it.
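
In case anyone wants to experiment with relaxing that check locally, a rough sketch of building Caddy from source (standard Go toolchain assumed, no xcaddy needed):

$ git clone https://github.com/caddyserver/caddy
$ cd caddy/cmd/caddy
$ go build
$ ./caddy run --config /etc/caddy/Caddyfile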

Geun-Oh avatar Feb 18 '25 14:02 Geun-Oh

@Geun-Oh thanks for digging that up. It is indeed a very nasty edge case.

@mholt I think I can follow your reasoning in the linked source comment. But I don't really see a relevant threat here.

The only threat to caddy that validating the hostname could hypothetically mitigate here is a confused deputy: Another component that caddy "trusts" (the deputy) is tricked into submitting a request to caddy, believing it is talking to some other system. Since we are talking about unix domain sockets, this deputy must be a local process with r/w access to the socket file.

If an attacker achieves arbitrary file read/write or can make arbitrary domain-socket requests, it's game over regardless of hostname validation. The attacker just needs to grab the allowed origins from the caddy source.

So the only scenario I see remaining relevant is a deputy that the attacker can ask to perform "almost" arbitrary requests over domain sockets, but which imposes some restrictions. In my opinion, running such a component would be outright security suicide, but I will entertain it for a minute:

Let's say browsers were to connect to unix domain sockets in the future (I seriously doubt it) and allowed other sites to make requests to domain sockets (a complete nightmare). The only reasonable way to extend the same origin policy to them would be to consider different socket files different origins in the same way that different ports on localhost are different origins. This would therefore leave a caddy admin with a domain socket exactly as exposed / protected as a caddy admin bound to the loopback address.

There is nothing to gain here from a security perspective. And everyone that uses the domain socket will just be confused and frustrated.

It is unfortunate that some clients demand a bogus hostname, but I guess this is outside our control. For caddy, I'd join the colleagues from the infosec community mentioned in that comment in recommending turning off hostname validation for domain sockets: again, I don't see anything to gain here.

At the very least, I'd put a very prominent note about this into the documentation and ask you to add localhost and/or caddy as accepted hostnames, because specifying an IP address for a connection that has nothing to do with IP feels really, really weird.

NiklasBeierl avatar Feb 18 '25 19:02 NiklasBeierl

@NiklasBeierl Well, I really enjoyed reading your post. For as much as I am able to comprehend it on a flu-ridden sick day, I agree.

I'm open to considering that we don't validate hostnames for UDS, but: would adding localhost back in be acceptable, or would it be security theater?

mholt avatar Feb 18 '25 22:02 mholt

I hope you get better soon! 🍵

I'm open to considering that we don't validate hostnames for UDS, but: would adding localhost back in be acceptable, or would it be security theater?

I wouldn't have called it security theater, because I mostly associate that term with things that make a lot of fuss about enhancing security while not actually doing so.

Here we are looking not at the opposite but at the inverse situation: disabling hostname validation won't hurt, although it kinda "feels wrong". 😅

But yeah, in the sense of "adding unnecessary friction instead of security", it really would be security-theater. :)

NiklasBeierl avatar Feb 19 '25 20:02 NiklasBeierl

From what I gather above, some statements seem to be about security with unix sockets in general rather than just the admin endpoint. Are the proposed changes only related to interaction with the admin endpoint?

Just seeking to confirm that any changes proposed here won't affect using unix sockets like in the example below (unrelated to the admin endpoint).


The only reasonable way to extend the same origin policy to them would be to consider different socket files different origins in the same way that different ports on localhost are different origins. This would therefore leave a caddy admin with a domain socket exactly as exposed / protected as a caddy admin bound to the loopback address.

There is nothing to gain here from a security perspective. And everyone that uses the domain socket will just be confused and frustrated.

I'm not sure if this relates to what you're discussing, but I use unix sockets to proxy access to the host Docker socket for containers.

  • This allows me to have a single instance of Caddy with multiple unix sockets that expose only the access a client actually requires to operate with an API through a given unix socket.
  • Sharing a unix socket with containers via a volume keeps this simple. If I did so via TCP ports, each container would need to be connected to this Caddy instance via network(s) and additional constraints to restrict which client connections to allow at that port/site-address.

For example, I can run the test service with different sockets:

$ docker compose up -d --force-recreate
$ TEST_SOCKET=hello.sock docker compose run --rm test
curl: (22) The requested URL returned error: 403

$ TEST_SOCKET=world.sock docker compose run --rm test
28.0.1

# From the host:
$ curl -sSf --unix-socket /tmp/sockets/example.sock localhost/version | jq -r .Version
28.0.1

In the Caddyfile below I also included a few examples of using the site-address to route a request by hostname:

# Match a site-block with specific site-address:
$ TEST_SOCKET=hello.sock docker compose run --rm test \
  curl -w '\n' -sSf --unix-socket /var/run/docker.sock http://hello.internal/version
(hello) world

# Same site-address, different site-block due to bind:
$ TEST_SOCKET=world.sock docker compose run --rm test \
  curl -w '\n' -sSf --unix-socket /var/run/docker.sock http://hello.internal/version
hello (world)


# Site address only for a specific socket:
$ TEST_SOCKET=world.sock docker compose run --rm test \
  curl -w '\n' -sSf --unix-socket /var/run/docker.sock http://world.internal/version
Hello world.sock

# No match for `hello.sock`,
# fallback to global `http://` for the bind to attempt querying the Docker socket:
$ TEST_SOCKET=hello.sock docker compose run --rm test \
  curl -w '\n' -sSf --unix-socket /var/run/docker.sock http://world.internal/version
curl: (22) The requested URL returned error: 403

NOTE: Reproduction for the above command examples is available below.


The only threat to caddy that validating the hostname could hypothetically mitigate here is a confused deputy: Another component that caddy "trusts" (the deputy) is tricked into submitting a request to caddy, believing it is talking to some other system. Since we are talking about unix domain sockets, this deputy must be a local process with r/w access to the socket file.

FWIW I've been using multiple unix sockets to restrict access to the unix socket for docker (/var/run/docker.sock) on the host:

  • Caddy controls granular API access (based on ENV and which unix socket received a request from the client) via a request matcher (CEL) that guards forwarding the request to the real docker unix socket on the container host.
  • Those unix sockets are distributed to other containers via a single named data volume, where the containers that need access are only granted 1 of the unix sockets via a volume subpath mount.
  • I could additionally route based on the hostname of the request, but in this case only the URI path of the request is relevant (the hostname would vary based on the client calling the API). It's also why I'm using unix sockets for access control in the first place, since with TCP sockets it would AFAIK be more complicated to restrict access based on the connecting client.

So while the processes interacting with Caddy via unix sockets are local, they are isolated from each other via separate containers (that each only have a single unix socket accessible to them).

Reproduction

Caddyfile:

# Docker Socket proxy example:
# - Access to the real host docker socket is configured via ENV using a CEL matcher.
# - Access can differ by unix socket specific ENV overrides.
# NOTE: Unix socket binds and matching a request to a site-block:
# - The first site-block to bind a socket without a specific site-address will be matched as fallback for that bind.
# - Ports have no relevance when matching a site-block when a unix socket is the source of the request to match.
# - Global `bind` can be set to a unix socket if this instance of Caddy should not default to TCP bind to `0.0.0.0`.
http:// {
  # `CADDY_SOCKET_BINDS` is an optional ENV providing a space delimited list of socket paths to bind:
  import docker-api-proxy {$CADDY_SOCKET_BINDS:unix//var/run/caddy/docker.sock}
}

# Additional examples for site-address routing by bounded unix sockets.
# These have higher specificity when matching by site-address, thus have priority over `http://`
# NOTE: `|0222` is used to avoid a race condition when `CADDY_SOCKET_BINDS` ENV sets this for `world.sock`

# Two site-blocks with the same site-address with a response
# that differs by which socket the request arrives at:
http://hello.internal {
  bind unix//var/run/caddy/hello.sock
  respond "(hello) world"
}
http://hello.internal {
  bind unix//var/run/caddy/world.sock|0222
  respond "hello (world)"
}

# Only `world.sock` listens to this site-address,
# `hello.sock` thus would fallback to the generic `http://` site-block
http://world.internal {
  bind unix//var/run/caddy/world.sock|0222
  respond "Hello world.sock"
}

Caddy snippets (for docker socket proxy support)

Reference: https://github.com/caddyserver/caddy/issues/6584#issuecomment-2384694090

(docker-api-proxy-matcher) {
  @permit-endpoint <<CEL
    [{
      "socket": [{http.request.local}].map(s,
         s.substring(s.lastIndexOf('/') + 1, s.lastIndexOf('.sock'))
      )[0],
      "suffix": {path.0}.replace('_', '')
    }]

    .exists(inputs,
      [['ALLOW', 'DENY'].map(rule,
        [
          [inputs.socket, rule, inputs.suffix],
          [rule, inputs.suffix],
          [inputs.socket, rule],
          [rule],
        ]
        .map(env_parts, env_parts.join('_'))
        .map(env_name, [env_name, env_name + '_' + {method}].exists(key,
          ph(req, 'env.' + key.upperAscii())
            .split(',')
            .filter(v, size(v.trim()) > 0)
            .exists(value,
              env_name.endsWith(inputs.suffix)
                ? {path}.endsWith('/' + value)
                : value == {path.0}
            )
        ))
      )]

      .map(arr, { "allowed": arr[0], "denied": arr[1] }).exists(results,
        [0, 1, 2, 3].exists(i,
          results.allowed[i] && !(true in results.denied.slice(0, i + 1))
        )
      )
    )
  CEL
}

# As a snippet this allows providing multiple sockets to listen on via import args.
(docker-api-proxy) {
  bind {args[:]}

  # If the version prefix exists, strip it before matching `@permit-endpoint`:
  @version path_regexp ^/(v[\d\.]+)/
  uri @version strip_prefix {re.version.1}

  # The @permit-endpoint matcher:
  import docker-api-proxy-matcher
  handle @permit-endpoint {
    # Due to earlier potential `uri strip_prefix`,
    # restore the original request URI when forwarding to the host mounted docker socket:
    reverse_proxy {$HOST_SOCKET:unix//var/run/docker.sock} {
      rewrite {http.request.orig_uri}
    }
  }

  # Permission was denied:
  handle {
    respond "Forbidden" 403
  }
}

`compose.yaml` config

name: example

services:
  docker-socket-proxy:
    image: caddy:2.9
    container_name: caddy
    tty: true
    environment:
      # Demonstrating multiple unix socket proxy support:
      # - 2 container unix sockets (`world.sock` is accessible to any user, not just the file owner)
      # - 1 unix socket on the host via directory bind mount
      CADDY_SOCKET_BINDS: unix//var/run/caddy/hello.sock unix//var/run/caddy/world.sock|0222 unix//tmp/sockets/example.sock

      # API access control per unix socket:
      # Allow every socket to query `/version` and GET requests for `/networks` (`docker network ls`):
      ALLOW: version
      ALLOW_GET: networks
      # Overrides per unix socket by filename prefix:
      # - `hello.sock` is denied `/version`
      # - `world.sock` is denied `/networks`, but can query `docker ps` (GET `/containers/json`)
      # - `example.sock` is allowed any request type to `/containers` + `/exec` to support `docker exec`
      #   which also needs HEAD `/_ping` to query API version for subsequent requests.
      HELLO_DENY: version
      WORLD_ALLOW_GET: containers
      WORLD_DENY: networks
      EXAMPLE_ALLOW: containers,exec,_ping
    # Required for SELinux to permit mounting the host docker socket:
    security_opt:
      - label:disable
    volumes:
      # Caddyfile from above with snippets included:
      - ./Caddyfile:/etc/caddy/Caddyfile:ro
      # The host docker socket to proxy:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      # Named data volume for sharing unix sockets to other containers:
      - caddy-sockets:/var/run/caddy/:rw
      # Optionally share a unix socket to the host:
      - /tmp/sockets/:/tmp/sockets/:rw

  test:
    command: /bin/ash -c 'curl -sSf --unix-socket /var/run/docker.sock http://localhost/version | jq -r .Version'
    # Prevents starting this service for `docker compose up`:
    scale: 0
    # Builds `example/test` image locally with support for extra commands:
    build:
      dockerfile_inline: |
        FROM alpine
        RUN apk add curl jq docker-cli
    # Long-syntax required to use named volume with subpath.
    # This allows a common volume for the unix sockets,
    # whilst only providing a single socket for a client container to use.
    volumes:
      - type: volume
        source: caddy-sockets
        # Drop-in compatibility with containers that default to this specific unix socket path:
        target: /var/run/docker.sock
        read_only: true
        volume:
          nocopy: true
          # The unix socket to grant access to:
          subpath: ${TEST_SOCKET:-hello.sock}

volumes:
  caddy-sockets:
    name: caddy-sockets

Examples of docker socket proxy access via docker CLI command + ENV for constraints per socket:

# hello.sock:
$ TEST_SOCKET=hello.sock docker compose run --rm test docker ps
Error response from daemon: Forbidden

$ TEST_SOCKET=hello.sock docker compose run --rm test docker network ls
NETWORK ID     NAME              DRIVER    SCOPE
0b9d4b0dc021   bridge            bridge    local
1cf615f7dbc4   example_default   bridge    local
74f1671c2250   host              host      local
2a7a79e0d17e   none              null      local

# Only owner of socket has access:
$ TEST_SOCKET=hello.sock docker compose run --rm --user 1337:1337 test docker network ls
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.47/networks": dial unix /var/run/docker.sock: connect: permission denied


# world.sock:
$ TEST_SOCKET=world.sock docker compose run --rm test docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS                  PORTS                                NAMES
8037c75b10f7   example-test   "docker ps"              1 second ago    Up Less than a second                                        example-test-run-8a0a128b440b
4f752862a72b   caddy:2.9      "caddy run --config …"   8 seconds ago   Up 7 seconds            80/tcp, 443/tcp, 2019/tcp, 443/udp   caddy

$ TEST_SOCKET=world.sock docker compose run --rm test docker network ls
Error response from daemon: Forbidden

# Any user connecting through the socket is permitted:
$ TEST_SOCKET=world.sock docker compose run --rm --user 1337:1337 test docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED                  STATUS                  PORTS                                NAMES
1e690fa26ce8   example-test   "docker ps"              Less than a second ago   Up Less than a second                                        example-test-run-f1e142d6342f
4f752862a72b   caddy:2.9      "caddy run --config …"   9 seconds ago            Up 8 seconds            80/tcp, 443/tcp, 2019/tcp, 443/udp   caddy


# Testing access from the host (as root):
$ DOCKER_HOST=unix:///tmp/sockets/example.sock docker image ls
Error response from daemon: Forbidden

$ DOCKER_HOST=unix:///tmp/sockets/example.sock docker exec -it caddy ps
PID   USER     TIME  COMMAND
    1 root      0:00 caddy run --config /etc/caddy/Caddyfile --adapter caddyfile
   17 root      0:00 ps

polarathene avatar Mar 18 '25 23:03 polarathene

Are the proposed changes only related to interaction with the admin endpoint?

The proposed change only alters a detail of how one uses the Caddy admin API through a unix socket. No other socket functionality should be affected.
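
To illustrate the scope (socket paths borrowed from the two setups above, so treat this as a sketch): only the admin socket performs this Host/origin check; on a site socket, the Host merely selects a site-block or falls back to a catch-all like http://, and is not rejected outright.

# Admin API socket: the Host is validated against the allowed origins
$ curl --unix-socket /var/run/caddy.sock http://localhost/config/
{"error":"host not allowed: localhost"}

# Site socket: the Host only affects routing; whichever site-block matches answers
$ curl --unix-socket /var/run/caddy/hello.sock http://hello.internal/version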

Regarding your setup: I unfortunately can't afford to work through all of this in the context of a GitHub issue; it would take a lot of time. From a superficial look, it seems like you are trying to add granular access control to a docker-daemon socket by proxying it through caddy and denying certain requests. You then intend to expose a subset of the docker daemon's functionality to clients by giving them access to a specific "restricted" socket that caddy listens on.

So you are working around Docker's fundamental lack of access control. I have thought about this exact approach myself in the past and want to leave two pieces of advice:

  • Getting permission systems like this correct is very hard. If you really need to proceed with docker, consider joining forces with other people, for example the folks at https://github.com/Tecnativa/docker-socket-proxy. This is not an endorsement; I haven't thoroughly tested this project, but it will probably leave you (and everyone else) better off than if everybody tries to "re-invent the wheel".

  • Consider switching to Kubernetes. Role-based access control is a first-class citizen there, and contrary to popular belief, it is perfectly viable to run single-node Kubernetes, especially with distributions like K3s.

Other than that, let's keep the discussion about caddy here; we are digressing quite a bit. :)

NiklasBeierl avatar Mar 20 '25 14:03 NiklasBeierl

Off-topic

So you are working around Docker's fundamental lack of access control.

Is it really any different with the Caddy admin endpoint via a unix socket?

The Docker Engine API can also be shared via other means than a unix socket, much like the admin endpoint in Caddy.

  • Getting permission systems like this correct is very hard. If you really need to proceed with docker, consider joining forces with other people, for example the folks at https://github.com/Tecnativa/docker-socket-proxy.

I actually put my solution together because I was not happy with the one you just linked 😅

I put plenty of time into mine and I'm fairly confident with it. Caddy has all the functionality to use a single request matcher with CEL enabling very flexible access control via ENV.

  • This is not an endorsement; I haven't thoroughly tested this project, but it will probably leave you (and everyone else) better off than if everybody tries to "re-invent the wheel".

There have been various concerns with that one.

There are a few others out there. I chose Caddy as the solution to share with others since no one needs to rely on me to provide a container image; they can just add the two snippets to a Caddyfile and apply them if they like. That should be easier to trust than if I wrote my own little program.


Consider switching to Kubernetes.

Eventually I will, but there are plenty of users in communities like r/selfhosted who won't, and who would be better off with an easy-to-use solution that adds more security than exposing direct access to the socket, without the various maintenance issues that the linked docker-socket-proxy has experienced.

Other than that, let's keep the discussion about caddy here; we are digressing quite a bit. :)

👍 No need to respond to this comment. I just wanted to double-check against the statements of yours I quoted (local processes, and separate unix sockets compared to loopback + ports), since I wasn't sure if they were about unix sockets in general or just the admin endpoint in Caddy.

My two examples were shared with the intention of adding context for other use cases:

  • The socket proxy with multiple sockets locally isolated across processes via containers seemed like a good one. It wasn't meant to start an in-depth discussion of the technical details there 😅 (I just explained how it works since the raw CEL matcher is a bit complex).
  • The other one uses hostnames to affect routing behaviour for requests to the same unix socket; that is more similar to the origins discussed for the admin endpoint.

On-topic

If you had multiple Caddy instances and they could all bind to the same unix socket (~~I haven't checked~~, but I have seen Caddy support this for TCP with the same port in use), then you could choose to have different origins set and only expect the correct one to respond?

EDIT: Nope, that doesn't work. With TCP sockets a random caddy instance responds; with a unix socket it's whichever caddy instance last wrote the socket file, probably due to the different inode.

If I bind the same unix socket for the admin endpoint to a site-block with site-address http://hello.world.internal, I would receive a 403 (Forbidden). So presently you cannot share the admin unix socket either (which is fine).

Given these observations, if the admin endpoint would still behave the same as described, I don't see much point to the origin validation with a unix socket. Permission to the socket itself is trust.
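
For reference, the 403 mentioned above looked roughly like this (site-address from my earlier examples, admin socket path from the original report), presumably because the admin endpoint's origin check rejects the unknown host before any site-block is consulted:

$ curl -sS --unix-socket /var/run/caddy.sock http://hello.world.internal/config/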

polarathene avatar Mar 20 '25 20:03 polarathene

I will probably be lifting that check/requirement on UDS then.
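
For reference, once the check is lifted for unix sockets, the originally failing request would be expected to behave the same as its 127.0.0.1 counterpart:

$ curl --unix-socket /var/run/caddy.sock http://localhost/reverse_proxy/upstreams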

mholt avatar Mar 20 '25 21:03 mholt