docker-compose-buildkite-plugin
                        Support mount-buildkite-agent option
a la https://github.com/buildkite-plugins/docker-buildkite-plugin#mount-buildkite-agent-optional / https://github.com/buildkite-plugins/docker-buildkite-plugin/blob/master/hooks/command#L30-L42
I'm doing this manually in my docker-compose.yml, but it'd be much nicer to straightforwardly support this use case out of the box, both to keep docker-compose.yml less coupled to CI details and because I imagine this is a pretty common use case, increasingly so with the growing usage of buildkite-agent annotate.
For other readers, I'm doing the following in my pipeline.yml now (instead of modifying docker-compose.yml to be Buildkite-specific):
steps:
  - name: 'Example step'
    command: echo Hello World
    agents:
      queue: '*'
    plugins:
      docker-compose#v2.4.1:
        run: tests
        config: .buildkite/docker-compose.yml
        volumes:
          - ${BUILDKITE_AGENT_BINARY_PATH:-/usr/bin/buildkite-agent}:/usr/bin/buildkite-agent
        environment:
          - BUILDKITE_JOB_ID
          - BUILDKITE_BUILD_ID
          - BUILDKITE_AGENT_ACCESS_TOKEN
Having this automatically added would be nicer as I have a lot of steps and so a lot of undesirable repetition (though I may post-process a simpler pipeline.yml to solve this).
NOTE: This does not work if you're using the agent-socket experiment, as it doesn't have access to BUILDKITE_AGENT_ACCESS_TOKEN. I think this could be accomplished by inspecting BUILDKITE_AGENT_ENDPOINT to determine the socket path to mount in (with volumes:) and passing that variable through.
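As a sketch of that idea (this is a hypothetical illustration, not current plugin behaviour; the example endpoint value is made up): if the agent-socket experiment publishes an endpoint like unix:///tmp/buildkite-agent-1234.sock in BUILDKITE_AGENT_ENDPOINT, the socket path could be recovered by stripping the scheme prefix, then bind-mounted into the container with volumes: and the variable passed through:

```shell
# Example value only; in a real job this would be set by the agent.
BUILDKITE_AGENT_ENDPOINT="unix:///tmp/buildkite-agent-1234.sock"

# Strip the "unix://" scheme prefix to get the filesystem path of the socket.
socket_path="${BUILDKITE_AGENT_ENDPOINT#unix://}"

echo "$socket_path"  # -> /tmp/buildkite-agent-1234.sock
```

The plugin could then add `- ${socket_path}:${socket_path}` under volumes: and pass BUILDKITE_AGENT_ENDPOINT through unchanged.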
We didn’t do this initially, but I think it’s probably a really sensible default that you could disable if it causes problems.
Or just have it off by default? I think once the agent-socket experiment is live, perhaps it could be on by default via a socket. That'd be pretty sweet and "safe"
I believe the agent-socket still needs the token passed through, so I’m not sure if it’s any more safe? I think there’s a plan to remove that requirement though, or change the token? @lox knows more!
If it’s a safety thing, we should probably default it in both plugins to off? But I’m not sure, I feel that might be overly cautious.
I reckon we could do a major version bump now, and set it on by default.
Currently, the agent-socket experiment needs BUILDKITE_AGENT_ENDPOINT and BUILDKITE_AGENT_ACCESS_TOKEN. I'm considering changing this to leave those env alone and instead add a new env which is BUILDKITE_AGENT_SOCKET to more closely follow the ssh-agent model.
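Under that ssh-agent-style model (BUILDKITE_AGENT_SOCKET is the proposal above, not a shipped feature), the container-side wiring would reduce to one bind mount plus one environment pass-through. A hypothetical sketch of the arguments the plugin would generate:

```shell
# Example value only; the agent would set this, analogous to SSH_AUTH_SOCK.
BUILDKITE_AGENT_SOCKET="/tmp/buildkite-agent-1234.sock"

# Mount the socket at the same path inside the container, and pass the
# variable through so buildkite-agent inside the container can find it.
docker_args=(
  -v "${BUILDKITE_AGENT_SOCKET}:${BUILDKITE_AGENT_SOCKET}"
  -e BUILDKITE_AGENT_SOCKET
)

echo "${docker_args[@]}"
```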
One downside to automatically trying to mount the buildkite-agent binary is that (like mounting the working dir), it falls apart quickly in things like kubernetes.
Currently, the agent-socket experiment needs BUILDKITE_AGENT_ENDPOINT and BUILDKITE_AGENT_ACCESS_TOKEN
If I read the agent code correctly, the agent that starts the socket does, but the agent that consumes it only needs the former, no? Or at least, that seemed like the point: unauthenticated access to a local socket, with another process that "closes over" the auth and proxies to the real endpoint. Or did I miss the point of that feature entirely?
One downside to automatically trying to mount the buildkite-agent binary is that (like mounting the working dir), it falls apart quickly in things like kubernetes.
I don't know anything about Kubernetes to chime in about what to do there, but I am super curious as to what exactly is different there.
The local socket is actually authenticated too, which might be overkill:
https://github.com/buildkite/agent/blob/c36e49a5c477509c4e7db31edea0b7cdd3f342cc/agent/api_proxy.go#L79
I reckon keeping it simple and having an unauthenticated BUILDKITE_AGENT_SOCKET is probably fine, given they only exist for the duration of the job.
I don't know anything about Kubernetes to chime in about what to do there, but I am super curious as to what exactly is different there.
If you have the agent running in a Docker container (rather than on the host) but with access to the Docker socket, things get messy. Host volume mounts are resolved relative to the host, so even though /usr/local/bin/buildkite-agent exists in your container, trying to mount it with Docker mounts an empty file, because the path doesn't exist on the host. Does that make sense?
Does that make sense?
Yup I see. Messy indeed...
The only way I can think to solve it is to have something like https://github.com/buildkite/sockguard rewriting certain paths.
One potentially hacky (or elegant) solution to this issue would be to use old-school data containers to provide a data volume containing the buildkite-agent binary. The docker-compose plugin could then use the override.yml to specify a volumes_from: directive for the run container pointing to the data container, so that the agent is available in the command environment.
This has the drawback of potentially requiring a PATH modification within the target container to pick up the agent binary, as I'm not sure you can use data containers to bind-mount a single file, but it would be compatible across a variety of docker-within-agent configurations.
If buildkite provided a set of standard buildkite-agent-bin data containers on dockerhub for every release of the buildkite agent it would be relatively simple to have the docker-compose plugin generate an override.yml matching the current agent version, instead of needing to copy the current agent binary.
As a minimal example:
fordas@mako:~/workspace/data_volume_test$ cat docker-compose.yml
version: '2.3'
services:
  test:
    image: ubuntu:latest
    entrypoint: /buildkite/bin/buildkite-agent --help
fordas@mako:~/workspace/data_volume_test$ cat docker-compose.override.yml
version: '2.3'
services:
  test:
    volumes:
     - buildkite-bin:/buildkite/bin
    volumes_from:
     - buildkite-bin
  buildkite-bin:
    build:
      context: .
      dockerfile: buildkite-agent.Dockerfile
    volumes:
     - buildkite-bin:/buildkite/bin
volumes:
  buildkite-bin:
fordas@mako:~/workspace/data_volume_test$ cat buildkite-agent.Dockerfile
FROM alpine:3.7
RUN apk add --no-cache \
      bash \
      curl
RUN mkdir -p /buildkite/bin
RUN curl -L https://github.com/buildkite/agent/releases/download/v3.2.0/buildkite-agent-linux-amd64-3.2.0.tar.gz | \
    tar -xz -C /usr/local/bin ./buildkite-agent && \
    chmod +x /usr/local/bin/buildkite-agent && \
    cp /usr/local/bin/buildkite-agent /buildkite/bin
VOLUME /buildkite
fordas@mako:~/workspace/data_volume_test$ docker-compose up test && docker-compose down
Creating network "data_volume_test_default" with the default driver
Creating data_volume_test_buildkite-bin_1 ... done
Creating data_volume_test_test_1          ... done
Attaching to data_volume_test_test_1
test_1           | Usage:
test_1           |
test_1           |   buildkite-agent <command> [arguments...]
test_1           |
test_1           | Available commands are:
test_1           |
test_1           |   start      Starts a Buildkite agent
test_1           |   annotate   Annotate the build page within the Buildkite UI with text from within a Buildkite job
test_1           |   artifact   Upload/download artifacts from Buildkite jobs
test_1           |   meta-data  Get/set data from Buildkite jobs
test_1           |   pipeline   Make changes to the pipeline of the currently running build
test_1           |   bootstrap  Run a Buildkite job locally
test_1           |   help       Shows a list of commands or help for one command
test_1           |
test_1           | Use "buildkite-agent <command> --help" for more information about a command.
test_1           |
data_volume_test_test_1 exited with code 0
Removing data_volume_test_test_1          ... done
Removing data_volume_test_buildkite-bin_1 ... done
Removing network data_volume_test_default
Unfortunately volumes_from is deprecated and removed in v3 and upwards of the docker-compose.yml format 😭
Yes, the v3 compose file syntax is a bit of a bear.
I believe the same general concept works by directly specifying the service dependency and volume mount in the compose override:
https://gist.github.com/asford/fc72f7450d2ae464863d6fa98d14b7d0
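For readers who don't want to follow the gist link, a minimal sketch of the same idea in v3-style syntax (an illustration of the approach, assuming the same buildkite-agent.Dockerfile as above, not a copy of the gist): the named volume is mounted in both services directly, and depends_on replaces the removed volumes_from:

```yaml
version: '3.2'
services:
  test:
    image: ubuntu:latest
    entrypoint: /buildkite/bin/buildkite-agent --help
    depends_on:
      - buildkite-bin
    volumes:
      - buildkite-bin:/buildkite/bin
  buildkite-bin:
    build:
      context: .
      dockerfile: buildkite-agent.Dockerfile
    volumes:
      - buildkite-bin:/buildkite/bin
volumes:
  buildkite-bin:
```

Note that depends_on only guarantees start order; the named volume is populated from the buildkite-bin image's /buildkite/bin content on first use.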
Had no idea how to pass buildkite-agent through to my Docker container, thank you for defining these steps (even if it's not as automatic as you'd like).
Would love to see this documented: how to pass the Buildkite agent to a Docker container.
Some time has passed since the original discussion around this issue - has anything changed that might enable a nicer solution to this? I tried @bjeanes' workaround in the inline step builder and got an error message saying that "BUILDKITE_AGENT_BINARY_PATH couldn't be interpolated".
We'd be open to revisiting this, and just providing an option to link through the buildkite-agent binary like we do on the Docker plugin, or mounting it via a data container.
Unfortunately @chrislloyd you won't be able to use that environment variable in the inline step builder — that's an agent run-time environment variable, because we wouldn't know the path to the binary on the agent machine at that stage. If you know the path to the agent binary though, you could hardcode the path instead of using the environment variable?
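For example, assuming the agent binary lives at /usr/local/bin/buildkite-agent on your agent machines (adjust to wherever your installation puts it), the step could hardcode the mount instead of using the environment variable:

```yaml
steps:
  - name: 'Example step'
    command: echo Hello World
    plugins:
      docker-compose#v2.4.1:
        run: tests
        config: .buildkite/docker-compose.yml
        volumes:
          - /usr/local/bin/buildkite-agent:/usr/bin/buildkite-agent
```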
Here we copied it over from how the docker-buildkite-plugin does it: https://github.com/buildkite-plugins/docker-buildkite-plugin/blob/master/hooks/command#L210-L230. Our version: https://github.com/buildkite-plugins/docker-compose-buildkite-plugin/compare/master...FarmWise:v3.5.0-2
As mentioned, PR #285 took care of this :)