dockerContainer should be pinned to a label (group) of hosts
What feature do you want to see added?
pipeline {
  agent {
    dockerContainer {
      image "ubuntu:jammy-20250530"
      label "linux"
    }
  }
  ....
}
Invalid config option "label" for agent type "dockerContainer". Valid config options are [image, connector, credentialsId, dockerHost, remoteFs] @ line 5, column 13.
In a setup where Jenkins can dispatch jobs on a mix of nodes (Windows, Linux, etc.), the Ubuntu images only run on hosts backed by Linux and dockerd.
Obviously, using label "linux" like this isn't supported. Can I be assured that the dockerContainer would run on one of my Linux hosts in this setup?
Upstream changes
No response
Are you interested in contributing this feature?
No response
I don't think that you need a label argument to the dockerContainer step, so long as your system configuration or folder configuration specifies a docker label that is allowed to run declarative Pipeline steps. The "specifying a label" documentation says:
Pipeline provides a global option on the Manage Jenkins page and on the Folder level, for specifying which agents (by Label) to use for running Docker-based Pipelines. To enable this option for Docker labels, the Docker Pipeline plugin must be installed.
I assume the dockerContainer step honors that declarative Pipeline setting. I haven't tested it, but that is how I would expect it to behave.
I've never used the dockerContainer step. The dockerContainer step is marked as experimental in its description. It is not mentioned in the Pipeline steps reference or the general documentation of the Docker plugin.
There are two automated tests of the dockerContainer declarative Pipeline step, but other than the implementation source code and those two tests, I see no mention of that step.
I regularly use the docker step with a remote docker agent that is configured over TCP with an X.509 certificate credential. It works well for my needs. It is described in the Jenkins documentation. I think you should try with the docker step instead of the experimental dockerContainer step.
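As a sketch of that suggestion: the built-in docker agent (from the Docker Pipeline plugin) does accept a label parameter alongside image, so the container is only scheduled on matching nodes. The image tag and label are taken from the example above; the stage contents are illustrative.

```groovy
// Declarative Pipeline using the docker agent instead of dockerContainer.
// The label parameter restricts scheduling to nodes labeled "linux".
pipeline {
  agent {
    docker {
      image 'ubuntu:jammy-20250530'
      label 'linux'   // only nodes with the "linux" label run this container
    }
  }
  stages {
    stage('Build') {
      steps {
        sh 'cat /etc/os-release'   // illustrative step, runs inside the container
      }
    }
  }
}
```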
pipeline {
  agent {
    dockerContainer {
      image "ubuntu:jammy-20250530"
      label "linux"
    }
  }
  ....
}
Without the label "linux", any host will attempt to create a Docker container, install remoting.jar into it, and run the pipeline. However, if no image is available for the platform (say, Windows), it bombs out with the following message:
Could not pull image: image operating system "windows" cannot be used on this platform: operating system is not supported. That is, a Windows image on a Linux host, or vice versa for a Linux image on a Windows host.
I've been using a mix of docker and setting up cloud templates which I can reference via a "label". It's just that setting up these templates is a pain, especially across 15-20 nodes, or when making a change to a set of these templates.
The problem with simply using docker as the agent and specifying the respective parameters is that features such as tools { gradle "..." } don't work out of the box. It assumes that the container already maps the tools directory into the container, and that it's on the same drive (e.g. C:\Jenkins\tools instead of E:\Jenkins\tools).
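One possible workaround for the tools directory issue, sketched here and untested: bind-mount the host's tools directory into the container at the same path via the docker agent's args parameter, so installations resolved by tools { } are visible inside. The image name and mount path below are assumptions, not from the original discussion.

```groovy
// Hypothetical workaround: mirror the agent's tools directory into the
// container at the same absolute path so tool installations resolve.
pipeline {
  agent {
    docker {
      image 'gradle:8-jdk17'                              // assumed image
      label 'linux'
      args  '-v /var/jenkins/tools:/var/jenkins/tools'    // assumed tools path
    }
  }
  stages {
    stage('Build') {
      steps {
        sh 'gradle --version'
      }
    }
  }
}
```

This only helps when the path layout is identical on every host in the label group, which circles back to the point about host-specific settings below.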
Generally, docker works well, but it's awkward to perform a series of docker exec <container-id> ... <command> calls for every step along the way. In fact, from a purist point of view, the Jenkins build should spawn the container, connect to it, install remoting.jar, run it using Java, and execute all the commands inside the container as it would on any ordinary host.
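For reference, the scripted-Pipeline form of the docker step already hides the docker exec plumbing: docker.image(...).inside runs each sh step inside the container. The node label and commands below are illustrative.

```groovy
// Scripted Pipeline: the Docker Pipeline plugin wraps each step in a
// docker exec against the running container, so no manual exec calls.
node('linux') {
  docker.image('ubuntu:jammy-20250530').inside {
    sh 'uname -a'
    sh 'make test'   // hypothetical build command
  }
}
```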
I like being able to control the image via the pipeline script, but I realize that allowing one to specify host-specific settings like --mount type=bind,src=..,dst=.. or certain other parameters is very host specific, and thus would require every host in the pool to be configured exactly the same way.
Maybe the right answer isn't to extend the experimental dockerContainer clause, but to improve the user experience for configuring "cloud" agents/templates so they can be generic or assigned to a set of similar hosts? For example, I configure a "cloud template" once, then associate it with all my Windows hosts that look exactly the same (by hostname/wildcard, labels, etc.).