network mode "custom_network" not supported by buildkit
- related https://github.com/moby/buildkit/issues/978
Background: Running a simple integration test fails with the network option:
docker network create custom_network
docker run -d --network custom_network --name mongo mongo:3.6
docker buildx build --network custom_network --target=test .
Output:
network mode "custom_network" not supported by buildkit
Still not supported? Related code: https://github.com/docker/buildx/blob/master/build/build.go#L462-L463
Sorry, I'm not sure if we will ever start supporting this, as it makes the build dependent on the configuration of a specific node and limits the build to a single node.
That horse has bolted - SSH mount makes the build dependent upon the configuration of a single node - where did that dogma even get started?
That horse has bolted - SSH mount makes the build dependent upon the configuration of a single node
No, it does not. You can forward your ssh agent against any node or a cluster of nodes in buildx. Not really different from just using private images.
Why would someone do that? ssh-agent is something that needs to be fairly well locked down - why would someone forward it across an insecure connection?
I mean, that's a tangent anyway. Being able to run integration tests in a docker build was an incredibly useful feature: one less VM to spin up, and one less iceberg to melt. It's just useful because it's efficient.
It's also great not to have to run nodejs, ruby, etc. on the build host and instead have them as container dependencies; if you can do all your tests in a docker build container, it's one less thing to lock down.
Anyhow, I apologise for running off on a tangent. All I'm saying is, it would be awesome if you could bring that functionality into the latest version of docker, along with the means to temporarily mount secrets. It's just a really lightweight way to run disposable VMs without touching the host or even giving any rights to run scripts on the host.
why would someone forward it across an insecure connection?
Why would that connection be insecure? Forwarding the agent is more secure than build secrets because your nodes never get access to your keys.
if you can do all your tests in a docker build container it's one less thing to lock down ... along with the means to temporarily mount secrets
We have solutions for build secrets, privileged execution modes (where you needed docker run before for more complicated integration tests), and persistent cache for your apt/npm cache etc. https://github.com/moby/buildkit/issues/1337 is implementing sidecar containers support. None of this breaks the portability of the build. And if you really want it, host networking is available for you.
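For reference, the secret and SSH mounts look roughly like this (the id, paths, and base image below are only illustrative):
# syntax=docker/dockerfile:1
FROM alpine
RUN apk add --no-cache openssh-client
# the secret is mounted only for this RUN step and is never written into an image layer
RUN --mount=type=secret,id=npmrc cat /run/secrets/npmrc
# the forwarded ssh agent is available only for this RUN step
RUN --mount=type=ssh ssh-add -l
built with something like:
docker buildx build --secret id=npmrc,src=$HOME/.npmrc --ssh default .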
None of this breaks the portability of the build. And if you really want it, host networking is available for you.
But I'd like to spin up a network for each build - and have all the stuff running that would be needed for the integration tests. But again, I have to loop back around and either do weird stuff with iptables, or run postgres on the host and share it with all builds (contention/secrets/writing to the same resources/etc).
You could see how it would be so much more encapsulated and attractive if I could spin up a network per build with a bunch of stub services and tear it down afterwards?
why would someone forward it across an insecure connection?
Why would that connection be insecure? Forwarding agent is more secure than build secrets because your nodes never get access to your keys.
I'm talking about the socat hack where you forward the socket over TCP - you might have been referring to something else.
https://github.com/moby/buildkit/issues/1337 sounds cool, but honestly, given the choice between something that works right now and something that will drop in two years' time, I know what most of the community would choose.
you might have been referring to something else.
https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066
Nah, your secrets and forwarding feature is great - love it. Rocker had secrets support three years ago, but that project withered on the vine.
The sidecar also sounds great and very clever and well structured. But again, 3 years ago I could build with secrets and talk to network services to run integration tests.
Also, it does work in compose while build secrets does not.
Adding another use case where specifying the network would be useful: "hermetic builds".
I'm defining a docker network with --internal that has one other container on the network: a proxy providing all the external libraries and files needed for the build. I'd like the docker build to run on this network without access to the external internet, but with access to that proxy.
I can do this with the classic docker build today, or I could create an entire VM with the appropriate network settings, and perhaps it would also work if I set up a DinD instance, but it would be useful for buildkit to support this natively.
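Concretely, with the classic builder the setup I have in mind looks something like this (the proxy image and port here are just placeholders):
docker network create --internal hermetic
# the proxy starts on the default bridge (so it can fetch, or is pre-seeded with, dependencies),
# then gets attached to the internal network as well
docker run -d --name build-proxy example/caching-proxy
docker network connect hermetic build-proxy
# classic docker build: RUN steps can reach build-proxy but not the internet
docker build --network hermetic \
  --build-arg http_proxy=http://build-proxy:3128 \
  --build-arg https_proxy=http://build-proxy:3128 \
  -t myimage .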
Good point, I should have mentioned I was doing that too for git dependencies, and... Docker themselves have blogged about using it to augment the docker cache. Now I just burn the network, take lots of coffee breaks, and do my bit to melt the ice caps.
@bryanhuntesl The proxy vars are still supported. For this use case, cache mounts might be a better solution now https://github.com/moby/buildkit/blob/master/frontend/dockerfile/docs/syntax.md#run---mounttypecache
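A minimal example of that cache-mount syntax (the base image and package manager are just an example):
# syntax=docker/dockerfile:1
FROM node:18
WORKDIR /app
COPY package*.json ./
# the npm cache persists across builds inside the builder but never ends up in the image
RUN --mount=type=cache,target=/root/.npm npm ci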
This is particularly needed in environments such as Google Cloud Build, where ambient credentials (via the special-IP metadata service) are available only on a particular named network, not on the default network, so that exposing them to build steps stays opt-in.
Any updates on this? I have also checked https://github.com/moby/buildkit/issues/978, but can't find a straight answer. I've disabled BuildKit in the Docker Desktop configuration to be able to build my containers, but I'm guessing that's just a workaround. Any progress on this would be appreciated.
The recommendation is to use buildx create --driver-opt network=custom instead when you absolutely need this capability. The same applies to the Google Cloud Build use case.
Thank you! It seemed like this was a weird use case, but it fits my needs for now. I'll be looking for a better solution, but in the meanwhile I'll use the recommendation.
The recommendation is to use buildx create --driver-opt network=custom instead when you absolutely need this capability. The same applies to the Google Cloud Build use case.
Does anyone have a working example of this in GitHub Actions? It's not working for me.
Run docker/setup-buildx-action@v1
with:
  install: true
  buildkitd-flags: --debug
  driver-opts: network=custom-network
  driver: docker-container
  use: true
env:
  DOCKER_CLI_EXPERIMENTAL: enabled
Docker info
Creating a new builder instance
/usr/bin/docker buildx create --name builder-3eaacab9-d53e-490c-9020-xxx --driver docker-container --driver-opt network=custom-network --buildkitd-flags --debug --use
builder-3eaacab9-d53e-490c-9020-bae1d022b444
Booting builder
Setting buildx as default builder
Inspect builder
BuildKit version
moby/buildkit:buildx-stable-1 => buildkitd github.com/moby/buildkit v0.9.3 8d2625494a6a3d413e3d875a2ff7xxx
Build
/usr/bin/docker build -f Dockerfile -t my_app:latest --network custom-network --target production .
time="2022-01-19T17:00:XYZ" level=warning msg="No output specified for docker-container driver. Build result will only remain in the build cache. To push result image into registry use --push or to load image into docker use --load"
error: network mode "custom-network" not supported by buildkit. You can define a custom network for your builder using the network driver-opt in buildx create.
Error: The process '/usr/bin/docker' failed with exit code 1
@existere https://docs.docker.com/build/drivers/docker-container/#custom-network
@existere https://github.com/docker/buildx/blob/master/docs/reference/buildx_create.md#use-a-custom-network
I don't see how that setup is any different from my configuration. Am I missing something?
Use a custom network
$ docker network create foonet
$ docker buildx create --name builder --driver docker-container --driver-opt network=foonet --use
$ docker buildx inspect --bootstrap
$ docker inspect buildx_buildkit_builder0 --format={{.NetworkSettings.Networks}}
map[foonet:0xc00018c0c0]
/usr/bin/docker buildx create --name builder-3eaacab9-d53e-490c-9020-xxx --driver docker-container --driver-opt network=custom-network --buildkitd-flags --debug --use
Here's the network create:
/usr/bin/docker network create custom-network
35bb341a1786f50af6b7baf7853ffc46926b62739736e93709e320xxx
/usr/bin/docker run --name my_container --network custom-network
I don't see how that setup is any different from my configuration
You don't pass the custom network name with build commands. Your builder instance is already part of that network.
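So for the Actions example above, keep driver-opts: network=custom-network on the setup-buildx-action step and drop the flag from the build command itself, roughly:
docker build -f Dockerfile -t my_app:latest --target production .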
OK, so once you've got it set up, how do you get name resolution to work? If I have a container foo that's running on my custom network, and I do docker run --rm --network custom alpine ping -c 1 foo, it's able to resolve the name foo. Likewise, if I create a builder with docker buildx create --driver docker-container --driver-opt network=custom --name example --bootstrap, and then docker exec buildx_buildkit_example0 ping -c 1 foo, that works. But if I have a Dockerfile with RUN ping -c 1 foo and then run docker buildx build --builder example ., I get bad address foo. If I manually specify the IP address, that works, but hard-coding an IP address into the Dockerfile hardly seems reasonable.
I have the same problem as @philomory. Name resolution doesn't work.
I am using network=cloudbuild on Google Cloud Platform, so I can't hardcode any IP address.
Step #2: #17 3.744 WARNING: Compute Engine Metadata server unavailable on attempt 1 of 5. Reason: [Errno -2] Name or service not known
Step #2: #17 3.750 WARNING: Compute Engine Metadata server unavailable on attempt 2 of 5. Reason: [Errno -2] Name or service not known
Step #2: #17 3.756 WARNING: Compute Engine Metadata server unavailable on attempt 3 of 5. Reason: [Errno -2] Name or service not known
Step #2: #17 3.762 WARNING: Compute Engine Metadata server unavailable on attempt 4 of 5. Reason: [Errno -2] Name or service not known
Step #2: #17 3.768 WARNING: Compute Engine Metadata server unavailable on attempt 5 of 5. Reason: [Errno -2] Name or service not known
Step #2: #17 3.771 WARNING: No project ID could be determined. Consider running `gcloud config set project` or setting the GOOGLE_CLOUD_PROJECT environment variable
Step #2: #17 3.782 WARNING: Compute Engine Metadata server unavailable on attempt 1 of 5. Reason: HTTPConnectionPool(host='metadata.google.internal', port=80): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efc17f85820>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Step #2: #17 3.917 WARNING: Compute Engine Metadata server unavailable on attempt 2 of 5. Reason: HTTPConnectionPool(host='metadata.google.internal', port=80): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efc17f85c40>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Step #2: #17 3.925 WARNING: Compute Engine Metadata server unavailable on attempt 3 of 5. Reason: HTTPConnectionPool(host='metadata.google.internal', port=80): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efc17f860d0>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Step #2: #17 3.934 WARNING: Compute Engine Metadata server unavailable on attempt 4 of 5. Reason: HTTPConnectionPool(host='metadata.google.internal', port=80): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efc17f85af0>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Step #2: #17 3.942 WARNING: Compute Engine Metadata server unavailable on attempt 5 of 5. Reason: HTTPConnectionPool(host='metadata.google.internal', port=80): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7efc17f85880>: Failed to establish a new connection: [Errno -2] Name or service not known'))
Step #2: #17 3.944 WARNING: Failed to retrieve Application Default Credentials: Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable
The builder was created with the following command:
docker buildx create --driver docker-container --driver-opt network=cloudbuild --name test --use
It seems GCE's metadata server IP is 169.254.169.254 (but I'm not sure if this is always the case), so this worked for me in Google Cloud Build:
docker buildx create --name builder --driver docker-container --driver-opt network=cloudbuild --use
docker buildx build \
--add-host metadata.google.internal:169.254.169.254 \
... \
.
and inside the Dockerfile (or using Cloud Client Libraries, which use Application Default Credentials):
RUN curl "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google"
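Wrapped up as a Cloud Build step this looks roughly as follows (assuming the docker builder image you use ships the buildx plugin; the image name and tag below are illustrative):
steps:
- name: gcr.io/cloud-builders/docker
  entrypoint: bash
  args:
    - -c
    - |
      docker buildx create --name builder --driver docker-container --driver-opt network=cloudbuild --use
      docker buildx build --add-host metadata.google.internal:169.254.169.254 -t myimage --load .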
Thanks for the tips @fibbers, it works like a charm. It will do the job until there's a real fix.
@tonistiigi What's the right way to use the docker run scenario you describe?
We have solutions for build secrets, privileged execution modes (where you needed docker run before for more complicated integration tests), and persistent cache for your apt/npm cache etc. moby/buildkit#1337 is implementing sidecar containers support. None of this breaks the portability of the build. And if you really want it, host networking is available for you.
I'm currently doing something like this:
# create network and container the build relies on
docker network create echo-server
docker run -d --name echo-server --network echo-server -p 8080:80 ealen/echo-server
# sanity check that the echo server is on the network
docker run --rm --network echo-server curlimages/curl http://echo-server:80
# create the Dockerfile, will need to hit echo-server during the build
cat << EOF > echo-client.docker
FROM curlimages/curl
RUN curl echo-server:80 && echo
EOF
# create the builder using the network from earlier
docker buildx create --name builder-5fa507d2-a5c6-4fb8-8a18-7340b233672e \
--driver docker-container \
--driver-opt network=echo-server \
--buildkitd-flags '--allow-insecure-entitlement security.insecure --allow-insecure-entitlement network.host' \
--use
# run the build, output to docker to sanity check
docker buildx build --file echo-client.docker \
--add-host echo-server:$(docker inspect echo-server | jq '.[0].NetworkSettings.Networks["echo-server"].IPAddress' | tr -d '"\n') \
--tag local/echo-test-buildx \
--output type=docker \
--builder builder-5fa507d2-a5c6-4fb8-8a18-7340b233672e .
Using --add-host like this seems like a dirty hack just to reach another container on the same network. What would be the right way to do this?
I've been seeing similar. You can run the build in a user-specified network, but the buildkit container on that network has its DNS set to Docker's localhost resolver, which doesn't get passed through to nested containers. So the RUN steps within the build don't have that DNS resolution. I'm not sure of the best way to get that to pass through; perhaps a proxy running in the buildkit container that lets DNS be set to the container IP instead of localhost?
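You can see what I mean by checking the resolver inside the builder container (container name taken from the docs example above); it points at Docker's embedded DNS on localhost, which nested build containers can't reach:
docker exec buildx_buildkit_builder0 cat /etc/resolv.conf
# nameserver 127.0.0.11   <- Docker's embedded DNS, only reachable from the builder container itself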
If you're trying to access the GCE VM Metadata Server for authentication, it is possible. Apologies for not sharing this earlier. You're welcome to use this builder:
Docker Hub: https://hub.docker.com/r/misorobotics/cloudbuildx
Source: https://github.com/MisoRobotics/cloudbuildx
name: misorobotics/cloudbuildx
args: [--tag=myimage, .]
Or misorobotics/cloudbuildx:multiarch if you want multiarch.
For the curious, this file has the meat of it: https://github.com/MisoRobotics/cloudbuildx/blob/main/entrypoint.sh