Error building a Windows container inside a Windows container; failed to prepare: failed to create scratch layer: no parent layers present: unknown
Hi! First of all, thanks to the team and to @profnandaa for driving this effort! We seem so close to having a robust Windows container ecosystem, and OCI builds are the last big missing piece. Very exciting!
We're trying to move our CI builds of Windows containers themselves into Kubernetes, and specifically into other Windows containers. I'm not sure this is a supported/possible setup, although we seem to be pretty close to having it working; we're getting stuck somewhere near buildkit's cacheManager:
> buildctl build --frontend dockerfile.v0 --no-cache --output type=image,name=test
error: failed to solve: failed to read dockerfile: failed to prepare as t4d63mk4i1ycbfkd2zeigyrqu: failed to create scratch layer: no parent layers present: unknown
Background
On the hosts, we're using EKS-optimized Full Windows images with containerd v1.6.18. Containerd on the host is listening on the standard named pipe at \\.\pipe\containerd-containerd.
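As a sanity check (a generic technique, not part of the original report), the host's named pipes can be enumerated from inside the pod to confirm the mount is visible:
PS C:\work> [System.IO.Directory]::GetFiles('\\.\pipe\') | Select-String containerd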
We're running pods matching the following spec. Note that we mount in the containerd gRPC pipe and also the root of containerd's data directory (although I tried both with and without the latter):
apiVersion: v1
kind: Pod
metadata:
  name: winbuild
  namespace: default
spec:
  runtimeClassName: win2019-full
  containers:
    - name: build
      image: .../windows-eks-oci-build:latest
      command:
        - c:/ocitools/buildkitd.exe
      env:
        - name: CONTAINER_RUNTIME_ENDPOINT
          value: 'npipe:////./pipe/containerd-containerd'
      volumeMounts:
        - name: containerd-data
          mountPath: 'c:\ProgramData\containerd'
        - name: containerd-pipe
          mountPath: '\\.\pipe\containerd-containerd'
  volumes:
    - name: containerd-data
      hostPath:
        path: 'c:\ProgramData\containerd'
    - name: containerd-pipe
      hostPath:
        path: '\\.\pipe\containerd-containerd'
We can shell into the pod; once inside, ctr works and can list and run containers on the host. run only works with --detach, though, seemingly because the stdio pipes are created on the host:
PS C:\work> ctr -n k8s.io c ls | select -first 2
CONTAINER IMAGE RUNTIME
18a5d6ab9e90684e0ee3187c64361dba5e5f4a035ddcd0ba942b0cdf62b51084 amazonaws.com/eks/pause-windows:latest io.containerd.runhcs.v1
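For illustration, a detached run along these lines works in this setup, while the same command without --detach hangs when ctr tries to attach stdio. The image and container ID here are hypothetical, and the image is assumed to be pulled already:
PS C:\work> ctr -n k8s.io run --detach mcr.microsoft.com/windows/nanoserver:ltsc2019 stdio-test cmd /c ping -t localhost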
Invocation
We make the simplest possible Dockerfile inside the container:
PS C:\work> Write-Output 'FROM mcr.microsoft.com/windows/servercore:ltsc2019' > Dockerfile
When we invoke buildctl build, we get the failure above:
PS C:\work> buildctl --debug build --frontend dockerfile.v0 --no-cache --output type=image,name=test
[+] Building 0.0s (0/1)
time="2024-04-05T16:02:35Z" level=debug msg="serving grpc connection" spanID=e1d9c515915e5cba traceID=4095134fd8e3d79f4cf9370d6b4b5539
[+] Building 0.1s (1/1) FINISHED
=> ERROR [internal] load build definition from Dockerfile 0.0s
------
> [internal] load build definition from Dockerfile:
------
error: failed to solve: failed to read dockerfile: failed to prepare as k5yc2qyklashnmvyssii9ukgu: failed to create scratch layer: no parent layers present: unknown
21296 v0.13.1 c:/ocitools/buildkitd.exe --debug --trace --debugaddr=127.0.0.1:6060
github.com/moby/buildkit/cache.(*cacheManager).New
/src/cache/manager.go:626
github.com/moby/buildkit/source/local.(*localSourceHandler).snapshot
/src/source/local/source.go:196
github.com/moby/buildkit/source/local.(*localSourceHandler).Snapshot
/src/source/local/source.go:153
github.com/moby/buildkit/solver/llbsolver/ops.(*SourceOp).Exec
/src/solver/llbsolver/ops/source.go:108
github.com/moby/buildkit/solver.(*sharedOp).Exec.func2
/src/solver/jobs.go:975
github.com/moby/buildkit/util/flightcontrol.(*call[...]).run
/src/util/flightcontrol/flightcontrol.go:121
sync.(*Once).doSlow
/usr/local/go/src/sync/once.go:74
sync.(*Once).Do
/usr/local/go/src/sync/once.go:65
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1650
21296 v0.13.1 c:/ocitools/buildkitd.exe --debug --trace --debugaddr=127.0.0.1:6060
github.com/moby/buildkit/solver.(*edge).execOp
/src/solver/edge.go:937
github.com/moby/buildkit/solver/internal/pipe.NewWithFunction.func2
/src/solver/internal/pipe/pipe.go:82
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1650
21296 v0.13.1 c:/ocitools/buildkitd.exe --debug --trace --debugaddr=127.0.0.1:6060
github.com/moby/buildkit/frontend/dockerui.(*Client).ReadEntrypoint
/src/frontend/dockerui/config.go:368
github.com/moby/buildkit/frontend/dockerfile/builder.Build
/src/frontend/dockerfile/builder/build.go:45
github.com/moby/buildkit/frontend/gateway/forwarder.(*GatewayForwarder).Solve
/src/frontend/gateway/forwarder/frontend.go:36
github.com/moby/buildkit/solver/llbsolver.(*provenanceBridge).Solve
/src/solver/llbsolver/provenance.go:175
github.com/moby/buildkit/frontend/gateway.(*llbBridgeForwarder).Solve
/src/frontend/gateway/gateway.go:715
github.com/moby/buildkit/control/gateway.(*GatewayForwarder).Solve
/src/control/gateway/gateway.go:114
github.com/moby/buildkit/frontend/gateway/pb._LLBBridge_Solve_Handler.func1
/src/frontend/gateway/pb/gateway.pb.go:3333
main.main.func3.ChainUnaryServer.func2.1.1
/src/vendor/github.com/grpc-ecosystem/go-grpc-middleware/chain.go:25
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryServerInterceptor.func1
/src/vendor/go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc/interceptor.go:326
main.unaryInterceptor.func1
/src/cmd/buildkitd/main.go:686
main.main.func3.ChainUnaryServer.func2.1.1
/src/vendor/github.com/grpc-ecosystem/go-grpc-middleware/chain.go:25
main.main.func3.ChainUnaryServer.func2
/src/vendor/github.com/grpc-ecosystem/go-grpc-middleware/chain.go:34
github.com/moby/buildkit/frontend/gateway/pb._LLBBridge_Solve_Handler
/src/frontend/gateway/pb/gateway.pb.go:3335
google.golang.org/grpc.(*Server).processUnaryRPC
/src/vendor/google.golang.org/grpc/server.go:1343
google.golang.org/grpc.(*Server).handleStream
/src/vendor/google.golang.org/grpc/server.go:1737
google.golang.org/grpc.(*Server).serveStreams.func1.1
/src/vendor/google.golang.org/grpc/server.go:986
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1650
25268 v0.13.1 c:\ocitools\buildctl.exe --debug build --frontend dockerfile.v0 --no-cache --output type=image,name=test
google.golang.org/grpc.getChainUnaryInvoker.func1
/src/vendor/google.golang.org/grpc/clientconn.go:519
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryClientInterceptor.func1
/src/vendor/go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc/interceptor.go:110
github.com/moby/buildkit/client.New.filterInterceptor.func5
/src/client/client.go:387
google.golang.org/grpc.DialContext.chainUnaryClientInterceptors.func3
/src/vendor/google.golang.org/grpc/clientconn.go:507
google.golang.org/grpc.(*ClientConn).Invoke
/src/vendor/google.golang.org/grpc/call.go:35
github.com/moby/buildkit/frontend/gateway/pb.(*lLBBridgeClient).Solve
/src/frontend/gateway/pb/gateway.pb.go:3078
github.com/moby/buildkit/client.(*gatewayClientForBuild).Solve
/src/client/build.go:94
github.com/moby/buildkit/frontend/gateway/grpcclient.(*grpcClient).Solve
/src/frontend/gateway/grpcclient/client.go:415
main.buildAction.func5.2
/src/cmd/buildctl/build.go:378
github.com/moby/buildkit/frontend/gateway/grpcclient.(*grpcClient).Run
/src/frontend/gateway/grpcclient/client.go:218
github.com/moby/buildkit/client.(*Client).Build.func2
/src/client/build.go:59
github.com/moby/buildkit/client.(*Client).solve.func3
/src/client/solve.go:300
golang.org/x/sync/errgroup.(*Group).Go.func1
/src/vendor/golang.org/x/sync/errgroup/errgroup.go:75
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1650
21296 v0.13.1 c:/ocitools/buildkitd.exe --debug --trace --debugaddr=127.0.0.1:6060
github.com/moby/buildkit/frontend/gateway.(*llbBridgeForwarder).Return
/src/frontend/gateway/gateway.go:975
github.com/moby/buildkit/control/gateway.(*GatewayForwarder).Return
/src/control/gateway/gateway.go:146
github.com/moby/buildkit/frontend/gateway/pb._LLBBridge_Return_Handler.func1
/src/frontend/gateway/pb/gateway.pb.go:3441
main.main.func3.ChainUnaryServer.func2.1.1
/src/vendor/github.com/grpc-ecosystem/go-grpc-middleware/chain.go:25
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryServerInterceptor.func1
/src/vendor/go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc/interceptor.go:326
main.unaryInterceptor.func1
/src/cmd/buildkitd/main.go:686
main.main.func3.ChainUnaryServer.func2.1.1
/src/vendor/github.com/grpc-ecosystem/go-grpc-middleware/chain.go:25
main.main.func3.ChainUnaryServer.func2
/src/vendor/github.com/grpc-ecosystem/go-grpc-middleware/chain.go:34
github.com/moby/buildkit/frontend/gateway/pb._LLBBridge_Return_Handler
/src/frontend/gateway/pb/gateway.pb.go:3443
google.golang.org/grpc.(*Server).processUnaryRPC
/src/vendor/google.golang.org/grpc/server.go:1343
google.golang.org/grpc.(*Server).handleStream
/src/vendor/google.golang.org/grpc/server.go:1737
google.golang.org/grpc.(*Server).serveStreams.func1.1
/src/vendor/google.golang.org/grpc/server.go:986
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1650
25268 v0.13.1 c:\ocitools\buildctl.exe --debug build --frontend dockerfile.v0 --no-cache --output type=image,name=test
google.golang.org/grpc.getChainUnaryInvoker.func1
/src/vendor/google.golang.org/grpc/clientconn.go:519
go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc.UnaryClientInterceptor.func1
/src/vendor/go.opentelemetry.io/contrib/instrumentation/google.golang.org/grpc/otelgrpc/interceptor.go:110
github.com/moby/buildkit/client.New.filterInterceptor.func5
/src/client/client.go:387
google.golang.org/grpc.DialContext.chainUnaryClientInterceptors.func3
/src/vendor/google.golang.org/grpc/clientconn.go:507
google.golang.org/grpc.(*ClientConn).Invoke
/src/vendor/google.golang.org/grpc/call.go:35
github.com/moby/buildkit/api/services/control.(*controlClient).Solve
/src/api/services/control/control.pb.go:2234
github.com/moby/buildkit/client.(*Client).solve.func2
/src/client/solve.go:274
golang.org/x/sync/errgroup.(*Group).Go.func1
/src/vendor/golang.org/x/sync/errgroup/errgroup.go:75
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1650
25268 v0.13.1 c:\ocitools\buildctl.exe --debug build --frontend dockerfile.v0 --no-cache --output type=image,name=test
github.com/moby/buildkit/client.(*Client).solve.func2
/src/client/solve.go:290
golang.org/x/sync/errgroup.(*Group).Go.func1
/src/vendor/golang.org/x/sync/errgroup/errgroup.go:75
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1650
In addition to the same thread dump as above, the buildkitd.exe logs contain:
time="2024-04-05T16:12:16Z" level=debug msg="debug handlers listening at 127.0.0.1:6060"
time="2024-04-05T16:12:17Z" level=warning msg="using null network as the default"
time="2024-04-05T16:12:17Z" level=debug msg="remote introspection plugin filters" filters="[type==io.containerd.runtime.v1 type==io.containerd.runtime.v2]"
time="2024-04-05T16:12:17Z" level=warning msg="git source cannot be enabled: failed to find git binary: exec: \"git\": executable file not found in %PATH%"
time="2024-04-05T16:12:17Z" level=info msg="found worker \"8eeiij92s2wyhcfs327rt3t1f\", labels=map[org.mobyproject.buildkit.worker.containerd.namespace:buildkit org.mobyproject.buildkit.worker.containerd.uuid:5baef6d0-3281-468b-8aee-e37da842170b org.mobyproject.buildkit.worker.executor:containerd org.mobyproject.buildkit.worker.hostname:winbuild org.mobyproject.buildkit.worker.network: org.mobyproject.buildkit.worker.selinux.enabled:false org.mobyproject.buildkit.worker.snapshotter:windows], platforms=[windows/amd64]"
time="2024-04-05T16:12:17Z" level=info msg="found 1 workers, default=\"8eeiij92s2wyhcfs327rt3t1f\""
time="2024-04-05T16:12:17Z" level=warning msg="currently, only the default worker can be used."
time="2024-04-05T16:12:17Z" level=info msg="running server on //./pipe/buildkitd"
time="2024-04-05T16:13:23Z" level=debug msg="session started" spanID=bd2e548eb698bed0 traceID=c3d86aaec54d9f2ecb42a2481294da33
time="2024-04-05T16:13:23Z" level=trace msg="cache manager" cache_manager=local deps="]" digest="random:3a21d9a0c90497fd2a1f448985f1ec6f96bc9e25097bff522256d4f7b5ef1d3b" error="<nil>" input=0 op=query output=0 return_cachekeys="]" stack="goroutine 100 [running]:\nruntime/debug.Stack()\n\t/usr/local/go/src/runtime/debug/stack.go:24 +0x5e\ngithub.com/moby/buildkit/util/bklog.TraceLevelOnlyStack(...)\n\t/src/util/bklog/log.go:72\ngithub.com/moby/buildkit/solver.(*cacheManager).Query(0xc0003f0230, {0x0, 0x0, 0x18721db?}, 0x0, {0xc0002fa230, 0x47}, 0x0)\n\t/src/solver/cachemanager.go:71 +0x545\ngithub.com/moby/buildkit/solver.(*edge).processUpdate(0xc0004b6140, {0x28009e0, 0xc000098280})\n\t/src/solver/edge.go:433 +0xcf1\ngithub.com/moby/buildkit/solver.(*edge).unpark(0xc0004b6140, {0xc0003f4090, 0x1, 0x1}, {0xc0003f40b0?, 0x1, 0x1}, {0xc0003f40a0, 0x1, 0x1}, ...)\n\t/src/solver/edge.go:360 +0x85\ngithub.com/moby/buildkit/solver.(*scheduler).dispatch(0xc0004d67e0, 0xc0004b6140)\n\t/src/solver/scheduler.go:150 +0x46a\ngithub.com/moby/buildkit/solver.(*scheduler).loop(0xc0004d67e0)\n\t/src/solver/scheduler.go:118 +0x1fd\ncreated by github.com/moby/buildkit/solver.newScheduler in goroutine 1\n\t/src/solver/scheduler.go:39 +0x22d\n"
time="2024-04-05T16:13:23Z" level=error msg="/moby.buildkit.v1.frontend.LLBBridge/Solve returned error: rpc error: code = Unknown desc = failed to read dockerfile: failed to prepare as 0b1n1yx3a8jux2vl8umj5e60z: failed to create scratch layer: no parent layers present: unknown"
failed to read dockerfile: failed to prepare as 0b1n1yx3a8jux2vl8umj5e60z: failed to create scratch layer: no parent layers present: unknown
Debugging info:
PS C:\work> buildctl debug info
BuildKit: github.com/moby/buildkit v0.13.1 2ae42e0c0c793d7d66b7a23424af6fd6c2f9c8f3
PS C:\work> buildkitd -version
buildkitd github.com/moby/buildkit v0.13.1 2ae42e0c0c793d7d66b7a23424af6fd6c2f9c8f3
PS C:\work> buildctl debug workers -v
ID: i8b8yib66xzfwer07hgdl9hpv
Platforms: windows/amd64
BuildKit: github.com/moby/buildkit v0.13.1 2ae42e0c0c793d7d66b7a23424af6fd6c2f9c8f3
Labels:
org.mobyproject.buildkit.worker.containerd.namespace: buildkit
org.mobyproject.buildkit.worker.containerd.uuid: 5baef6d0-3281-468b-8aee-e37da842170b
org.mobyproject.buildkit.worker.executor: containerd
org.mobyproject.buildkit.worker.hostname: winbuild
org.mobyproject.buildkit.worker.network:
org.mobyproject.buildkit.worker.selinux.enabled: false
org.mobyproject.buildkit.worker.snapshotter: windows
GC Policy rule#0:
All: false
Filters: type==source.local,type==exec.cachemount,type==source.git.checkout
Keep Duration: 48h0m0s
Keep Bytes: 512MB
GC Policy rule#1:
All: false
Keep Duration: 1440h0m0s
Keep Bytes: 2GB
GC Policy rule#2:
All: false
Keep Bytes: 2GB
GC Policy rule#3:
All: true
Keep Bytes: 2GB
I'd like to understand whether this setup is simply impossible on Windows (and we must use HostProcess pods), or whether I can get around it with configuration. Thanks!
FWIW, I also tried this in a HostProcess pod and got the exact same result, which is somewhat surprising since that should be equivalent to running buildkitd.exe directly on the host.
(edit) Same issue directly on the host as well, so I'm now not sure this ticket has anything to do with the extra container layer :)
@bnu0 -- thanks for checking this out! I'm going to take a look at this. In the meantime, please let me know the buildkit version you are using; I'll assume the latest otherwise.
Oh, wait, if I get you correctly, you are trying to build a Windows container inside a Windows container? That is not currently supported from the Windows platform side, by design. Curious, did the HostProcess pods scenario work?
It didn't; same error. After filing this issue I tried the HostProcess pod (same issue), then tried buildctl + buildkitd directly on the kubelet host, which also failed with the same error. So I think perhaps this is some issue related to Windows 2019, or to sharing a containerd layer cache between k8s (kubelet via CRI) and buildkitd?
And yes, I was ideally trying to build a container from inside another, but HostProcess would be okay too if I could get it to work; my goal is really to build Windows containers using k8s as the orchestrator. Doing it similarly to Linux (inside a container) would be awesome, but I am not surprised it isn't possible.
> And yes, I was ideally trying to build a container from inside another, but HostProcess would be okay too if I could get it to work; my goal is really to build Windows containers using k8s as the orchestrator.
This is a reasonable ask and one we will prioritize. I know that nested containers is something on the backlog of the Windows Containers platform team. I don't have any timelines for now but I'll keep you posted here as soon as I get something concrete.
> It didn't; same error. After filing this issue I tried the HostProcess pod (same issue), then tried buildctl + buildkitd directly on the kubelet host, which also failed with the same error. So I think perhaps this is some issue related to Windows 2019, or to sharing a containerd layer cache between k8s (kubelet via CRI) and buildkitd?
I will repro the scenario with WS2022 and report my findings.
Hey, I have been trying something similar with Windows 2022 on AKS... so far, same results as you.
# ./buildctl --addr tcp://10.0.24.131:1234 build --frontend=dockerfile.v0 --local dockerfile=. --output type=image,name=image:image
[+] Building 0.1s (1/1) FINISHED
=> ERROR [internal] load build definition from Dockerfile 0.0s
------
> [internal] load build definition from Dockerfile:
------
error: failed to solve: failed to read dockerfile: failed to prepare as qkwa59k9desrpu0ce78bi0lre: failed to create scratch layer: no parent layers present: unknown
and on the buildkitd side:
time="2024-04-09T18:08:16Z" level=warning msg="TLS is not enabled for tcp://0.0.0.0:1234. enabling mutual TLS authentication is highly recommended"
time="2024-04-09T18:08:16Z" level=warning msg="using null network as the default"
time="2024-04-09T18:08:16Z" level=warning msg="git source cannot be enabled: failed to find git binary: exec: \"git\": executable file not found in %PATH%"
time="2024-04-09T18:08:16Z" level=info msg="found worker \"efq2xoh6hgsq7hh9cwoebowik\", labels=map[org.mobyproject.buildkit.worker.containerd.namespace:buildkit org.mobyproject.buildkit.worker.containerd.uuid:99b19741-4a1e-4cf9-a983-30e21eef9573 org.mobyproject.buildkit.worker.executor:containerd org.mobyproject.buildkit.worker.hostname:akswp22000002 org.mobyproject.buildkit.worker.network: org.mobyproject.buildkit.worker.selinux.enabled:false org.mobyproject.buildkit.worker.snapshotter:windows], platforms=[windows/amd64]"
time="2024-04-09T18:08:16Z" level=info msg="found 1 workers, default=\"efq2xoh6hgsq7hh9cwoebowik\""
time="2024-04-09T18:08:16Z" level=warning msg="currently, only the default worker can be used."
time="2024-04-09T18:08:16Z" level=info msg="running server on [::]:1234"
time="2024-04-09T18:19:27Z" level=error msg="/moby.buildkit.v1.frontend.LLBBridge/Solve returned error: rpc error: code = Unknown desc = failed to read dockerfile: failed to prepare as mvlaf1ls59fc37wfqmd3tq0my: failed to create scratch layer: no parent layers present: unknown"
> containerd v1.6.18.
You need to use containerd v1.7.7 or later.
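For anyone checking their own nodes, the containerd version is visible via the standard containerd CLI, run on the host:
PS C:\> containerd --version
PS C:\> ctr version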
Gonna give this a shot and report back!
@bnu0 wondering if you ever tried this?
I actually tried this before I saw this issue, and I was running into hcsshim::ActivateLayer failed in Win32: The specified module could not be found. (0x7e). The buildkit daemon seems to be running fine, so I'm trying to isolate whether it's a misconfiguration on my side or whether it's simply not possible yet.
Well, I got a bit further: I was able to use a HostProcess pod to run buildkitd and buildctl.
I'm looking to run buildkitd in a DaemonSet of pods, and I can get it to run, but I get an access-denied error when trying to write to C:/Windows/SystemTemp.
Curious if you saw similar behavior?
> Well, I got a bit further: I was able to use a HostProcess pod to run buildkitd and buildctl.
Yes, you can't run these things isolated; they will have host privileges. You could experiment with a Hyper-V container, but it's unlikely to work. For the foreseeable future, on Windows, building a container that uses RUN will need host privileges.
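For reference, a minimal sketch of the HostProcess variant discussed above; the names and image are illustrative placeholders, not a confirmed working spec:
apiVersion: v1
kind: Pod
metadata:
  name: winbuild-hostprocess
spec:
  securityContext:
    windowsOptions:
      hostProcess: true
      runAsUserName: 'NT AUTHORITY\SYSTEM'
  hostNetwork: true # required for HostProcess pods
  containers:
    - name: build
      image: .../windows-eks-oci-build:latest # hypothetical image
      command:
        - c:/ocitools/buildkitd.exe
  nodeSelector:
    kubernetes.io/os: windows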
I did! I was able to have buildctl in a container which mounts the buildkitd named pipe from the host (buildkitd has to run on the actual physical host, not in a HostProcess pod). I was able to build a Windows image from inside a Windows container this way. We're working on using this configuration to run buildctl inside Windows GitLab CI containers. But for now, I think this issue can be closed, since containerd 1.6.x was the issue.
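For anyone reproducing this, a sketch of the shape of that working setup; the pipe path matches buildkitd's default from the logs above (//./pipe/buildkitd), while the pod name, image, and command are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: winbuild-client
spec:
  runtimeClassName: win2019-full
  containers:
    - name: build
      image: .../windows-eks-oci-build:latest # hypothetical image containing buildctl
      command: # keep the pod alive; builds are driven via kubectl exec
        - powershell.exe
        - -Command
        - Start-Sleep -Seconds 86400
      volumeMounts:
        - name: buildkitd-pipe
          mountPath: '\\.\pipe\buildkitd'
  volumes:
    - name: buildkitd-pipe
      hostPath:
        path: '\\.\pipe\buildkitd'
Inside the pod, buildctl then targets the mounted pipe, e.g. buildctl --addr npipe:////./pipe/buildkitd build ...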
Ok, thanks for the clarification; we will add a documentation note on this before closing. There's still a pending nested containers / DinD feature request we are tracking.