Unrecognized field "LayerSources" error when processing an image in Windows Containers
- Jib version: 2.0.1-SNAPSHOT (84933d528b4e4d445d88686e7c7db3945d6c896f)
- Build tool: Gradle 5.6.2
- OS: Windows 10
Description of the issue: The following error occurs when trying to build an image on Windows Containers:
Execution failed for task ':jibDockerBuild'.
> com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: Unrecognized field "LayerSources" (class com.google.cloud.tools.jib.docker.json.DockerManifestEntryTemplate), not marked as ignorable (3 known properties: "config", "repoTags", "layers"])
at [Source: (sun.nio.ch.ChannelInputStream); line: 1, column: 1015] (through reference chain: java.lang.Object[][0]->com.google.cloud.tools.jib.docker.json.DockerManifestEntryTemplate["LayerSources"])
Expected behavior: No error.
Steps to reproduce:
0. Be on a Windows machine with Docker installed, switched to Windows Containers mode.
1. Build jib from master as described here.
2. docker pull openjdk:11.0.1-windowsservercore-ltsc2016 (to load the image into the Docker daemon cache).
3. Use the build.gradle shown below.
4. Execute jibDockerBuild.
jib-gradle-plugin configuration:
jib {
from {
image = 'docker://openjdk:11.0.1-windowsservercore-ltsc2016'
}
}
Log output:
...
Executing tasks:
[=========== ] 37.5% complete
> saving base image openjdk:11.0.1-windowsserve...
Executing tasks:
[=========== ] 37.5% complete
> processing base image openjdk:11.0.1-windowss...
> Task :jibDockerBuild FAILED
7 actionable tasks: 1 executed, 6 up-to-date
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':jibDockerBuild'.
> com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: Unrecognized field "LayerSources" (class com.google.cloud.tools.jib.docker.json.DockerManifestEntryTemplate), not marked as ignorable (3 known properties: "config", "repoTags", "layers"])
at [Source: (sun.nio.ch.ChannelInputStream); line: 1, column: 1015] (through reference chain: java.lang.Object[][0]->com.google.cloud.tools.jib.docker.json.DockerManifestEntryTemplate["LayerSources"])
* Try:
Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Exception is:
org.gradle.api.tasks.TaskExecutionException: Execution failed for task ':jibDockerBuild'.
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$3.accept(ExecuteActionsTaskExecuter.java:166)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$3.accept(ExecuteActionsTaskExecuter.java:163)
at org.gradle.internal.Try$Failure.ifSuccessfulOrElse(Try.java:191)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:156)
at org.gradle.api.internal.tasks.execution.ValidatingTaskExecuter.execute(ValidatingTaskExecuter.java:62)
at org.gradle.api.internal.tasks.execution.SkipEmptySourceFilesTaskExecuter.execute(SkipEmptySourceFilesTaskExecuter.java:108)
at org.gradle.api.internal.tasks.execution.ResolveBeforeExecutionOutputsTaskExecuter.execute(ResolveBeforeExecutionOutputsTaskExecuter.java:67)
at org.gradle.api.internal.tasks.execution.ResolveAfterPreviousExecutionStateTaskExecuter.execute(ResolveAfterPreviousExecutionStateTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.CleanupStaleOutputsExecuter.execute(CleanupStaleOutputsExecuter.java:94)
at org.gradle.api.internal.tasks.execution.FinalizePropertiesTaskExecuter.execute(FinalizePropertiesTaskExecuter.java:46)
at org.gradle.api.internal.tasks.execution.ResolveTaskExecutionModeExecuter.execute(ResolveTaskExecutionModeExecuter.java:95)
at org.gradle.api.internal.tasks.execution.SkipTaskWithNoActionsExecuter.execute(SkipTaskWithNoActionsExecuter.java:57)
at org.gradle.api.internal.tasks.execution.SkipOnlyIfTaskExecuter.execute(SkipOnlyIfTaskExecuter.java:56)
at org.gradle.api.internal.tasks.execution.CatchExceptionTaskExecuter.execute(CatchExceptionTaskExecuter.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.executeTask(EventFiringTaskExecuter.java:77)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:55)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter$1.call(EventFiringTaskExecuter.java:52)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:416)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$CallableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:406)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.call(DefaultBuildOperationExecutor.java:102)
at org.gradle.internal.operations.DelegatingBuildOperationExecutor.call(DelegatingBuildOperationExecutor.java:36)
at org.gradle.api.internal.tasks.execution.EventFiringTaskExecuter.execute(EventFiringTaskExecuter.java:52)
at org.gradle.execution.plan.LocalTaskNodeExecutor.execute(LocalTaskNodeExecutor.java:43)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:355)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$InvokeNodeExecutorsAction.execute(DefaultTaskExecutionGraph.java:343)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:336)
at org.gradle.execution.taskgraph.DefaultTaskExecutionGraph$BuildOperationAwareExecutionAction.execute(DefaultTaskExecutionGraph.java:322)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:134)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker$1.execute(DefaultPlanExecutor.java:129)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.execute(DefaultPlanExecutor.java:202)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.executeNextNode(DefaultPlanExecutor.java:193)
at org.gradle.execution.plan.DefaultPlanExecutor$ExecutorWorker.run(DefaultPlanExecutor.java:129)
at org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:64)
at org.gradle.internal.concurrent.ManagedExecutorImpl$1.run(ManagedExecutorImpl.java:48)
at org.gradle.internal.concurrent.ThreadFactoryImpl$ManagedThreadRunnable.run(ThreadFactoryImpl.java:56)
Caused by: org.gradle.internal.UncheckedException: com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: Unrecognized field "LayerSources" (class com.google.cloud.tools.jib.docker.json.DockerManifestEntryTemplate), not marked as ignorable (3 known properties: "config", "repoTags", "layers"])
at [Source: (sun.nio.ch.ChannelInputStream); line: 1, column: 1015] (through reference chain: java.lang.Object[][0]->com.google.cloud.tools.jib.docker.json.DockerManifestEntryTemplate["LayerSources"])
at org.gradle.internal.UncheckedException.throwAsUncheckedException(UncheckedException.java:67)
at org.gradle.internal.UncheckedException.throwAsUncheckedException(UncheckedException.java:41)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:106)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.doExecute(StandardTaskAction.java:49)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:42)
at org.gradle.api.internal.project.taskfactory.StandardTaskAction.execute(StandardTaskAction.java:28)
at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:717)
at org.gradle.api.internal.AbstractTask$TaskActionWrapper.execute(AbstractTask.java:684)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$5.run(ExecuteActionsTaskExecuter.java:476)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:402)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$RunnableBuildOperationWorker.execute(DefaultBuildOperationExecutor.java:394)
at org.gradle.internal.operations.DefaultBuildOperationExecutor$1.execute(DefaultBuildOperationExecutor.java:165)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:250)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.execute(DefaultBuildOperationExecutor.java:158)
at org.gradle.internal.operations.DefaultBuildOperationExecutor.run(DefaultBuildOperationExecutor.java:92)
at org.gradle.internal.operations.DelegatingBuildOperationExecutor.run(DelegatingBuildOperationExecutor.java:31)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeAction(ExecuteActionsTaskExecuter.java:461)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.executeActions(ExecuteActionsTaskExecuter.java:444)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.access$200(ExecuteActionsTaskExecuter.java:93)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter$TaskExecution.execute(ExecuteActionsTaskExecuter.java:237)
at org.gradle.internal.execution.steps.ExecuteStep.lambda$execute$1(ExecuteStep.java:33)
at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:33)
at org.gradle.internal.execution.steps.ExecuteStep.execute(ExecuteStep.java:26)
at org.gradle.internal.execution.steps.CleanupOutputsStep.execute(CleanupOutputsStep.java:58)
at org.gradle.internal.execution.steps.CleanupOutputsStep.execute(CleanupOutputsStep.java:35)
at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:48)
at org.gradle.internal.execution.steps.ResolveInputChangesStep.execute(ResolveInputChangesStep.java:33)
at org.gradle.internal.execution.steps.CancelExecutionStep.execute(CancelExecutionStep.java:39)
at org.gradle.internal.execution.steps.TimeoutStep.executeWithoutTimeout(TimeoutStep.java:73)
at org.gradle.internal.execution.steps.TimeoutStep.execute(TimeoutStep.java:54)
at org.gradle.internal.execution.steps.CatchExceptionStep.execute(CatchExceptionStep.java:35)
at org.gradle.internal.execution.steps.CreateOutputsStep.execute(CreateOutputsStep.java:51)
at org.gradle.internal.execution.steps.SnapshotOutputsStep.execute(SnapshotOutputsStep.java:45)
at org.gradle.internal.execution.steps.SnapshotOutputsStep.execute(SnapshotOutputsStep.java:31)
at org.gradle.internal.execution.steps.CacheStep.executeWithoutCache(CacheStep.java:208)
at org.gradle.internal.execution.steps.CacheStep.execute(CacheStep.java:70)
at org.gradle.internal.execution.steps.CacheStep.execute(CacheStep.java:45)
at org.gradle.internal.execution.steps.BroadcastChangingOutputsStep.execute(BroadcastChangingOutputsStep.java:49)
at org.gradle.internal.execution.steps.StoreSnapshotsStep.execute(StoreSnapshotsStep.java:43)
at org.gradle.internal.execution.steps.StoreSnapshotsStep.execute(StoreSnapshotsStep.java:32)
at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:38)
at org.gradle.internal.execution.steps.RecordOutputsStep.execute(RecordOutputsStep.java:24)
at org.gradle.internal.execution.steps.SkipUpToDateStep.executeBecause(SkipUpToDateStep.java:96)
at org.gradle.internal.execution.steps.SkipUpToDateStep.lambda$execute$0(SkipUpToDateStep.java:89)
at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:54)
at org.gradle.internal.execution.steps.SkipUpToDateStep.execute(SkipUpToDateStep.java:38)
at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:76)
at org.gradle.internal.execution.steps.ResolveChangesStep.execute(ResolveChangesStep.java:37)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:36)
at org.gradle.internal.execution.steps.legacy.MarkSnapshottingInputsFinishedStep.execute(MarkSnapshottingInputsFinishedStep.java:26)
at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:90)
at org.gradle.internal.execution.steps.ResolveCachingStateStep.execute(ResolveCachingStateStep.java:48)
at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:69)
at org.gradle.internal.execution.steps.CaptureStateBeforeExecutionStep.execute(CaptureStateBeforeExecutionStep.java:47)
at org.gradle.internal.execution.impl.DefaultWorkExecutor.execute(DefaultWorkExecutor.java:33)
at org.gradle.api.internal.tasks.execution.ExecuteActionsTaskExecuter.execute(ExecuteActionsTaskExecuter.java:140)
... 34 more
Caused by: com.google.cloud.tools.jib.plugins.common.BuildStepsExecutionException: Unrecognized field "LayerSources" (class com.google.cloud.tools.jib.docker.json.DockerManifestEntryTemplate), not marked as ignorable (3 known properties: "config", "repoTags", "layers"])
at [Source: (sun.nio.ch.ChannelInputStream); line: 1, column: 1015] (through reference chain: java.lang.Object[][0]->com.google.cloud.tools.jib.docker.json.DockerManifestEntryTemplate["LayerSources"])
at com.google.cloud.tools.jib.plugins.common.JibBuildRunner.runBuild(JibBuildRunner.java:283)
at com.google.cloud.tools.jib.gradle.BuildDockerTask.buildDocker(BuildDockerTask.java:105)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at org.gradle.internal.reflect.JavaMethod.invoke(JavaMethod.java:103)
... 87 more
Caused by: com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException: Unrecognized field "LayerSources" (class com.google.cloud.tools.jib.docker.json.DockerManifestEntryTemplate), not marked as ignorable (3 known properties: "config", "repoTags", "layers"])
at [Source: (sun.nio.ch.ChannelInputStream); line: 1, column: 1015] (through reference chain: java.lang.Object[][0]->com.google.cloud.tools.jib.docker.json.DockerManifestEntryTemplate["LayerSources"])
at com.fasterxml.jackson.databind.exc.UnrecognizedPropertyException.from(UnrecognizedPropertyException.java:61)
at com.fasterxml.jackson.databind.DeserializationContext.handleUnknownProperty(DeserializationContext.java:823)
at com.fasterxml.jackson.databind.deser.std.StdDeserializer.handleUnknownProperty(StdDeserializer.java:1153)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownProperty(BeanDeserializerBase.java:1589)
at com.fasterxml.jackson.databind.deser.BeanDeserializerBase.handleUnknownVanilla(BeanDeserializerBase.java:1567)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.vanillaDeserialize(BeanDeserializer.java:294)
at com.fasterxml.jackson.databind.deser.BeanDeserializer.deserialize(BeanDeserializer.java:151)
at com.fasterxml.jackson.databind.deser.std.ObjectArrayDeserializer.deserialize(ObjectArrayDeserializer.java:195)
at com.fasterxml.jackson.databind.deser.std.ObjectArrayDeserializer.deserialize(ObjectArrayDeserializer.java:21)
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:4014)
at com.fasterxml.jackson.databind.ObjectMapper.readValue(ObjectMapper.java:3071)
at com.google.cloud.tools.jib.builder.steps.LocalBaseImageSteps.cacheDockerImageTar(LocalBaseImageSteps.java:209)
at com.google.cloud.tools.jib.builder.steps.LocalBaseImageSteps.lambda$retrieveDockerDaemonLayersStep$0(LocalBaseImageSteps.java:126)
at com.google.common.util.concurrent.TrustedListenableFutureTask$TrustedFutureInterruptibleTask.runInterruptibly(TrustedListenableFutureTask.java:125)
at com.google.common.util.concurrent.InterruptibleTask.run(InterruptibleTask.java:69)
at com.google.common.util.concurrent.TrustedListenableFutureTask.run(TrustedListenableFutureTask.java:78)
Additional Information:
Please note that I'm specifying the from image with the docker:// prefix, so it's loaded from the Docker daemon cache.
It's also related to #2215, which describes that the Docker daemon cache must be used to make Jib work with Windows Containers (#1568 will allow using Windows Containers in the standard way).
This issue follows #2270, which solved a different problem with Windows Containers.
It looks like the Docker manifest entry JSON has an additional LayerSources field in this case.
I exported the image to a tar file (using docker save), and its manifest.json is this:
[
{
"Config": "68204dc2fc12499d321ef43f00c439bbf2300b280d14737a92841d4b22ddd58c.json",
"RepoTags": [
"openjdk:11.0.1-windowsservercore-ltsc2016"
],
"Layers": [
"37c83953227860836879b642445fc27e45d6fe2bd1769735aaec5c8f6e433a83/layer.tar",
"87e7927c27910829d8243d1da61a7c1b2a0afd2d9b646925547d935702cb6327/layer.tar",
"dbb2f979450bf030f6be414c68f02a6a946abe88ebefc82f7dade9867bf21fcf/layer.tar",
"a61d49e4daf1c8b59e415afc31b46961631d5871958b9d25fce050f4315efabb/layer.tar",
"cab147056182abc9913c3994534f1dcf3e0704656da211f88180ec3980db2c31/layer.tar",
"03e7139f1f6fd8bc270feb9bcf39bbd4d6774d35c56d4fc9b307deebed3a3259/layer.tar",
"389e2857fe8de615f4402cb0edac215243221fa550c9e67ab254e935619d7dc6/layer.tar",
"b95acdc928cdf6a7bfe598072315667a4b8b821853ce32a1c9c05078acca89bb/layer.tar",
"aef1765ebaed263151cbe0407992fef4b691962036c76027a955f03dc2d2ac8e/layer.tar",
"5abb47614e023f81c54e5613a20231421b22abd2a55ddbe30e99300ee18d9e9e/layer.tar",
"3e620caa05a0c46aadd58113e05d950351db6595e2c5e7c3553d97cfb7e69e89/layer.tar"
],
"LayerSources": {
"sha256:f358be10862ccbc329638b9e10b3d497dd7cd28b0e8c7931b4a545c88d7f7cd6": {
"mediaType": "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip",
"size": 4069985900,
"digest": "sha256:3889bb8d808bbae6fa5a33e07093e65c31371bcf9e4c38c21be6b9af52ad1548",
"urls": [
"https://go.microsoft.com/fwlink/?linkid=837859"
]
},
"sha256:ffce47ae4ffd0b88677730d9949223ae0f4de9c7b14fd7f23112a1724381dac8": {
"mediaType": "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip",
"size": 1565830172,
"digest": "sha256:d0c71fc8924e632b81de72fba055610c4a5259b2f6723e15f70662f7bc328184",
"urls": [
"https://go.microsoft.com/fwlink/?linkid=2057011"
]
}
}
}
]
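In the meantime, a fix on the parsing side would just mean tolerating unknown fields during deserialization (in Jib's Java code, likely Jackson's @JsonIgnoreProperties(ignoreUnknown = true) on DockerManifestEntryTemplate). A minimal Python sketch of the idea, with abbreviated sample data (the helper name is illustrative, not Jib's API):

```python
import json

# Hedged sketch, not Jib's actual code: keep only the manifest keys the
# parser knows about and silently drop the rest (such as "LayerSources"),
# the way Jackson's @JsonIgnoreProperties(ignoreUnknown = true) would.
KNOWN_FIELDS = {"Config", "RepoTags", "Layers"}

def parse_manifest_entry(entry):
    # Drop unknown fields instead of failing on them.
    return {k: v for k, v in entry.items() if k in KNOWN_FIELDS}

# Abbreviated version of the manifest.json shown above.
manifest_json = """
[
  {
    "Config": "68204dc2.json",
    "RepoTags": ["openjdk:11.0.1-windowsservercore-ltsc2016"],
    "Layers": ["37c83953/layer.tar"],
    "LayerSources": {"sha256:f358be10": {"mediaType": "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"}}
  }
]
"""

entries = [parse_manifest_entry(e) for e in json.loads(manifest_json)]
print(sorted(entries[0]))
```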
Btw. the docker save takes ~4 minutes to finish, and the resulting tar file is ~11 GB.
So I guess that adding support for these fields in the parser could move us further.
LayerSources seems to be specific to exporting Windows containers (https://github.com/moby/moby/pull/22866). There were also changes to google/containerregistry to support this, though it doesn't seem to be documented officially anywhere (https://github.com/google/containerregistry/blob/da03b395ccdc4e149e34fbb540483efce962dc64/client/v2_2/save_.py#L115-L121)
Would it make sense to create a design doc to suss out what's required to support Windows containers?
I'm also unable to find any official documentation, just source code :)
I'm not sure if a design doc is necessary - that's up to you. For me, #1568 seems to be the main blocker. Maybe this issue could even be solved as part of #1568.
It feels like we're playing whack-a-mole. What do these layer descriptors mean, and how should Jib propagate them when pushing an image to a remote registry? What other changes were required in Docker and elsewhere?
So I guess that support for these fields in the parser could move us further.
I think we need to check a few things first. It seems possible that Docker entirely avoids downloading these "foreign" layers, probably for legal reasons. (That is, these layers might not exist in the local Docker daemon cache.) If that's the case, it's possible that the tar archive from docker save lacks these foreign layers too (although it might attempt to download them on the fly when doing docker save). And on top of what @briandealwis said, it's possible that Docker avoids pushing these foreign layers to remote registries. If that is the case, we should do the same.
My experience is that Docker downloads these foreign layers - it needs them to start a container based on that image. If you try to push the pulled image, then the foreign layers are skipped.
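The behavior described above can be sketched as a filter on layer media types. This is a hypothetical helper, not Jib's API; the foreign media type is the one from the manifest shown earlier:

```python
# Hedged sketch of the push-time decision being discussed: "foreign"
# (non-distributable) layers are identified by their media type and are
# skipped when pushing to a remote registry, unless explicitly allowed.
FOREIGN_MEDIA_TYPES = {
    "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip",
}

def layers_to_push(layer_descriptors, push_foreign_layers=False):
    """Return the descriptors whose blobs should actually be uploaded."""
    if push_foreign_layers:
        return list(layer_descriptors)
    return [d for d in layer_descriptors
            if d["mediaType"] not in FOREIGN_MEDIA_TYPES]

# Illustrative descriptors: one ordinary layer, one foreign Windows layer.
layers = [
    {"mediaType": "application/vnd.docker.image.rootfs.diff.tar.gzip",
     "digest": "sha256:aaa"},
    {"mediaType": "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip",
     "digest": "sha256:bbb",
     "urls": ["https://go.microsoft.com/fwlink/?linkid=837859"]},
]
print([d["digest"] for d in layers_to_push(layers)])
```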
Btw. you can set allow-nondistributable-artifacts in your daemon.json to add your local registry. It allows pushing these foreign layers to the configured registry. This is used e.g. in an air-gapped scenario.
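For reference, that daemon.json setting looks like the following (the registry address is a placeholder):

```json
{
  "allow-nondistributable-artifacts": ["myregistry.example.com:5000"]
}
```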
@briandealwis I would expect that Jib would also read the configuration from daemon.json, and either push the foreign layers as a link or as binary.
it needs them to start a container based on that image.
Ah, of course. I guess downloading and storing foreign layers is OK. I think we just don't push foreign layers to remote registries (or export to a tar) by default; according to what you said, this seems like the default behavior. Then we could have a Jib config option to allow pushing foreign layers. We will need to generate different manifests in these two cases.
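A rough sketch of the two manifest variants: when foreign layers are not uploaded, their descriptors keep the foreign media type plus the urls pointing at Microsoft's servers; when they are uploaded (the air-gapped case), the layer becomes an ordinary one. How exactly the descriptor should be rewritten is an assumption based on Docker's handling of non-distributable layers, not Jib's actual behavior:

```python
# Hedged sketch of "different manifests in the two cases". The rewrite
# rules below are assumptions, not a confirmed registry-spec behavior.
FOREIGN = "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip"
ORDINARY = "application/vnd.docker.image.rootfs.diff.tar.gzip"

def manifest_layer_entry(descriptor, push_foreign_layers):
    entry = {"mediaType": descriptor["mediaType"],
             "size": descriptor["size"],
             "digest": descriptor["digest"]}
    if descriptor["mediaType"] == FOREIGN:
        if push_foreign_layers:
            entry["mediaType"] = ORDINARY      # blob is uploaded normally
        else:
            entry["urls"] = descriptor["urls"]  # reference-only layer
    return entry

# One of the foreign layers from the manifest above.
foreign_layer = {
    "mediaType": FOREIGN,
    "size": 1565830172,
    "digest": "sha256:d0c71fc8924e632b81de72fba055610c4a5259b2f6723e15f70662f7bc328184",
    "urls": ["https://go.microsoft.com/fwlink/?linkid=2057011"],
}

print(manifest_layer_entry(foreign_layer, push_foreign_layers=False))
```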
This does seem like a lot of work. Not sure if we will prioritize this, given that this is mostly an issue when trying to build Windows images. But if we can confirm in the current state that Windows images built by Jib more or less work, then I can imagine this will suddenly become low-hanging fruit and get prioritized.
I will try to think of a way to easily check this. It doesn't seem easy, though.
Then we could have a Jib config option to allow pushing foreign layers.
Exactly! It would be great to read the default value of this option from daemon.json, but it would be OK for me to start with no-foreign-layers-push behavior (and implement the pushing later).
Thanks for helping us. I need to ask a few things.
- Can you upload the contents of the following JSON file? I'm just interested in the rootfs field. (You can drag-and-drop a file to upload it to a GitHub comment.)
  "Config": "68204dc2fc12499d321ef43f00c439bbf2300b280d14737a92841d4b22ddd58c.json",
- For the following layers, you'll at least see these 11 directories (and the layer.tar file in them).
  "Layers": [
    "37c83953227860836879b642445fc27e45d6fe2bd1769735aaec5c8f6e433a83/layer.tar",
    "87e7927c27910829d8243d1da61a7c1b2a0afd2d9b646925547d935702cb6327/layer.tar",
    "dbb2f979450bf030f6be414c68f02a6a946abe88ebefc82f7dade9867bf21fcf/layer.tar",
    "a61d49e4daf1c8b59e415afc31b46961631d5871958b9d25fce050f4315efabb/layer.tar",
    "cab147056182abc9913c3994534f1dcf3e0704656da211f88180ec3980db2c31/layer.tar",
    "03e7139f1f6fd8bc270feb9bcf39bbd4d6774d35c56d4fc9b307deebed3a3259/layer.tar",
    "389e2857fe8de615f4402cb0edac215243221fa550c9e67ab254e935619d7dc6/layer.tar",
    "b95acdc928cdf6a7bfe598072315667a4b8b821853ce32a1c9c05078acca89bb/layer.tar",
    "aef1765ebaed263151cbe0407992fef4b691962036c76027a955f03dc2d2ac8e/layer.tar",
    "5abb47614e023f81c54e5613a20231421b22abd2a55ddbe30e99300ee18d9e9e/layer.tar",
    "3e620caa05a0c46aadd58113e05d950351db6595e2c5e7c3553d97cfb7e69e89/layer.tar"
  ],
  Do you have other directories (or files) for the following two layers? Where are they?
  "LayerSources": {
    "sha256:f358be10862ccbc329638b9e10b3d497dd7cd28b0e8c7931b4a545c88d7f7cd6": {
      "mediaType": "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip",
      "size": 4069985900,
      "digest": "sha256:3889bb8d808bbae6fa5a33e07093e65c31371bcf9e4c38c21be6b9af52ad1548",
      "urls": ["https://go.microsoft.com/fwlink/?linkid=837859"]
    },
    "sha256:ffce47ae4ffd0b88677730d9949223ae0f4de9c7b14fd7f23112a1724381dac8": {
      "mediaType": "application/vnd.docker.image.rootfs.foreign.diff.tar.gzip",
      "size": 1565830172,
      "digest": "sha256:d0c71fc8924e632b81de72fba055610c4a5259b2f6723e15f70662f7bc328184",
      "urls": ["https://go.microsoft.com/fwlink/?linkid=2057011"]
    }
  }
For example, with an ordinary image with 4 layers, I have the following structure. Is there anything special about those two foreign layers?
$ tree
.
├── 133a053ab717365e639c4bf51286123f6bb8b7f1c4aba77a703fd7418aceec48
│   ├── json
│   ├── layer.tar
│   └── VERSION
├── 39839d22db123ec4483dd5e2a7837c52598c3f245a9fdef0723e64a5b10d47ec
│   ├── json
│   ├── layer.tar
│   └── VERSION
├── bd59bf76c677d02b426c1eac67f4cefcd0c5c707fa18f6abc4db41070d77e63a
│   ├── json
│   ├── layer.tar
│   └── VERSION
├── ccc6e87d482b79dd1645affd958479139486e47191dfe7a997c862d89cd8b4c0.json
├── f78587b8146af552a4b391558c48c72ebdda69746b2b82896989f163f3c5046b
│   ├── json
│   ├── layer.tar
│   └── VERSION
├── manifest.json
└── repositories
Never mind. I managed to install Docker on Windows on GCE and checked these myself. For testing purposes, I think we can just ignore LayerSources and keep moving forward. These layers are a subset of Layers. The exported tar has all the layer directories and tarballs.
Yes, ignoring LayerSources could move us further toward making it work.
But please note the comment for application/vnd.docker.image.rootfs.foreign.diff.tar.gzip in the specification: "Layer", as a gzipped tar that should never be pushed.
These foreign layers are also called non-distributable, and they shouldn't be pushed; in the case of Windows base images, the reason is legal. So maybe it would be safer to parse LayerSources correctly and not push these layers. And for air-gapped use cases, provide an option to allow pushing.
I tested building Windows images after ignoring LayerSources. Unfortunately, Jib doesn't seem to build correct Windows images. I was getting a bizarre error with both jib:dockerBuild and jib:buildTar. Someone on the Internet said it could be due to a corrupt image.
So I went further and hacked Jib so that it only generates base image layers and the "extra-directories" layer. That is, I manually excluded the classes, resources, and dependencies layers: all the application layers except the extra-directories layer. Then I did some experiments:
- Experiment 1: nothing under src/main/jib (i.e., an empty src/main/jib directory). That is, there's no extra-directories layer. In this case, the final image is just the base image with a new container configuration (such as a new Entrypoint set by Jib). mvn jib:dockerBuild can successfully push the image to local Docker. The pushed image does work; I can do docker run --rm -it --entrypoint cmd jib-image.
- Experiment 2: with an empty (0-byte) file at src/main/jib/empty.file. This creates a single extra-directories layer. jib:dockerBuild fails with the following error:
  'docker load' command failed with error: re-exec error: exit status 1: output: time="2020-02-12T21:25:07Z" level=error msg="hcsshim::ImportLayer - failed failed in Win32: The system cannot find the path specified. (0x3)" error="hcsshim::ImportLayer - failed failed in Win32: The system cannot find the path specified. (0x3)" importFolderPath="C:\\ProgramData\\docker\\tmp\\hcs188938223" path="\\\\?\\C:\\ProgramData\\docker\\windowsfilter\\b329e16827754ac442244b7b1403e7c74b7d969fca3fdb33a7c1863d5526f48d"
  [ERROR] hcsshim::ImportLayer - failed failed in Win32: The system cannot find the path specified. (0x3)
  Removing the empty file makes jib:dockerBuild succeed again, as expected.
Looks like it's not that easy to support building Windows images. It's possible that the way Jib assembles layer tarballs is incompatible on Windows.
I've seen that Windows image layer tarballs have PAX extension headers to store some file metadata.
tar: Ignoring unknown extended header keyword 'LIBARCHIVE.creationtime'
tar: Ignoring unknown extended header keyword 'MSWINDOWS.fileattr'
d--------- 0/0 0 2020-01-15 01:37 Files
For example, this is documented in the OCI image spec, and there is a helpful comment about it. However, I don't fully understand them and have no idea how exactly the extension headers should look.
I've found some related code in BuildKit, but it doesn't seem trivial: https://github.com/moby/buildkit/blob/b5fb8c4428df714b8d5c6c84cac24cd54b769adf/util/winlayers/differ.go#L26-L28 https://github.com/moby/buildkit/blob/b5fb8c4428df714b8d5c6c84cac24cd54b769adf/util/winlayers/differ.go#L194-L204 https://github.com/moby/buildkit/blob/b5fb8c4428df714b8d5c6c84cac24cd54b769adf/util/winlayers/differ.go#L173-L192
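For what it's worth, per-entry PAX records like the ones in the logs above can be written with any pax-capable tar writer. A Python sketch using the stdlib tarfile module follows; the keyword names are the ones observed above, but the values are made up for illustration, and their real semantics are whatever the linked BuildKit code implements:

```python
import io
import tarfile

# Write a tiny pax-format archive with one directory entry ("Files", where
# Windows layer contents live) carrying custom PAX extended-header records.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w", format=tarfile.PAX_FORMAT) as tar:
    info = tarfile.TarInfo("Files")
    info.type = tarfile.DIRTYPE
    # Keyword names from the logs above; values are illustrative assumptions.
    info.pax_headers = {
        "MSWINDOWS.fileattr": "16",
        "LIBARCHIVE.creationtime": "1579052220",
    }
    tar.addfile(info)

# Reading the archive back shows the per-entry records survive the round trip.
buf.seek(0)
with tarfile.open(fileobj=buf) as tar:
    member = tar.getmembers()[0]
    print(member.pax_headers)
```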
@augi unfortunately, it seems like a long way to have Windows container support in Jib. Most likely we won't be able to look into this in the near future.
Thank you for investigating this. I've also tried to fix the LayerSources problem locally, but unfortunately ended up with the same issue as you :(
@chanseokoh Hi, I have been struggling with this issue recently (as soon as I add one layer, the hcsshim::ImportLayer error happens).
I checked the source code of containerd and hcsshim, but found nothing special. So I have to strongly suspect that the image generated by jib-core has a problem. I just want to ask if you have resolved this issue? Thanks.
The hard conclusion is that Jib doesn't support building Windows images. It just doesn't work, for many, many reasons.
Thanks for your info; I agree, the facts make the conclusion clear. Meanwhile, I have to say that Windows seems to have been forgotten and sidelined by container technology.