sdk-container-builds
Constantly getting 404 errors for base images that exist and can be pulled by the Docker CLI
I am currently trying to use a custom base image for one of my containers, but for some reason I keep getting 404 errors from the "CreateNewImage" task provided by container builds.
Here is the exception that I get when I attempt to pull a private image from ghcr.io:
/Users/kirk/.nuget/packages/microsoft.net.build.containers/0.3.2/build/Microsoft.NET.Build.Containers.targets(114,9): error MSB4018: The "CreateNewImage" task failed unexpectedly. [/Users/kirk/repos/jatango/Jatango/src/Infinity.Host/Infinity.Host.csproj]
/Users/kirk/.nuget/packages/microsoft.net.build.containers/0.3.2/build/Microsoft.NET.Build.Containers.targets(114,9): error MSB4018: System.AggregateException: One or more errors occurred. (Response status code does not indicate success: 404 (Not Found).) [/Users/kirk/repos/jatango/Jatango/src/Infinity.Host/Infinity.Host.csproj]
/Users/kirk/.nuget/packages/microsoft.net.build.containers/0.3.2/build/Microsoft.NET.Build.Containers.targets(114,9): error MSB4018: ---> System.Net.Http.HttpRequestException: Response status code does not indicate success: 404 (Not Found). [/Users/kirk/repos/jatango/Jatango/src/Infinity.Host/Infinity.Host.csproj]
/Users/kirk/.nuget/packages/microsoft.net.build.containers/0.3.2/build/Microsoft.NET.Build.Containers.targets(114,9): error MSB4018: at System.Net.Http.HttpResponseMessage.EnsureSuccessStatusCode() [/Users/kirk/repos/jatango/Jatango/src/Infinity.Host/Infinity.Host.csproj]
/Users/kirk/.nuget/packages/microsoft.net.build.containers/0.3.2/build/Microsoft.NET.Build.Containers.targets(114,9): error MSB4018: at Microsoft.NET.Build.Containers.Registry.GetManifest(String repositoryName, String reference) in D:\a\_work\1\s\Microsoft.NET.Build.Containers\Registry.cs:line 136 [/Users/kirk/repos/jatango/Jatango/src/Infinity.Host/Infinity.Host.csproj]
/Users/kirk/.nuget/packages/microsoft.net.build.containers/0.3.2/build/Microsoft.NET.Build.Containers.targets(114,9): error MSB4018: at Microsoft.NET.Build.Containers.Registry.GetImageManifest(String repositoryName, String reference, String runtimeIdentifier, String runtimeIdentifierGraphPath) in D:\a\_work\1\s\Microsoft.NET.Build.Containers\Registry.cs:line 100 [/Users/kirk/repos/jatango/Jatango/src/Infinity.Host/Infinity.Host.csproj]
/Users/kirk/.nuget/packages/microsoft.net.build.containers/0.3.2/build/Microsoft.NET.Build.Containers.targets(114,9): error MSB4018: --- End of inner exception stack trace --- [/Users/kirk/repos/jatango/Jatango/src/Infinity.Host/Infinity.Host.csproj]
/Users/kirk/.nuget/packages/microsoft.net.build.containers/0.3.2/build/Microsoft.NET.Build.Containers.targets(114,9): error MSB4018: at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions) [/Users/kirk/repos/jatango/Jatango/src/Infinity.Host/Infinity.Host.csproj]
/Users/kirk/.nuget/packages/microsoft.net.build.containers/0.3.2/build/Microsoft.NET.Build.Containers.targets(114,9): error MSB4018: at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification) [/Users/kirk/repos/jatango/Jatango/src/Infinity.Host/Infinity.Host.csproj]
/Users/kirk/.nuget/packages/microsoft.net.build.containers/0.3.2/build/Microsoft.NET.Build.Containers.targets(114,9): error MSB4018: at Microsoft.NET.Build.Containers.Tasks.CreateNewImage.GetBaseImage() in D:\a\_work\1\s\Microsoft.NET.Build.Containers\CreateNewImage.cs:line 81 [/Users/kirk/repos/jatango/Jatango/src/Infinity.Host/Infinity.Host.csproj]
/Users/kirk/.nuget/packages/microsoft.net.build.containers/0.3.2/build/Microsoft.NET.Build.Containers.targets(114,9): error MSB4018: at Microsoft.NET.Build.Containers.Tasks.CreateNewImage.Execute() in D:\a\_work\1\s\Microsoft.NET.Build.Containers\CreateNewImage.cs:line 97 [/Users/kirk/repos/jatango/Jatango/src/Infinity.Host/Infinity.Host.csproj]
/Users/kirk/.nuget/packages/microsoft.net.build.containers/0.3.2/build/Microsoft.NET.Build.Containers.targets(114,9): error MSB4018: at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() [/Users/kirk/repos/jatango/Jatango/src/Infinity.Host/Infinity.Host.csproj]
/Users/kirk/.nuget/packages/microsoft.net.build.containers/0.3.2/build/Microsoft.NET.Build.Containers.targets(114,9): error MSB4018: at Microsoft.Build.BackEnd.TaskBuilder.ExecuteInstantiatedTask(ITaskExecutionHost taskExecutionHost, TaskLoggingContext taskLoggingContext, TaskHost taskHost, ItemBucket bucket, TaskExecutionMode howToExecuteTask) [/Users/kirk/repos/jatango/Jatango/src/Infinity.Host/Infinity.Host.csproj]
This is the .csproj config file for that specific project:
<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>net6.0</TargetFramework>
    <AssemblyName>Infinity.Host</AssemblyName>
    <RootNamespace>Infinity.Host</RootNamespace>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <ManagePackageVersionsCentrally>true</ManagePackageVersionsCentrally>
    <ContainerImageName>infinity</ContainerImageName>
    <ContainerImageTag>latest</ContainerImageTag>
    <ContainerBaseImage>ghcr.io/jatango/infinity-base:latest</ContainerBaseImage>
    <Configurations>Debug;Release;Publish</Configurations>
    <Platforms>AnyCPU</Platforms>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.NET.Build.Containers" />
  </ItemGroup>
</Project>
This issue also occurs when pulling base images from other registries, including Docker Hub (docker.io). I see no reason why this should fail: the image reference is in the correct format and a tag is present.
To verify that I am correctly authenticated, I ran docker pull ghcr.io/jatango/infinity-base:latest after logging into ghcr.io and everything worked perfectly, but when I re-ran dotnet publish, it continued to error out.
Version: 0.3.2
Platform: macOS 13.2 (arm64)
.NET SDK: 7.0.100
After further investigation, it appears that this stems from the way GitHub handles multi-arch builds. When publishing just a linux/amd64 or linux/arm64 image, everything works perfectly, but once additional architectures are added it begins to fail.
Recently I noticed that a new unknown/unknown architecture has been appearing (this may be a bug), which is when this issue started occurring. We need to make sure that if unknown/unknown is present, we default to either linux/amd64 or linux/arm64, depending on the publish profile and machine architecture, along the lines of the hypothetical sketch below.
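To illustrate what I mean, here is a hypothetical sketch (the class name and the RID-to-platform table are my assumptions, not the task's actual code) of the kind of mapping the task would need:

using System;

// Hypothetical sketch only: one way to map the publish RuntimeIdentifier to
// the os/architecture pair used to pick an entry from a multi-arch manifest
// list. This RID table is an assumption, not the actual
// Microsoft.NET.Build.Containers mapping.
public static class RidPlatform
{
    public static (string Os, string Architecture) ForRid(string rid) => rid switch
    {
        "linux-x64"   => ("linux", "amd64"),
        "linux-arm64" => ("linux", "arm64"),
        "win-x64"     => ("windows", "amd64"),
        _             => throw new ArgumentException($"No platform mapping for RID '{rid}'", nameof(rid))
    };
}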
Wow, that's not what I was expecting to see at all - do you know of any public-ish packages on ghcr.io that behave in this way? It would be good to have examples to test against.
Here is my build script. Unfortunately I cannot publish the private image, but I might be able to push a public one for you to try.
docker buildx build --push --platform linux/arm64,linux/amd64 -f ./src/Infinity.Host/Base.Dockerfile -t ghcr.io/jatango/infinity-base:latest ./src/Infinity.Host
If you just run the normal docker build and push manually with a single arch, everything works as expected.
I think we could use the dotnet/runtime and/or dotnet/aspnet images from https://github.com/dotnet/dotnet-docker to reproduce the same scenario with fully-OSS packages. If that sounds good, I'll try building and pushing those to this repo's ghcr package repository as persistent test artifacts for the future.
Yeah, might as well try; I want to see what happens when you use docker buildx to build and push those. I've tried it on darwin/arm64, darwin/amd64, and linux/amd64, with the same issue every time. Pushing a single image by itself is fine, and pushing different tags for different archs is fine, but only a multi-arch build pushed all at once under the same tag produces that unknown/unknown target. The Docker CLI itself is fine (it resolves the correct architecture), but for some reason the same manifest does not work in container builds.
It appears that this is related to image Attestation storage: https://docs.docker.com/build/attestations/attestation-storage/
@baronfel Ok, here are my conclusions based on what I have found:
The Issue with Attestations
Apparently, docker buildx build appends additional entries, called attestations, to the image's manifest list. These take the form of extra manifest digests (one per arch), each carrying an annotation (vnd.docker.reference.digest) that points back to the original arch's digest, plus an annotation marking the entry as an attestation (vnd.docker.reference.type).
Example from the Docker Docs:
{
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "schemaVersion": 2,
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:23678f31b3b3586c4fb318aecfe64a96a1f0916ba8faf9b2be2abee63fa9e827",
      "size": 1234,
      "platform": {
        "architecture": "amd64",
        "os": "linux"
      }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:02cb9aa7600e73fcf41ee9f0f19cc03122b2d8be43d41ce4b21335118f5dd943",
      "size": 1234,
      "annotations": {
        "vnd.docker.reference.digest": "sha256:23678f31b3b3586c4fb318aecfe64a96a1f0916ba8faf9b2be2abee63fa9e827",
        "vnd.docker.reference.type": "attestation-manifest"
      },
      "platform": {
        "architecture": "unknown",
        "os": "unknown"
      }
    }
  ]
}
We can see here that there are two different manifests: one for the architecture and one for the attestation. The attestation references the original image via the digest annotation and declares its type as attestation-manifest.
I believe GitHub Container Registry has an issue with this and incorrectly lists these attestations under the unknown/unknown arch, since that is what they report as their platform. Hence we see three entries for a two-arch package: linux/arm64, linux/amd64, and unknown/unknown. This appears to be a GitHub bug, so we don't need to worry about it beyond the fact that it's confusing and, helpfully, hints that something is off.
The Docker CLI must be able to navigate these attestations and determine the image digest that needs to be pulled for a given arch. It appears that container builds cannot yet do this kind of navigation within the manifest list; see the sketch just below.
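To make that concrete, here is a rough C# sketch of how such an entry could be recognized and skipped. The record shapes are modeled on the OCI image-index JSON above and are my own, not the actual Microsoft.NET.Build.Containers types:

using System.Collections.Generic;

// Sketch only: record shapes modeled on the OCI image index JSON above.
public sealed record Platform(string? Architecture, string? Os);
public sealed record ManifestListEntry(
    string MediaType,
    string Digest,
    Platform? Platform,
    IReadOnlyDictionary<string, string>? Annotations);

public static class AttestationFilter
{
    // An entry is a Buildx attestation if it carries the reference-type
    // annotation, or if it reports the placeholder unknown/unknown platform.
    public static bool IsAttestation(ManifestListEntry entry) =>
        (entry.Annotations?.ContainsKey("vnd.docker.reference.type") ?? false)
        || entry.Platform is { Architecture: "unknown", Os: "unknown" };
}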
Temporary Solution
It appears that the temporary fix for this problem is simply to tell docker buildx build not to produce any attestations. There are two types, provenance and sbom, both of which must be disabled for this to work properly.
Here is the updated command to fix this:
docker buildx build -f <DOCKERFILE> -t <TAGS> --platform <PLATFORMS> --push --provenance=false --sbom=false <CONTEXT>
This tells Docker Buildx not to generate any attestations, so none are uploaded to the registry with the image, which allows container builds to pull the base image successfully.
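If you want to confirm the attestations are actually gone after rebuilding, inspecting the pushed tag should now list only the real platforms (substitute your own image reference):

docker buildx imagetools inspect ghcr.io/jatango/infinity-base:latest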
Actual Solution
To fix this properly, we need to add support for navigating these manifest lists with the attestations included; a rough sketch follows. I'll take a look at the source code to see if I can help, but I am a bit busy right now. A PR for this would be super appreciated, as it should fix this for anyone who uses Docker Buildx together with container builds.
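As a starting point, here is a minimal sketch of what that navigation could look like, reusing the ManifestListEntry record and IsAttestation predicate from my earlier snippet (again, illustrative names, not a drop-in patch):

using System.Collections.Generic;
using System.Linq;

public static class ManifestResolver
{
    // Sketch: resolve the manifest entry for the requested platform while
    // ignoring any attestation entries mixed into the manifest list.
    // Returns null when no matching platform exists.
    public static ManifestListEntry? ResolveForPlatform(
        IEnumerable<ManifestListEntry> manifests,
        string os,
        string architecture) =>
        manifests
            .Where(m => !AttestationFilter.IsAttestation(m))
            .FirstOrDefault(m =>
                m.Platform is { } p && p.Os == os && p.Architecture == architecture);
}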