Formalize support for zstd compression: v1.1.0?
While reviewing https://github.com/moby/moby/pull/40820, I noticed that support for zstd was merged in master (proposal: https://github.com/opencontainers/image-spec/issues/787, implementation in https://github.com/opencontainers/image-spec/pull/788 and https://github.com/opencontainers/image-spec/pull/790), and some runtimes have started implementing this:
- containerd: https://github.com/containerd/containerd/pull/3649
- containers/image: https://github.com/containers/image/pull/563
- (in progress) docker / moby: https://github.com/moby/moby/pull/40820 (currently for "extracting" only)
However, the current (v1.0.1) image-spec does not yet list zstd as a supported compression, which means that not all runtimes may support these images, and the ones that do are relying on a non-finalized specification, which limits interoperability (something that I think this specification was created for in the first place).
I think the current status is not desirable; not only does it limit interoperability (as mentioned), it will also cause complications for Go projects using this specification as a dependency; go modules will default to the latest tagged release, and some distributions (thinking of Debian) are quite strict about the use of unreleased versions. Go projects that want to support zstd would either have to "force" go mod to use a non-released version of the specification, or work around the issue by using a custom implementation (similar to the approach that containerd took: https://github.com/containerd/containerd/pull/3649).
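For illustration, pinning an untagged commit means putting a pseudo-version in go.mod, along these lines (the module path is real; the timestamp and commit hash below are placeholders):

require github.com/opencontainers/image-spec v1.0.2-0.20200410000000-aaaaaaaaaaaa // placeholder pseudo-version for an untagged commit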
In addition to the above, concerns were raised about the growing list of media-types (https://github.com/opencontainers/image-spec/issues/791), and suggestions were made to make this list more flexible.
The Image Manifest Property Descriptions currently state:
Implementations MUST support at least the following media types:
- application/vnd.oci.image.layer.v1.tar
- application/vnd.oci.image.layer.v1.tar+gzip
- application/vnd.oci.image.layer.nondistributable.v1.tar
- application/vnd.oci.image.layer.nondistributable.v1.tar+gzip
Followed by:
...An encountered mediaType that is unknown to the implementation MUST be ignored.
This part is a bit ambiguous (perhaps that's just my interpretation of it though);
- Should an implementation pull a manifest, and skip (ignore) layers with unknown compression, or should it produce an error?
- If the +zstd layer mediatype is not in the MUST list, is there any reason for including it in the list of OCI Media Types? After all, any media types not included in the list "could" be supported by an implementation, and must otherwise be ignored.
What's the way forward with this?
1. Tag current master as v1.1.0, only defining +zstd as a possible compression format for layers, but with no requirement for implementations of the v1.1.0 specification to support them.
2. Add the +zstd compression format to the list of required media types, and tag v1.1.0; projects implementing v1.1.0 of the specification MUST support zstd layers, or otherwise implement v1.0.x.
3. Wait for the discussion about "generic" layer types (https://github.com/opencontainers/image-spec/issues/791, https://github.com/opencontainers/image-spec/issues/799) to be completed before tagging v1.1.0.
4. Do a v1.1.0 release (1. or 2.), and leave 3. for a future (v1.2.0) release of the specification.
On a side-note, I noticed that the vnd.oci.image.manifest.v1+json was registered, but other mediatypes, including media-types for image layers are not; should they be?
@jonjohnsonjr @vbatts @mikebrow @dmcgowan @SteveLasker ptal
(not sure if this is the right location for this discussion, or if it should be discussed in the OCI call; I just noticed this, so thought I'd write it down 😬 😅)
Should an implementation pull a manifest, and skip (ignore) layers with unknown compression, or should it produce an error?
I had similar issues interpreting "ignore". The containers/image library errored out for a couple of weeks last year, which blew up for @tych0. Now, it allows for pulling and storing the images.
In case of a call, I will do my best to join.
I must admit I'm not the most proficient reader of specifications, but good to hear I'm not the only person that was a bit confused by it 😅 (which may warrant expanding that passage a bit to clarify the intent).
I guess "ignoring" will lead to an "error" in any case, because skipping "unknown media types" should likely lead to a failure to calculate the digest 🤔. Still, having some more words to explain would be useful.
Thanks, @thaJeztah! I also felt some relief :smile:
@tych0, could you elaborate a bit on your use case? I don't want to break you a second time :angel:
I'm not sure (3) solves the underlying problem here. That defines a way for understanding the media type, but it doesn't necessarily mean that clients can handle all possible permutations of a media type. The main issue is that if clients start pushing up images with zstd compression, older (most existing today) clients will not be able to use them. With that in mind, making it a requirement and releasing 1.1 with this change at least makes that problem more explicit and the solution more clear. Any client which supports OCI image 1.1 can work with zstd; older clients might not. I am not sure the generic layer types idea is really a specification change as much as a tooling change; it may allow the image spec at that point to support more options. The media types supported here should always be explicit though, imo.
@tych0, could you elaborate a bit on your use case?
Sure, I'm putting squashfs files in OCI images instead of gzipped tarballs, so I can direct mount them instead of having to extract them first. The "MUST ignore" part of the standard lets me do this, because tools like skopeo happily copy around OCI images with underlying blob types they can't decode.
If we suddenly change the standard to not allow unknown blob types in images and allow tools to reject them, use cases like this will no longer be possible.
Indeed, the standard does not need to change for docker to generate valid OCI images with zstd compression. The hard work goes into the tooling on the other end, but presumably docker has already done that.
It might be worth adding a few additional known blob types to the spec here: https://github.com/opencontainers/image-spec/blob/master/media-types.md#oci-image-media-types but otherwise I don't generally understand the goals of this thread.
If we suddenly change the standard to not allow unknown blob types in images and allow tools to reject them, use cases like this will no longer be possible.
I think in case of Skopeo, Skopeo itself is not consuming the image, and is used as a tool to pull those images; I think that's more the "distribution spec" than the "image spec"?
I think a runtime that does not support a specific type of layer should be able to reject that layer, and not accept "any" media-type. What use would there be for a runtime to pull an image with (say) image/jpeg as layer-type; should it pull that image and try to run it?
For such cases, I think it'd make more sense to reject the image (/layer).
I think in case of Skopeo, Skopeo itself is not consuming the image, and is used as a tool to pull those images; I think that's more the "distribution spec" than the "image spec"?
No; the distribution spec is for repos serving content over http. skopeo translates to/from OCI images according to the OCI images spec.
I think a runtime that does not support a specific type of layer should be able to reject that layer, and not accept "any" media-type. What use would there be for a runtime to pull an image with (say) image/jpeg as layer-type; should it pull that image and try to run it?
If someone asks you to run something you can't run, I agree an error is warranted. But in the case of skopeo, it is a tool that is perfectly capable of handling layers with mime types it doesn't understand, and I think similar tools should not error out either.
No; the distribution spec is for repos serving content over http. skopeo translates to/from OCI images according to the OCI images spec.
Yeah, poor choice of words; was trying to put in words that Skopeo itself is not the end-consumer of the image (hope I'm making sense).
But in the case of skopeo, it is a tool that is perfectly capable of handling layers with mime types it doesn't understand, and I think similar tools should not error out either.
The confusion in the words picked in the specs is about "mime types it doesn't understand". What makes a tool compliant with the image-spec? Should it be able to parse the manifest, or also be able to process the layers? Is curl | jq compliant?
While I understand the advantage of having some flexibility, if the spec does not dictate anything there, how can I know if an image would work with some tool implementing image-spec "X"?
Currently it MUST ignore things it doesn't understand, which (my interpretation) says that (e.g.) any project implementing the spec MUST allow said image with an image/jpeg layer. On the other hand, it also should be able to extract an OCI Image into an OCI Runtime bundle. In your use-case, the combination of Skopeo and other tools facilitates this (Skopeo being the intermediary).
For Skopeo's case, even though the mediaType is "unknown to the implementation", Skopeo is able to "handle" / "process" the layer (within the scope it's designed for), so perhaps "unknown" should be changed to something else; e.g. implementations should / must produce an error if they're not able to "handle" / "process" a layer-type.
e.g. implementations should / must produce an error if they're not able to "handle" / "process" a layer-type.
That seems like a reasonable clarification to me!
@thaJeztah
Regarding the ambiguity of the MUST clause. The intention of that sentence is to say that implementations should act as though the layer (or manifest) doesn't exist if it doesn't know how to do whatever the user has requested, and should use an alternative layer (or manifest) if possible. This is meant to avoid implementations just breaking and always giving you an error if some extension was added to an image which doesn't concern that implementation -- it must use an alternative if possible rather than giving a hard error. Otherwise any new media-types will cause endless problems.
In the example of pulling image data, arguably the tool supports pulling image data regardless of the media-type, so there isn't any issue of it being "unknown [what to do with the blob] to the implementation" -- but if the image pulling is being done for an eventual unpacking step, then you could argue that it should try to pull an alternative if it doesn't support the image type.
I agree this wording could be a bit clearer, though; this change was made during the period of some of the more contentious changes to the image-spec in 2016. Given that the above was the original intention of the language, I don't think it would be a breaking change to better clarify its meaning.
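As a minimal sketch of that intended behavior (the supported() list and error value here are hypothetical, not from the spec), an implementation resolving a descriptor might look like:

package mediatypes

import (
	"errors"

	v1 "github.com/opencontainers/image-spec/specs-go/v1"
)

var errNoUsableManifest = errors.New("no manifest with a usable media type")

// supported is a hypothetical capability check for this implementation.
func supported(mediaType string) bool {
	switch mediaType {
	case v1.MediaTypeImageManifest, v1.MediaTypeImageLayer, v1.MediaTypeImageLayerGzip:
		return true
	}
	return false
}

// chooseManifest treats entries with unknown media types as if they were
// absent ("MUST be ignored") and falls back to the next alternative,
// producing a hard error only when nothing usable remains.
func chooseManifest(entries []v1.Descriptor) (v1.Descriptor, error) {
	for _, d := range entries {
		if supported(d.MediaType) {
			return d, nil
		}
		// Unknown media type: skip it rather than failing outright.
	}
	return v1.Descriptor{}, errNoUsableManifest
}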
On a side-note, I noticed that the vnd.oci.image.manifest.v1+json was registered, but other mediatypes, including media-types for image layers are not; should they be?
This is being worked on by @SteveLasker. The idea was to first register just one media-type so we get an idea of how the process works, and then to effectively go and register the rest.
Another issue with the current way of representing compression is that the ordering of multiple media type modifiers (such as compression or encryption) isn't really well-specified since MIME technically doesn't support such things. There was some discussion last year about writing a library for dealing with MIME types so that programs can easily handle such types, but I haven't seen much since then.
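For example, with the naive suffix parsing most tools do today (a sketch, not any standardized algorithm), +zstd+encrypted and +encrypted+zstd come out as different media types even if they were meant to describe the same stack of transformations:

package mediatypes

import "strings"

// splitModifiers naively splits stacked "+" modifiers off a media type:
// "application/vnd.oci.image.layer.v1.tar+zstd+encrypted" yields
// base "application/vnd.oci.image.layer.v1.tar" and mods ["zstd", "encrypted"],
// while "...tar+encrypted+zstd" yields ["encrypted", "zstd"] -- order matters,
// even though MIME defines no semantics for multiple suffixes.
func splitModifiers(mediaType string) (base string, mods []string) {
	parts := strings.Split(mediaType, "+")
	return parts[0], parts[1:]
}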
On a side-note, I noticed that the vnd.oci.image.manifest.v1+json was registered, but other mediatypes, including media-types for image layers are not; should they be?
This is being worked on by @SteveLasker. The idea was to first register just one media-type so we get an idea of how the process works, and then to effectively go and register the rest.
Ack: please assume the other mediaTypes will be registered. I'm providing clarity in the Artifacts Spec to help with both these issues. Once the Artifacts spec is merged, with clarity on the registration process, I'll register the other types.
For the compression, what I think we're saying is this:
Tools that work specifically on a type, for instance runnable images like application/vnd.oci.image.config.v1+json, should know about all layer types for a specific version. In this case, v1 vs. v1.1. The spec for each artifact provides that detail so clients know what they must expect. The artifact-specific spec might say compression is optional, and a fallback must be provided. But I don't know if it's realistic to say a tool could push a new layer type without it being in the spec and be considered valid.
There are other tools, like skopeo (I think) or ORAS, which work on any artifact type pushed to a registry. In these cases, they need to know some conventions to be generic. But, in the case of ORAS, it intentionally doesn't know about a specific artifact type and simply provides auth, push, and pull of layers associated with a manifest. It's the outer wrapper, like Helm or Singularity, that provides specific details on layer processing.
We have an open agenda for the 4/22 call to discuss.
I see I forgot to reply to some of the comments.
Regarding the ambiguity of the MUST clause. The intention of that sentence is to say that implementations should act as though the layer (or manifest) doesn't exist if it doesn't know how to do whatever the user has requested, and should use an alternative layer (or manifest) if possible. This is meant to avoid implementations just breaking and always giving you an error if some extension was added to an image which doesn't concern that implementation -- it must use an alternative if possible rather than giving a hard error. Otherwise any new media-types will cause endless problems.
So, I was wondering about that: I can see this "work" for a multi-manifest(ish) image, in which case there could be multiple variations of an image (currently used for multi-arch), and I can use "one" of those, but I'm having trouble understanding how this works for a single image.
What if an image has layers with mixed compression?
- extract only those that I "understand" and try to construct a rootfs?
- what if I understand all of those compressions? (say, the image has both zstd and gzip compressed layers);
- should I "pick one", and "cherry-pick" all layers with the same compression?
- should I "pick all" layers, extract them, and construct the rootfs?
I think it's technically possible to have mixed compressions. For example, an existing image is pulled (with, say, gzip-compressed layers), extended with a new layer using zstd, and then pushed.
However, the "reverse" could also be a valid use-case: creating a "fat"/hybrid image that offers alternative compressions for systems that support them ("gzip" layers for older clients, "zstd" for newer clients that support it).
Looks like this needs further refinement to describe how this should be handled.
Ack: please assume the other mediaTypes will be registered. I'm providing clarity in the Artifacts Spec to help with both these issues. Once the Artifacts spec is merged, with clarity on the registration process, I'll register the other types.
Thanks! I recall seeing a discussion (on the mailing list?) about registering, but noticed "some" were registered, but others were not, so thought I'd check 👍
Yes, absolutely agree with Sebastiaan: picking some layers you understand and rejecting the rest is meaningless, and the semantics are not defined. There is no way to construct an image with zstd compression that is compatible with both older and newer clients. This only works for very limited workflows where you synchronously update all your clients and then update the images you generate; it does not work at all for people wanting to distribute public images, for example, where you basically cannot use zstd because there is no way to make an image anyone can use. A manifest list mechanism would be workable, but the current design just doesn't seem fit for purpose, and I think we should revert it.
I think the way to move forward is to add support for zstd to the different clients but still keep the gzip compression as the default.
Generating these images should not be the default yet, but the more we postpone zstd support in clients, the longer it will take to switch to it.
I don't see anything wrong if an older client, in 1-2 years, fails to pull newer images.
The problem is that currently the correct behavior is effectively "undefined". See my earlier comment about layers using mixed compression (which IMO should be a valid use case). Without any definition how these images should be handled, it would not be possible to keep them interoperable.
What about just adding the clarification you already proposed above, i.e.
e.g. implementations should / must produce an error if they're not able to "handle" / "process" a layer-type.
Doesn't that define it well enough?
Unfortunately, it doesn't, because for runtimes that support both zstd and gzip, selection is now ambiguous.
Take the following example:
{
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 12345,
"digest": "sha256:deadbeef"
},
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+zstd",
"size": 34567,
"digest": "sha256:badcafe"
}
]
}
The above would be ambiguous, as it could either mean:
1. A "fat" single-layer image, providing alternative layers in zstd and gzip format (for older clients)
2. A two-layer image, with the first layer in gzip and the second layer in zstd compression
In the above, 1. is a likely scenario for registries that want to provide "modern" compression, but provide backward compatibility, and 2. is a likely scenario where a "modern" runtime built an image, using a parent image that is not available with zstd compression.
While it's possible to define something along the lines of "MUST" pick one compression, and only use layers with the same compression, this would paint us into a corner, and disallow use-case 2. (and future developments along the same line).
All of this would've been easier if digests were calculated over the non-compressed artifacts (and compression being part of the transport), but that ship has somewhat sailed. Perhaps it would be possible with a new media-type (application/vnd.oci.image.config.v1+json+raw), indicating that layers/blobs in the manifest are to be considered "raw" data (non-compressed, or if compressed, hash was calculated over the data as-is). In that case, clients and registries could negotiate a compression during transport (and for storage in the registry, compression/storage optimisation would be an implementation detail)
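To make that concrete, a manifest under such a hypothetical scheme might look like the following (media type, sizes, and digests are made up; digests would be computed over the uncompressed content, and the on-the-wire compression negotiated separately):

{
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json+raw",
    "size": 7023,
    "digest": "sha256:cafebabe"
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar",
      "size": 87654,
      "digest": "sha256:feedface"
    }
  ]
}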
I don't think case 1 you've provided is legal. Per https://github.com/opencontainers/image-spec/blob/master/manifest.md#image-manifest-property-descriptions we have,
"The final filesystem layout MUST match the result of applying the layers to an empty directory."
So I think the specification already states that it must be case 2.
yes, I think it should be case 2, an image made of two different layers. It would be very confusing to support case 1 this way.
The final filesystem layout MUST match the result of applying the layers to an empty directory
"Applying the layers" is very ambiguous combined with the other requirements (more below:)
yes, I think it should be case 2, an image made of two different layers. It would be very confusing to support case 1 this way
Which means that there's no way to have images that are compatible with both existing runtimes and runtimes that support zstd.
As the spec says:
Implementations MUST support at least the following media types: ... An encountered mediaType that is unknown to the implementation MUST be ignored.
Which means that any of the current runtimes MUST ignore the zstd layer, and then apply the remaining layers.
Which means that there's no way to have images that are compatible with both existing runtimes and runtimes that support zstd.
I don't think that's what it means at all. It means it won't work this specific way, but I can imagine other ways in which it would.
Which means that any of the current runtimes MUST ignore the zstd layer, and then apply the remaining layers.
That's why I think your proposed clarification is useful: runtimes that can't "process" the layer should error out when asked to. In particular, that's exactly what will happen in current implementations: they will try to gunzip the zstd blob, realize they can't, and fail.
but I can imagine other ways in which it would.
Can you elaborate on what other ways?
Can you elaborate on what other ways?
Sure, but I don't think it's relevant to whether or not zstd support should be in the spec. With your proposed clarification, I think the spec would be very clear about the expected behavior when runtimes encounter blobs they don't understand (and for tools like e.g. skopeo, which can shuttle these blobs around without understanding them, which is my main concern).
We are already using non-standard mime types in layers at my organization, and because the tooling support for this is not very good, right now we just disambiguate by using a "foo-squashfs" tag for images that are squashfs-based, and a "foo" tag for the tar-based ones.
However, since tag names are really just annotations, you could imagine having an additional annotation, maybe "org.opencontainers.ref.layer_type" to go along with "org.opencontainers.ref.name" that people use as tags, that would just be the layer type. Then, in a tool like skopeo, you would do something like skopeo copy oci:/source:foo oci:/dest:foo --additional-filter=org.opencontainers.ref.layer_type=zstd (or maybe skopeo would introduce a shorthand for this). Tools could then ignore layer types their users aren't interested in or that they don't know how to support. If there's no manifest with the tag matching the filters that a client knows how to consume, it would fail.
To make this backwards compatible, I suspect always listing the tar-based manifest as the first one in the image would mostly work, assuming tools don't check for multiple images with the same tag and fail. But maybe it wouldn't, I haven't really played around with it. In any case, just using tags to disambiguate works totally fine, even though it's ugly and better tooling support would be appreciated.
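For illustration, such an entry in an OCI layout's index.json might look like this (size and digest illustrative; the layer_type key is the hypothetical annotation suggested above):

{
  "schemaVersion": 2,
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "size": 1072,
      "digest": "sha256:deadbeef",
      "annotations": {
        "org.opencontainers.ref.name": "foo",
        "org.opencontainers.ref.layer_type": "zstd"
      }
    }
  ]
}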
Adding new compression formats to a specific type is goodness to bring that artifact forward with new capabilities. Providing consistent behavior across an ecosystem where multiple versions are successfully deployed seems to be the problem. Isn't this effectively what versioning provides? While a newer client might know how to process old and new compression formats, how do we get to the point where we have some stability? This seems like a pivot for getting a different result, based on what capabilities the client supports. If the client supports version 1 and 2, it should default to version 2. If the client only supports version 1, it knows to pull version 1. If the registry only has version 2, there's a failure state.
This is very akin to the multi-arch approach.
The client asks the registry for hello-world:latest and also states it's ARM.
The registry says, umm, I don't have an arm version of hello-world:latest, so it fails.
I'm not saying we should actually use multi-arch manifests, but the concept is what we seem to need here.
For reference, we debated this with Teleport. We didn't want to change the user model, or require image owners to publish a new format. When someone pushes content to a teleport enabled registry, we automatically convert it. When the client makes a request, it sends header information that says it supports teleport. The registry can then hand back teleport references to blobs.
So, there are two models to consider here:
- The end to end container format has a new compression format, and it appears to be a version change.
- The compression format can be handled on the server.
This is also similar to what registries do with docker and OCI manifests. They get converted on the fly. I recognize converting a small json file is far quicker than multi-gb blobs.
Ultimately, it seems like we need to incorporate the full end to end experience and be careful to not destabilize the e2e container ecosystem while we provide new enhancements and optimizations.
and for tools like e.g. skopeo, which can shuttle these blobs around without understanding them, which is my main concern
(IIUC) tools like skopeo should not really be affected for your specific use-case, as they are not handling the actual image there, and are mainly used as a tool to do a full download of whatever artifacts/blobs are referenced (also see my earlier comments https://github.com/opencontainers/image-spec/issues/803#issuecomment-616819168 and https://github.com/opencontainers/image-spec/issues/803#issuecomment-617073165)
However, since tag names are really just annotations, you could imagine having an additional annotation, maybe "org.opencontainers.ref.layer_type" to go along with "org.opencontainers.ref.name" that people use as tags, that would just be the layer type. Then, in a tool like skopeo, you would do something like skopeo copy oci:/source:foo oci:/dest:foo --additional-filter=org.opencontainers.ref.layer_type=zstd
I feel like this is now replicating what manifest-lists were for (a list of alternatives to pick from); manifest lists currently allow differentiating on architecture, and don't have a dimension for "compression type". Adding that would be an option, but (for distribution/registry) it may mean an extra roundtrip (image/tag -> os/architecture variant -> layer-compression variant), or adding a new dimension besides "platform".
Which looks to be what @SteveLasker is describing as well;
I'm not saying we should actually use multi-arch manifests, but the concept is what we seem to need here.
Regarding:
This is also similar to what registries do with docker and OCI manifests. They get converted on the fly. I recognize converting a small json file is far quicker than multi-gb blobs.
Docker manifests are OCI manifests; I think the only conversion currently still present is for old (Schema 2 v1) manifests (related discussion on that in https://github.com/opencontainers/distribution-spec/issues/212), and deprecating / disabling it is being discussed (https://github.com/docker/roadmap/issues/173)
I'd be hesitant to start extracting and re-compressing artifacts. This would break the contract of content addressability, or more specifically: what guarantee do I have that the re-compressed artifact has the same content as the artifact that was pushed? If we want to separate compression from artifacts, then https://github.com/opencontainers/image-spec/issues/803#issuecomment-741844624 is probably a better alternative;
All of this would've been easier if digests were calculated over the non-compressed artifacts (and compression being part of the transport)
@SteveLasker unfortunately recompression is too CPU-intensive and slow to make in-registry conversion worthwhile for most purposes (we looked into this a while back; the CPU costs more than the bandwidth saving).
(IIUC) tools like skopeo should not really be affected for your specific use-case
You'd think that, but it has broken before: https://github.com/containers/image/pull/801 Hence my concern about similar issues in this thread :)
I feel like this is now replicating what manifest-lists were for
Yes, possibly. I haven't thought about it very hard.
Going to brain dump some ideas from the OCI call before they're lost to time...
As per this comment we could add a new dimension to manifest lists. Maybe as a new field, but we already have "annotations", which we could [ab]use for this.
My first thought would be to rely on the fact that most clients take the first compatible option when resolving a manifest list. I believe (https://github.com/opencontainers/image-spec/issues/581 and other issues) that the exact semantics for resolution here were discussed to death and we never standardized on anything (just up to the implementer).
1. Abuse ordering
If we did rely on ordering (which feels gross), something like this (strings obviously changed) could work:
{
"schemaVersion": 2,
"manifests": [
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 1152,
"digest": "sha256:c95b7b93ccd48c3bfd97f8cac6d5ca8053ced584c9e8e6431861ca30b0d73114",
"platform": {
"architecture": "amd64",
"os": "linux"
}
},
{
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"size": 1072,
"digest": "sha256:9e2bcf20f78c9ca1a5968a9228d73d85f27846904ddd9f6c10ef2263e13cec4f",
"platform": {
"architecture": "amd64",
"os": "linux"
},
"annotations": {
"zstd": "true"
}
}
],
"annotations": {
"zstd": "true"
}
}
Top-level "annotations" here can indicate to new clients that they should look for zstd-compatible images instead of just picking the first thing they come across. You could define the annotation such that for each gzipped image, there must be an equivalent zstd-compressed image.
The per-manifest descriptor annotation would indicate which one is zstd. Older clients would just pick the first one, which would have gzipped layers, but it's probably not a great idea to rely on that?
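A rough sketch of the resolution logic a new client could use under this scheme (the "zstd" annotation key is the placeholder string from the example above; matchesPlatform stands in for real platform matching):

package mediatypes

import v1 "github.com/opencontainers/image-spec/specs-go/v1"

// resolve prefers the entry annotated "zstd": "true" when the client
// supports zstd, otherwise it falls back to the old behavior of taking
// the first platform-compatible entry (which would have gzipped layers).
func resolve(idx v1.Index, wantZstd bool) *v1.Descriptor {
	var first *v1.Descriptor
	for i := range idx.Manifests {
		d := &idx.Manifests[i]
		if !matchesPlatform(d.Platform) {
			continue
		}
		if wantZstd && d.Annotations["zstd"] == "true" {
			return d
		}
		if first == nil {
			first = d
		}
	}
	return first // nil if nothing matched
}

// matchesPlatform is a stand-in for a real platform check.
func matchesPlatform(p *v1.Platform) bool {
	return p != nil && p.OS == "linux" && p.Architecture == "amd64"
}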
2. Alternative image in annotation
Someone (@cpuguy83 I think?) mentioned stuffing alternatives in annotations. I can imagine two approaches here that would be backward compatible:
{
"schemaVersion": 2,
"manifests": [
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 1152,
"digest": "sha256:c95b7b93ccd48c3bfd97f8cac6d5ca8053ced584c9e8e6431861ca30b0d73114",
"platform": {
"architecture": "amd64",
"os": "linux"
},
"annotations": {
"zstd-compressed-alternative": "{\"mediaType\":\"application/vnd.oci.image.manifest.v1+json\",\"size\":1072,\"digest\":\"sha256:9e2bcf20f78c9ca1a5968a9228d73d85f27846904ddd9f6c10ef2263e13cec4f\",\"platform\":{\"architecture\":\"amd64\",\"os\":\"linux\"}}"
}
}
]
}
Here we'd just escape the second descriptor from above and plop it in an annotation. When doing platform resolution, a client could check for this annotation and use it as an alternative if they support zstd compression.
One major drawback: clients that handle artifacts generically (e.g. to copy between registries) would not know about these descriptors, because they're not in manifests. You could hack around that by appending these to the end of the manifest list with garbage platform values that will never be true, but that also seems kind of gross?
{
"schemaVersion": 2,
"manifests": [
{
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"size": 1152,
"digest": "sha256:c95b7b93ccd48c3bfd97f8cac6d5ca8053ced584c9e8e6431861ca30b0d73114",
"platform": {
"architecture": "amd64",
"os": "linux"
},
"annotations": {
"zstd-compressed-alternative": "{\"mediaType\":\"application/vnd.oci.image.manifest.v1+json\",\"size\":1072,\"digest\":\"sha256:9e2bcf20f78c9ca1a5968a9228d73d85f27846904ddd9f6c10ef2263e13cec4f\",\"platform\":{\"architecture\":\"amd64\",\"os\":\"linux\"}}"
}
},
{
"mediaType": "application/vnd.oci.image.manifest.v1+json",
"size": 1072,
"digest": "sha256:9e2bcf20f78c9ca1a5968a9228d73d85f27846904ddd9f6c10ef2263e13cec4f",
"platform": {
"architecture": "zstd",
"os": "zstd"
}
}
]
}
3. Alternative layer in annotation
Similar to above, but from within an image. To use the example from this comment:
{
"layers": [
{
"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
"size": 12345,
"digest": "sha256:deadbeef",
"annotations": {
"zstd-compressed-alternative":"{\"mediaType\":\"application/vnd.oci.image.layer.v1.tar+zstd\",\"size\":34567,\"digest\":\"sha256:badcafe\"}"
}
}
]
}
Here we'd escape the zstd descriptor and stuff it into the equivalent gzip descriptor's annotations.
Again, similar drawbacks to the second approach around generic artifact handling, but resolves ambiguity around mixed-compression layers vs alternative compression layers.
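A sketch of how a zstd-capable client might consume that annotation (the key matches the example above; a missing or malformed annotation just falls back to the gzip layer):

package mediatypes

import (
	"encoding/json"

	v1 "github.com/opencontainers/image-spec/specs-go/v1"
)

// layerFor returns the zstd alternative embedded in the gzip descriptor's
// annotations when the client wants (and supports) zstd, and the gzip
// descriptor itself otherwise -- which is also what older clients, unaware
// of the annotation, would use.
func layerFor(gzipped v1.Descriptor, wantZstd bool) v1.Descriptor {
	if wantZstd {
		if raw, ok := gzipped.Annotations["zstd-compressed-alternative"]; ok {
			var alt v1.Descriptor
			if err := json.Unmarshal([]byte(raw), &alt); err == nil {
				return alt
			}
		}
	}
	return gzipped
}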