# Native Container Image Tarball Asset Handling

## Description
Extract and upload container image tarballs natively in TypeScript/JavaScript instead of delegating to CLIs (e.g. docker, podman).
## Background
The AWS CDK primarily relies on hardcoded Docker CLI commands (e.g. `docker load`, `docker build`) to work with container images.
While there is the `CDK_DOCKER` environment variable (docs), it's not universally supported (https://github.com/aws/aws-cdk/issues/31317) and has no equivalent option in `cdk.json` (https://github.com/aws/aws-cdk/issues/31319).
Even if `CDK_DOCKER` were universally supported, the current approach creates a high maintenance burden. To support the many container management CLIs (e.g. docker, podman, skopeo, finch, colima, nerdctl), build and upload commands (with potential option/flag passthroughs) need to be maintained for each.
Taken to the extreme, the current approach essentially requires maintaining a TypeScript binding for each container management CLI.
## Proposal
CLI command orchestration isn't the intended purpose of the AWS CDK. It is better served by command runners and build systems like `package.json` scripts, Make, Just, Babashka, and more.
The CDK should instead only expect users to provide a path to a container image tarball. Container image tarballs can be produced by most container build CLIs (e.g. the container management CLIs mentioned above, buildah, kaniko) and libraries (e.g. Nixpkgs `dockerTools`, Jib).
When deploying assets, the CDK should upload the image to ECR without invoking a container management CLI, unlike what `TarballImageAsset` does today with the Docker CLI.
More concretely, this likely requires the following (see the sketches after this list):

- Using the `tar-stream` + `tar-fs` or `tar` packages and the `node:zlib` API or `DecompressionStream` Web API to extract and parse container image tarballs following the OCI image format specification.
  - `node:zlib` is implemented in alternative JavaScript runtimes like Bun (docs) and Deno (docs), so this doesn't introduce a hard dependency on a specific JavaScript runtime.
  - The `DecompressionStream` Web API is supported in Node.js (docs) and Deno (docs). Bun support is pending (https://github.com/oven-sh/bun/issues/1723).
- Using the AWS SDK to call `ecr:InitiateLayerUpload` + `ecr:UploadLayerPart` + `ecr:CompleteLayerUpload` + `ecr:PutImage`.
  - There doesn't seem to be a well-maintained JavaScript client implementation of the OCI distribution specification.
  - This likely wouldn't be desirable anyway because the specification requires layer chunks to be uploaded sequentially, while ECR doesn't seem to (i.e. it allows more parallelism).
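For illustration, here is a minimal sketch of the extraction step using `tar-stream`. It assumes the tarball follows the OCI image layout (an `index.json` plus content-addressed blobs under `blobs/<algorithm>/<digest>`) and that the outer archive is uncompressed; a gzipped archive would add `createGunzip()` from `node:zlib` (or a `DecompressionStream('gzip')`) to the pipeline. The names here are placeholders, not a proposed API:

```ts
import { createReadStream } from 'node:fs';
import { pipeline } from 'node:stream/promises';
import * as tar from 'tar-stream';

// Parsed contents of an OCI image layout tarball.
interface OciLayout {
  index: any;                 // parsed index.json (the image index)
  blobs: Map<string, Buffer>; // "<algorithm>:<digest>" -> blob bytes
}

export async function readOciTarball(path: string): Promise<OciLayout> {
  const extract = tar.extract();
  const blobs = new Map<string, Buffer>();
  let index: any;

  extract.on('entry', (header, stream, next) => {
    const chunks: Buffer[] = [];
    stream.on('data', (chunk: Buffer) => chunks.push(chunk));
    stream.on('end', () => {
      const body = Buffer.concat(chunks);
      if (header.name === 'index.json') {
        index = JSON.parse(body.toString('utf8'));
      } else if (header.name.startsWith('blobs/')) {
        // e.g. blobs/sha256/<hex> -> "sha256:<hex>"
        const [, algorithm, digest] = header.name.split('/');
        blobs.set(`${algorithm}:${digest}`, body);
      }
      next();
    });
  });

  // For a gzipped archive, insert createGunzip() between the two stages.
  await pipeline(createReadStream(path), extract);
  if (index === undefined) {
    throw new Error(`${path} does not contain an OCI index.json`);
  }
  return { index, blobs };
}
```

A real implementation would stream large layer blobs to disk (e.g. via `tar-fs`) instead of buffering them in memory as this sketch does.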
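And a minimal sketch of the upload step with the AWS SDK for JavaScript v3 (`@aws-sdk/client-ecr`), omitting retries, error handling, and `BatchCheckLayerAvailability` short-circuiting for layers that already exist. Note that ECR uploads the image config blob through the same layer upload calls:

```ts
import {
  CompleteLayerUploadCommand,
  ECRClient,
  InitiateLayerUploadCommand,
  PutImageCommand,
  UploadLayerPartCommand,
} from '@aws-sdk/client-ecr';

const ecr = new ECRClient({});

// Upload one blob (layer or config) to an ECR repository in chunks.
export async function uploadBlob(repositoryName: string, digest: string, blob: Buffer): Promise<void> {
  const { uploadId, partSize } = await ecr.send(new InitiateLayerUploadCommand({ repositoryName }));
  const chunkSize = partSize ?? 10 * 1024 * 1024; // fall back to 10 MiB chunks

  for (let offset = 0; offset < blob.length; offset += chunkSize) {
    const part = blob.subarray(offset, Math.min(offset + chunkSize, blob.length));
    await ecr.send(new UploadLayerPartCommand({
      repositoryName,
      uploadId,
      partFirstByte: offset,
      partLastByte: offset + part.length - 1,
      layerPartBlob: part,
    }));
  }

  await ecr.send(new CompleteLayerUploadCommand({ repositoryName, uploadId, layerDigests: [digest] }));
}

// Register a manifest (or image index) under a tag once its blobs are uploaded.
export async function putManifest(repositoryName: string, imageTag: string, manifest: Buffer, mediaType: string): Promise<void> {
  await ecr.send(new PutImageCommand({
    repositoryName,
    imageTag,
    imageManifest: manifest.toString('utf8'),
    imageManifestMediaType: mediaType,
  }));
}
```

Since ECR allows parts of different layers to be uploaded independently, a real implementation could upload layers in parallel rather than sequentially as the OCI distribution specification would require.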
This new asset class must be usable by both L1 and L2 constructs (e.g. `aws_lambda.CfnFunction` and `aws_lambda.Function`).
OCI image index files need to be considered as well for multi-platform images (see the sketch below).
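For example, resolving a platform-specific manifest from an image index could look like the following sketch. The field names come from the OCI image format specification; the selection policy is illustrative only:

```ts
// Shapes follow the OCI image format specification.
interface OciPlatform {
  architecture: string;
  os: string;
  variant?: string;
}

interface OciDescriptor {
  mediaType: string;
  digest: string;
  size: number;
  platform?: OciPlatform;
}

interface OciImageIndex {
  schemaVersion: number;
  manifests: OciDescriptor[];
}

// Pick the manifest matching the requested platform out of an image index.
export function selectManifest(index: OciImageIndex, os: string, architecture: string): OciDescriptor {
  const match = index.manifests.find(
    (m) => m.platform?.os === os && m.platform?.architecture === architecture,
  );
  if (match === undefined) {
    throw new Error(`no manifest for ${os}/${architecture} in the image index`);
  }
  return match;
}
```

ECR also accepts image indexes through `ecr:PutImage`, so a multi-platform push would upload each referenced manifest and its blobs first, then put the index last.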
## Roles
| Role | User |
|---|---|
| Proposed by | @commiterate |
| Author(s) | Pending |
| API Bar Raiser | Pending |
| Stakeholders | Pending |
See RFC Process for details
## Workflow
- [x] Tracking issue created (label: `status/proposed`)
- [ ] API bar raiser assigned (ping us at #aws-cdk-rfcs if needed)
- [ ] Kick off meeting
- [ ] RFC pull request submitted (label: `status/review`)
- [ ] Community reach out (via Slack and/or Twitter)
- [ ] API signed-off (label `status/api-approved` applied to pull request)
- [ ] Final comments period (label: `status/final-comments-period`)
- [ ] Approved and merged (label: `status/approved`)
- [ ] Execution plan submitted (label: `status/planning`)
- [ ] Plan approved and merged (label: `status/implementing`)
- [ ] Implementation complete (label: `status/done`)
The author is responsible for progressing the RFC according to this checklist and applying the relevant labels to this issue so that the RFC table in the README gets updated.
We use GitLab CI/CD runners for deployments. However, our runners do not allow Docker-in-Docker (DinD) or installation of the Docker daemon due to security and platform restrictions.
Currently, many AWS CDK constructs (e.g. `lambda.Code.fromDockerBuild`) require Docker to be available at deploy time to build Lambda or container assets. This makes it impossible to use these constructs in our CI/CD pipelines and forces us to use complex workarounds (like pre-building assets and patching CDK code to use `fromAsset`).
If CDK allowed us to provide a pre-built container image tarball or Lambda zip (produced by any OCI-compliant builder such as Kaniko, Buildah, or even a local developer machine), we could:
- Build assets in a separate CI job or on a developer workstation.
- Pass the resulting tarball/zip to CDK for deployment, with no Docker dependency on the runner.
- Use modern, secure, and scalable CI/CD environments that do not allow privileged Docker.
This would make CDK much more flexible and production-friendly for organizations with strict CI/CD security policies or cloud-native build systems.