Bug: Python container build fails with "Dockerfile: no such file or directory" in artifacts when custom Dockerfile exists
### Describe the bug

When attempting to deploy a Python Lambda function using `python: { container: true }` and providing a custom `Dockerfile` in the function's source directory, the `sst deploy` command fails. The error indicates that SST's Docker build process cannot find a Dockerfile within its internal artifact staging directory (`.sst/artifacts/<FunctionName>-src/Dockerfile`), even though a custom `Dockerfile` is present in the specified handler/bundle path.

This occurs even when the project structure and `sst.aws.Function` configuration appear to align with the SST documentation for custom Dockerfiles with Python container functions.
### To Reproduce

Steps to reproduce the behavior:

1. Project structure:

   ```
   notifications-monorepo/
   ├── sst.config.ts
   ├── pyproject.toml                  # Root workspace definition
   ├── functions/
   │   └── datadog-forwarder/
   │       └── logs_monitoring/        # Function source root
   │           ├── Dockerfile          # Custom Dockerfile
   │           ├── pyproject.toml      # Lambda's specific pyproject.toml
   │           ├── requirements.txt
   │           └── lambda_function.py  # (and other Python source files)
   └── infra/
       └── functions/
           └── datadog-forwarder.ts    # SST Function definition
   ```

2. `notifications-monorepo/pyproject.toml` (root):

   ```toml
   [project]
   name = "notifications-monorepo-workspace"
   version = "0.1.0"
   requires-python = ">=3.12"
   dependencies = []

   [tool.uv.workspace]
   members = ["functions/datadog-forwarder/logs_monitoring"]
   ```

3. `functions/datadog-forwarder/logs_monitoring/pyproject.toml` (Lambda-specific):

   ```toml
   [project]
   name = "datadog-forwarder-lambda"
   version = "0.1.0"
   description = "Datadog Forwarder Lambda"
   requires-python = "==3.12.*"
   dependencies = [
       "attrs", "bytecode", "cattrs", "certifi", "charset-normalizer",
       "datadog-lambda==6.104.0", "datadog==0.50.2", "ddsketch==3.0.1",
       "ddtrace==2.17.3", "deprecated", "envier", "exceptiongroup",
       "idna==3.7", "importlib-metadata", "opentelemetry-api", "protobuf",
       "requests-futures", "requests", "six", "typing-extensions",
       "urllib3>=1.26.19,<3.0", "wrapt==1.14.0", "xmltodict", "zipp", "ujson",
   ]
   # No [build-system] or [tool.uv.sources]
   ```

4. `functions/datadog-forwarder/logs_monitoring/Dockerfile` (custom Dockerfile):

   ```dockerfile
   ARG PYTHON_VERSION=3.12
   FROM public.ecr.aws/lambda/python:${PYTHON_VERSION}

   USER root
   RUN microdnf update -y && \
       microdnf install -y git gcc findutils && \
       microdnf clean all
   USER ${LAMBDA_UID}

   COPY --from=ghcr.io/astral-sh/uv:0.2.24 /uv /usr/local/bin/uv

   WORKDIR ${LAMBDA_TASK_ROOT}
   COPY pyproject.toml .
   COPY requirements.txt .
   RUN /usr/local/bin/uv pip install -r requirements.txt --target ${LAMBDA_TASK_ROOT} --system

   # COPY all necessary application files from Datadog forwarder source
   COPY lambda_function.py .
   COPY __init__.py .
   COPY resource.enc .
   COPY caching/ ./caching/
   COPY customized_log_group.py .
   COPY enhanced_lambda_metrics.py .
   COPY forwarder.py .
   COPY retry/ ./retry/
   COPY settings.py .
   COPY steps/ ./steps/
   COPY telemetry.py .
   COPY trace_forwarder/ ./trace_forwarder/

   CMD [ "lambda_function.lambda_handler" ]
   ```

5. `infra/functions/datadog-forwarder.ts` (SST Function definition):

   ```ts
   // ... imports ...
   export function DatadogForwarder() {
     const lambda = new sst.aws.Function("DatadogForwarder", {
       handler: "functions/datadog-forwarder/logs_monitoring/lambda_function.lambda_handler",
       runtime: "python3.12",
       python: {
         container: true,
       },
       environment: { /* ... DD_API_KEY, DD_SITE, etc. ... */ },
       link: [ /* ... */ ],
       memory: "512 MB",
       permissions: [ /* ... */ ],
       timeout: "300 seconds",
     });
     return { datadogForwarder: lambda };
   }
   ```

6. Run `bun sst deploy --stage devnet --target DatadogForwarder --verbose` (Docker daemon is running).
### Expected behavior

SST should detect the custom `Dockerfile` in `functions/datadog-forwarder/logs_monitoring/`, copy it and the associated build context to `.sst/artifacts/DatadogForwarder-src/`, and then successfully build the Docker image using this custom `Dockerfile`. The deployment should then proceed, and the Lambda should run with its dependencies correctly installed (solving the earlier `ImportModuleError` issues).
### Actual behavior

The deployment fails with the following error:

```
|  Error   DatadogForwarderImage docker-build:index:Image
docker-build:index:Image resource 'DatadogForwarderImage': property dockerfile.location value {<nil>} has a problem: open /Users/mike/dev/work/notifications-monorepo/.sst/artifacts/DatadogForwarder-src/Dockerfile: no such file or directory

✕  Failed
DatadogForwarderImage docker-build:index:Image
docker-build:index:Image resource 'DatadogForwarderImage': property dockerfile.location value {<nil>} has a problem: open /Users/mike/dev/work/notifications-monorepo/.sst/artifacts/DatadogForwarder-src/Dockerfile: no such file or directory
```

This indicates that SST's build process is not finding or placing the `Dockerfile` in the expected artifact staging location (`.sst/artifacts/DatadogForwarder-src/`) before attempting the Docker build.
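For illustration only, the staging step that appears to be skipped amounts to something like the following sketch. This is hypothetical code, not SST's actual implementation; `stage_function_source` and its arguments are made-up names:

```python
# Hypothetical sketch (NOT SST's real code): copy the function's source
# directory, including its Dockerfile, into the artifact staging directory
# before the docker-build provider opens the Dockerfile path.
import shutil
from pathlib import Path


def stage_function_source(src_dir: Path, artifacts_dir: Path, name: str) -> Path:
    """Copy src_dir to <artifacts_dir>/<name>-src and return the staged Dockerfile path."""
    staged = artifacts_dir / f"{name}-src"
    shutil.copytree(src_dir, staged, dirs_exist_ok=True)
    dockerfile = staged / "Dockerfile"
    if not dockerfile.is_file():
        # This is the condition the docker-build provider surfaces as
        # "Dockerfile: no such file or directory".
        raise FileNotFoundError(dockerfile)
    return dockerfile
```

The error above suggests that, whatever SST actually does here, the custom `Dockerfile` never lands in `.sst/artifacts/DatadogForwarder-src/` before the build is invoked.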
### Environment

- SST Version: 3.16.0
### Additional context

- This issue was encountered while trying to deploy the standard Datadog Forwarder Lambda function.
- The goal is to use a custom Dockerfile to ensure correct Python dependency installation, as the default zip-based bundling (even with the `pyproject.toml` and `uv` workspace setup) was resulting in `ImportModuleError: No module named 'requests'`.
- The project uses `uv` workspaces, with a root `pyproject.toml` and a `pyproject.toml` for the Lambda function.
- The same error ("Dockerfile: no such file or directory" in artifacts) occurs whether using the `bundle` property or just the `handler` property in `sst.aws.Function` when `python: { container: true }` is set.
- The issue seems to be that SST's mechanism for copying the custom `Dockerfile` into its build staging area is failing.
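As a side note, the `ImportModuleError` mentioned above boils down to a dependency not resolving on `sys.path` inside the Lambda package. A small, hypothetical check (not part of SST or the Datadog forwarder; `deps_available` is an invented helper) illustrates what the zip-based bundling was failing to guarantee:

```python
# Hypothetical helper: verify that a dependency resolves on sys.path, which is
# exactly what fails with ImportModuleError when bundling omits a package.
import importlib.util


def deps_available(module: str = "requests") -> bool:
    """Return True if `module` can be found on the current sys.path."""
    return importlib.util.find_spec(module) is not None
```

In a correctly bundled Lambda package this would return `True` for every package listed in `requirements.txt`; with the broken zip bundling it returns `False` for `requests`.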
---

If you run `bun sst deploy --stage devnet`, does that also give the error?

As a sanity check, does the example work for you? https://github.com/sst/sst/tree/dev/examples/aws-python-container
---

Is there an update on this? I am running into the same error.
> If you run `bun sst deploy --stage devnet`, does that also give the error? As a sanity check, does the example work for you? https://github.com/sst/sst/tree/dev/examples/aws-python-container
I tried the example you referenced out of the box (cloned and deployed) and got this error:

```
PythonFnCustom sst:aws:Function → PythonFnCustomImage docker-build:index:Image
failed to solve: process "/bin/sh -c dnf update -y && dnf install -y git gcc && dnf clean all" did not complete successfully: exit code: 127
```

This can be solved by switching to `yum`, as the AWS base images do not ship `dnf` as a package manager.
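Based on the comment above, a minimal sketch of that fix for the example's Dockerfile would look like the following (assuming the `public.ecr.aws/lambda/python` base image; the exact package list is taken from the failing command and may differ in your setup):

```dockerfile
# Sketch of the suggested fix: swap dnf for yum, since the AWS Lambda Python
# base image used by the example does not include dnf.
FROM public.ecr.aws/lambda/python:3.12
RUN yum update -y && \
    yum install -y git gcc && \
    yum clean all
```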
---

Is there anywhere a working example with the latest SST version, in both live and container mode? @jayair the example you showed deployed successfully to AWS, but it does not work in live mode (timeouts and `UnauthorizedException`).

Error in live mode:

```
error="request failed with status 401: {\n  \"errors\" : [ {\n    \"errorType\" : \"UnauthorizedException\"\n  } ]\n}
```

I'd appreciate it if somebody found a fix and submitted a PR for this example.