To use ENV variable in EntryPoint
Hello All,
I have 2 different applications (web apps) and 1 common Dockerfile through which I build and containerize both apps. So far I was using full-fledged images (Debian), where I was passing the exe name into the entrypoint as an environment variable.
Now I have been playing with distroless and it's great; it offers a lot of size reduction. So I was doing the same there and hit a roadblock.
I am unable to pass the environment variable to the entrypoint vector, as shown:
```dockerfile
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
SHELL ["/bin/bash", "-c"]
ARG csprojFileName
COPY . /home/src
RUN dotnet publish "$csprojFileName.csproj"

FROM mcr.microsoft.com/dotnet/runtime-deps:8.0-noble-chiseled
ARG csprojFileName
ENV CSPROJFILENAME $csprojFileName
EXPOSE 8080
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["./$CSPROJFILENAME"]
```
I tried changing the entrypoint to `ENTRYPOINT ["./${CSPROJFILENAME}"]`, but no luck.
Please help; what could be the alternative approach if the above is not supported?
Thanks
> Please help; what could be the alternative approach if the above is not supported?
1. Use a template language to generate a Dockerfile, or one that can replace a placeholder from your Dockerfile when building the image.
2. Set the entrypoint at runtime: `--entrypoint` (CLI) / `entrypoint:` (Compose). You can have your ENV for the CLI, or for Compose it can use an env file.
3. Rename the file itself to one that is deterministic:

```dockerfile
COPY --from=build /app/publish/${CSPROJFILENAME} /app/run
ENTRYPOINT ["/app/run"]
```
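For approach 2, a minimal sketch of the CLI override, assuming the image is tagged `my-org/my-app` (a hypothetical name) and `CSPROJFILENAME` is set in the calling shell:

```shell
# Override the baked-in entrypoint at run time; the variable is expanded
# by the *host* shell before docker ever sees it.
docker run --rm -it --entrypoint "/app/$CSPROJFILENAME" my-org/my-app
```

For Compose, the equivalent is the `entrypoint:` key, which can take its value from an env file.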
This is not a distroless-specific issue. You cannot use ARG or ENV interpolation in `ENTRYPOINT` with the exec syntax, only with shell syntax (`ENTRYPOINT "/app/${CSPROJFILENAME}"`), provided you have a shell to implicitly run it. That will prevent `CMD` from appending anything though, so you might as well skip `ENTRYPOINT` and just set `CMD` in the same manner 🤷‍♂️
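A sketch of that shell-syntax variant, assuming the non-chiseled `runtime-deps:8.0` base (the chiseled image ships no shell, so shell form cannot work there):

```dockerfile
FROM mcr.microsoft.com/dotnet/runtime-deps:8.0
ARG csprojFileName
ENV CSPROJFILENAME=$csprojFileName
WORKDIR /app
COPY --from=build /app/publish .
# Shell form: wrapped in /bin/sh -c, so the variable expands at container start.
CMD ./$CSPROJFILENAME
```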
My advice is to take approach 3; it should rarely matter that the binary name is dynamic inside a minimal image, since you're probably publishing the image under that same name anyway (`docker run --rm -it my-org/my-app arg1 arg2`).
Thanks for the reply. In my case I have different apps with different names. To run a dotnet app, I need to provide the dotnet command with the DLL name, right? If I am using the runtime-deps image, then I have to give the exe name directly in the entrypoint, and that is where the challenge is.
> In my case I have different apps with different names. To run a dotnet app, I need to provide the dotnet command with the DLL name, right?
I don't build dotnet images, but if you only have one app per image you build, it's a non-issue. Just publish the image with whatever name you like; inside the container the binary can have a generic name.
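With the rename approach from earlier, one shared Dockerfile can still produce distinctly named images per app; the csproj and image names below are hypothetical:

```shell
# Same Dockerfile, two apps, two image tags — inside each container
# the binary is just the generic /app/run.
docker build --build-arg csprojFileName=WebAppOne -t my-org/web-app-one .
docker build --build-arg csprojFileName=WebAppTwo -t my-org/web-app-two .
```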
> If I am using the runtime-deps image, then I have to give the exe name directly in the entrypoint, and that is where the challenge is.
I only have experience with Node, Rust, and Go apps, where it's a non-issue to use a generic name.
> but if you only have one app per image
This is probably the more appropriate way to use containers anyway. Limit them to their single purpose.
> Limit them to their single purpose.
Yes that's often preferred, but sometimes you have multiple processes or services.
docker-mailserver is an example of an image for a full mail server, that while it could be split out and managed with much larger config to orchestrate (like some alternative projects), many users prefer the convenience of a single image for this as a whole. That image at startup runs a bunch of integration scripts to configure services before they are started, similarly some functionality requires interacting with their command tools or restarting their processes. Separate images are not as appropriate in that scenario since users don't directly configure individual services themselves.
I have other projects myself where two services need to run but share access, such as to the same files, or connect to each other via sockets/ports. You can again split those out into separate images, but it's not always ideal.
- nginx/caddy paired with PHP FastCGI comes to mind, where the webserver needs filesystem access to the PHP files for its `try_files` directives; internal-only ports like FastCGI's would not need to be exposed from the container either.
- Services that are fairly coupled together, and would otherwise be sensitive to breakage if not updated/released in sync. Not a big issue for internal usage if you control both, but for public projects you're more likely to get user bug reports or support expectations.
That said when it can be helped, I definitely prefer the isolation. If service management needs to be added like with supervisord (which then requires Python), it adds additional complexity (including log management).