
Re-using images or setting custom tags


I have a single code base that can be started either as a web service or as a background worker (a typical Ruby on Rails "majestic" monolith, basically :D). I set it up as two separate Copilot services, but the problem is that I currently have to build the image twice, even though it's the same image.

In copilot/app/manifest.yml:

name: app
type: Load Balanced Web Service
image:
  build: Containerfile
  port: 3000

In copilot/worker/manifest.yml:

name: worker
type: Backend Service
image:
  build: Containerfile

It's not that the builds are slow (in the end, the second build just picks up the already-built image from cache), but storing two images means paying for 2x ECR storage and 2x bandwidth too.

It's a detail, but the decision about which service to start is made inside the entrypoint:

#!/bin/bash
# COPILOT_SERVICE_NAME is injected into every task by Copilot.
if [[ "$COPILOT_SERVICE_NAME" == 'worker' ]]; then
  start worker
else
  start app
fi

I would like either to re-use the app image as is, or maybe to tag the latest build with "latest", so that inside the worker manifest I can simply specify location: aws_account_id.dkr.ecr.region.amazonaws.com/my-app:latest

Fodoj avatar Feb 19 '21 16:02 Fodoj

Hello @Fodoj. Sorry, I might be misunderstanding, but isn't location already supported in Copilot (https://aws.github.io/copilot-cli/docs/manifest/lb-web-service/#image-location)? You can deploy the first service and then refer to its ECR repo URL in the second one's manifest to reuse the image.
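
For reference, image.location replaces image.build in the manifest, so the worker's manifest could look something like this sketch (the account ID, region, repo path, and tag are placeholders, and a fixed tag is assumed):

name: worker
type: Backend Service
image:
  location: 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app/app:stable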

iamhopaul123 avatar Feb 19 '21 18:02 iamhopaul123

@iamhopaul123 Yes and no, because I need to reference the tag, and the tag is different every time, since the CodeBuild job ID is used as the tag. So I can't use location, as it requires a fixed tag.

Fodoj avatar Feb 19 '21 22:02 Fodoj

Oooh, got it. In that scenario, could you use svc deploy --tag to make it a fixed tag?

Edit: sorry, I didn't realize you were using a pipeline. Yes, we need to provide a better user story on reusing images in our pipelines.
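
For a manual deploy, that would look something like the following (the tag value is illustrative):

# Build and push the app image once under a fixed, predictable tag
copilot svc deploy --name app --tag stable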

iamhopaul123 avatar Feb 19 '21 22:02 iamhopaul123

I could also just modify the script in buildspec.yaml to implement custom tagging, but it would be nicer to have this as a built-in feature :-)

Fodoj avatar Feb 19 '21 22:02 Fodoj

I've also added some ideas on how to improve this here: https://github.com/aws/copilot-cli/discussions/1965

Fodoj avatar Feb 24 '21 15:02 Fodoj

Hi @Fodoj

I'm also running a majestic Ruby on Rails monolith and want to add a background worker.

Two questions:

  • Have you figured out a way to avoid building the same image twice?
  • How would you go about sharing the same Copilot addons (RDS + S3) created for the frontend with this new worker service? e.g. sharing the resources from copilot/frontend/addons/* with the background worker launched from copilot/worker/, so the worker can access the DB and S3 the same way the frontend does.

Thanks!

dtbaker avatar Jun 10 '21 14:06 dtbaker

> Have you figured out a way to avoid building the same image twice?

Nope :( But I haven't checked in detail for a while; maybe Copilot has a way to do it now.

> How would you go about sharing the same copilot addons (RDS + S3) created for my frontend, to this new worker app? e.g. share RDS the resources from copilot/frontend/addons/* with this new background worker app launched in copilot/worker/ so the worker can access the DB and S3 the same as the frontend.

I would create the DB and S3 in the frontend Copilot service, keep the database connection string in Rails encrypted secrets, and grant S3 access via an IAM policy that could then be duplicated for the worker (or maybe not; I found that in my case the permissions are slightly different in each service) :-)
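
As a sketch of the duplicated-policy approach, a minimal addon template for the worker (e.g. copilot/worker/addons/s3-access.yml) might look like the following; the bucket name is a placeholder that would have to match whatever the frontend's addon created. Copilot attaches any IAM ManagedPolicy exposed in an addon's Outputs to the service's task role:

Parameters:
  App:
    Type: String
  Env:
    Type: String
  Name:
    Type: String
Resources:
  WorkerS3AccessPolicy:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Action:
              - s3:GetObject
              - s3:PutObject
            Resource: arn:aws:s3:::my-shared-bucket/*  # placeholder bucket
Outputs:
  WorkerS3AccessPolicyArn:
    Value: !Ref WorkerS3AccessPolicy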

Fodoj avatar Jun 10 '21 17:06 Fodoj

+1

In my case, I'm specifying different entrypoints using the entrypoint field in the Copilot service manifests, with image.build.dockerfile and image.build.target set to the same values for the different services. It looks something like:

image:
  build:
    dockerfile: ./Dockerfile
    target: job

entrypoint: "php /pathtocodebase/job1.php"

and

image:
  build:
    dockerfile: ./Dockerfile
    target: job

entrypoint: "php /pathtocodebase/job2.php"

That said, using a shared entrypoint script that differentiates based on COPILOT_SERVICE_NAME would work fine as well.
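
For context, the shared Dockerfile target that both manifests point at could be as simple as this sketch (the base image and paths are illustrative); each manifest's entrypoint then overrides whatever the image itself defines:

FROM php:8.2-cli AS job
WORKDIR /pathtocodebase
COPY . .
# No ENTRYPOINT here; each Copilot manifest supplies its own.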

DrewVartanian avatar May 25 '22 19:05 DrewVartanian

I am interested in this functionality so that I can re-use the built image as a sidecar. Adding my use-case here.

Background: currently, I am running database migrations from an entrypoint.sh script called at startup. I am planning on migrating my base image to Distroless, and since I will no longer have a shell, I need a different entrypoint/command for each container using the same image (same Dockerfile, same dependencies).

I plan on running a sidecar and making the main container depend on it with a complete condition. For instance:

image:
  build: ./Dockerfile  # Something like this: https://github.com/GoogleContainerTools/distroless/blob/main/examples/nodejs/Dockerfile
  depends_on:
    db-migrate: complete  # Main container starts only after the migration sidecar finishes.
entrypoint: ["node"]  # Default entrypoint of the image, shown here for clarity.
command: ["dist/main.js"]

sidecars:
  db-migrate:
    image: <some capability for reflection of the main app image here>
    essential: false      # The sidecar must be allowed to exit for "complete" to fire.
    entrypoint: ["node"]  # Default entrypoint of the image, shown here for clarity.
    command: ["migrate.js"]

--

A workaround would be:

  1. Use a custom buildspec.yml.
  2. Tag the image with the branch name.
  3. Use that image:tag in the copilot manifest.yml.

Basically, copying the image-build part of the buildspec.yml's (Copilot v1.15) post_build: section into the build: section and tagging with the branch name.
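
That tagging step might look something like this in the buildspec's build: phase (ECR_REPO is a hypothetical variable holding the repository URI; CODEBUILD_WEBHOOK_HEAD_REF is set by CodeBuild on webhook-triggered builds):

build:
  commands:
    # Derive a stable tag from the branch name instead of the CodeBuild job ID.
    - BRANCH_TAG=${CODEBUILD_WEBHOOK_HEAD_REF#refs/heads/}
    - docker build -t "$ECR_REPO:$BRANCH_TAG" .
    - docker push "$ECR_REPO:$BRANCH_TAG"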

--

Another potential workaround that might have issues:

  1. Add another service to the pipeline (copilot v1.18) for the init/db-migration.
  2. Set up a dependency (deployment order) on the init/db-migration service.

I think this ^ will be problematic, as the v1 service will still be running when the v2 init/db-migration service makes its schema changes.

--

Thanks!

matthewhembree avatar Aug 16 '22 17:08 matthewhembree

+1

Running Django with a Celery worker service. I also have different dev and prod image versions between these environments, and they end up in the same repository, which I don't see as ideal.

a-cid avatar Feb 07 '24 15:02 a-cid