Spike: Speed up creating Lambdas
Enabler
So that deployment is faster, we want to use a lighter base Docker image.
Acceptance Criteria
- [ ] …
- [ ] …
Additional context
After a `cdk deploy`, my system has the following Docker images:
```
$ docker image list --format "table {{.Repository}}\t{{.Tag}}\t{{.Size}}"
REPOSITORY                                                                                                     TAG                                                                SIZE
cdkasset-096b664587e2899d64970e5a40bf4800c93c5a2ac216d776d863a9f60435d343                                      latest                                                             203MB
702361495692.dkr.ecr.ap-southeast-2.amazonaws.com/cdk-hnb659fds-container-assets-702361495692-ap-southeast-2   096b664587e2899d64970e5a40bf4800c93c5a2ac216d776d863a9f60435d343   203MB
<none>                                                                                                         <none>                                                             315MB
cdk-3c1a1cccfe292b929bd97cb055c5988abee350ba6730bbf7671514018b8509de                                           latest                                                             2.38GB
cdk-51f6268c48cc1706733e0db369b67c4185cfdcd805aa7ee97f7e4573594b04dd                                           latest                                                             2.38GB
ubuntu                                                                                                         22.04                                                              77.8MB
public.ecr.aws/sam/build-python3.9                                                                             latest                                                             2.29GB
sportradar/aws-azure-login                                                                                     2021062807125386530a                                               1.19GB
```
Tasks
- [ ] Check whether we can use smaller base images, especially the Python 3.9 one
- [ ] Check whether we can package our Lambdas using Zip rather than Docker (probably not, if we want to include heaps of packages; see the sketch after this list)
- [ ] Check if there's a way to simplify/speed up/get rid of `infrastructure/constructs/lambda_layers/botocore`
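
For the Zip-packaging task, a setup along these lines might be worth trying. This is a minimal sketch, not our actual constructs: the construct ID, handler, and asset path are hypothetical. Dependencies are still installed inside the SAM build container at synth time, but the deployed artifact is a zip on S3 rather than an image in ECR:

```python
from aws_cdk import BundlingOptions, Stack, aws_lambda as lambda_
from constructs import Construct


class ZipLambdaStack(Stack):
    """Sketch of a zip-packaged Lambda (all names are hypothetical)."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        lambda_.Function(
            self,
            "ExampleHandler",  # hypothetical construct ID
            runtime=lambda_.Runtime.PYTHON_3_9,
            handler="handler.main",  # hypothetical module.function
            code=lambda_.Code.from_asset(
                "backend/example",  # hypothetical source directory
                bundling=BundlingOptions(
                    # Dependencies are installed inside the SAM build image,
                    # but the asset that gets deployed is a zip, not an image.
                    image=lambda_.Runtime.PYTHON_3_9.bundling_image,
                    command=[
                        "bash",
                        "-c",
                        "pip install -r requirements.txt -t /asset-output"
                        " && cp -r . /asset-output",
                    ],
                ),
            ),
        )
```

Note that zip packages are capped at 250 MB unzipped (including layers), versus 10 GB for container images, which may rule this out if we really do need heaps of packages.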
Have been thinking about this a bit, as it is one of my pain points with geostore. I think it is possible to speed things up by sourcing the Lambda code from S3 with `lambda.Code.fromBucket(bucket, key[, objectVersion])`; however, where I'm stuck is how we maintain this and how we orchestrate the re-bundling of code when there is a Lambda code change. Will continue to ponder, but yea, rebundling Lambdas each time we do a `cdk synth` or `cdk deploy` seems wasteful and time-consuming.
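
To make the idea concrete, here is a hedged sketch of what `Code.fromBucket` could look like in Python CDK. The bucket name, object key, and version ID are all hypothetical, and the open question above (who re-bundles and uploads the zip when the Lambda source changes) is deliberately left out of band:

```python
from aws_cdk import Stack, aws_lambda as lambda_, aws_s3 as s3
from constructs import Construct


class PrebuiltLambdaStack(Stack):
    """Sketch of sourcing Lambda code from a pre-built zip on S3."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Bucket of pre-built artifacts, populated out of band, e.g. by a
        # CI job that only runs when the Lambda source actually changes.
        artifacts = s3.Bucket.from_bucket_name(
            self, "LambdaArtifacts", "example-lambda-artifacts"  # hypothetical
        )

        lambda_.Function(
            self,
            "ExampleHandler",  # hypothetical construct ID
            runtime=lambda_.Runtime.PYTHON_3_9,
            handler="handler.main",  # hypothetical module.function
            # Pinning the object version means CloudFormation only sees a
            # change (and redeploys) when a new artifact has been uploaded.
            code=lambda_.Code.from_bucket(
                artifacts,
                "example-handler.zip",  # hypothetical key
                object_version="example-version-id",  # hypothetical version
            ),
        )
```

Pinning `object_version` is what would tie redeployment to re-bundling: the synthesized template only changes when a new artifact version is uploaded and referenced, so routine `cdk synth`/`cdk deploy` runs would skip bundling entirely.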