Support for container images
Is it currently possible or is it planned to support custom containers?
One of the limitations that seems to come up again and again is the 250 MB limit on the Lambda package. This new feature seems to solve that (in addition to enabling other use cases)!
Yeah this is great! There are some additional requirements needed to make this work (ECR repo along with pushing updated images) so we'd need to figure out the best way to make this work without requiring a lot of management overhead. But I'm marking this as a feature request, and if anyone else would like this, please feel free to vote (:+1:) for this issue.
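As a rough illustration of the repository-management piece, a hedged boto3 sketch follows; the repository name is a placeholder and the image build/push would still happen separately via docker:
# Hedged sketch of the extra management step mentioned above: make sure an
# ECR repository exists for the app image. The repository name is a placeholder.
import boto3

def ensure_repository(name):
    ecr = boto3.client("ecr")
    try:
        ecr.create_repository(repositoryName=name)
    except ecr.exceptions.RepositoryAlreadyExistsException:
        pass  # already there; nothing to do
    repos = ecr.describe_repositories(repositoryNames=[name])
    return repos["repositories"][0]["repositoryUri"]

if __name__ == "__main__":
    print(ensure_repository("my-chalice-app"))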
I would be interested in helping out with this. A couple of things that should be considered:
- Configurable ECR: we should be able to pass in an existing repository, but have Chalice create one if a Dockerfile is present, or something along those lines.
- Permissions for running/configuring container workloads.
Also, how are people doing this currently? Terraform?
Is there any news on this? I am also struggling with the 250 MB limit.
I managed to patch chalice==1.23.0 to support Lambda container images; it should be compatible with the latest version of Chalice too. It works, replacing the 250 MB ZipFile limit with the 10 GB ImageUri limit. I'm not sure I'll find time to create a PR from this patch, so please either create the PR yourself, wait for someone else to do it, or just use the patch as is for now.
- lambda-image/Dockerfile:
FROM public.ecr.aws/lambda/python:3.8
# Sequence of commands creates layers of cache,
# so that updating app code does not rebuild requirements
RUN pip3 install pip==21.1.1 wheel==0.36.2
COPY requirements.txt ${LAMBDA_TASK_ROOT}/
RUN pip3 install -r requirements.txt
COPY app.py python ${LAMBDA_TASK_ROOT}/
CMD ["app.app"]
- lambda-image/awsclient-append.py:
# patch:

def _get_image(function_name):
    # You may return `None` to use `ZipFile` depending on `function_name`,
    # or per-project by `CHALICE_LAMBDA_IMAGE=YES chalice deploy --stage $STAGE`
    if os.environ.get("CHALICE_LAMBDA_IMAGE") != "YES":
        return None
    parts = function_name.split("-")
    if len(parts) == 2:
        app_name, stage = parts
        handler_name = "app"
    elif len(parts) == 3:
        app_name, stage, handler_name = parts
    else:
        app_name = stage = handler_name = None
    AWS_ACCOUNT_ID = os.environ["AWS_ACCOUNT_ID"]
    AWS_REGION = os.environ["AWS_REGION"]
    image_name = f"{app_name}-{stage}" if app_name and stage else function_name
    uri = f"{AWS_ACCOUNT_ID}.dkr.ecr.{AWS_REGION}.amazonaws.com/{image_name}:latest"
    cmd = [f"app.{handler_name}"] if handler_name else None
    return {"uri": uri, "cmd": cmd}


def _create_lambda_function(self, api_args):
    # type: (Dict[str, Any]) -> Tuple[str, str]
    image = _get_image(api_args["FunctionName"])
    if image:
        del api_args["Code"]["ZipFile"]
        del api_args["Handler"]
        del api_args["Runtime"]
        api_args["Code"]["ImageUri"] = image["uri"]
        api_args["PackageType"] = "Image"
        if image["cmd"]:
            api_args["ImageConfig"] = {"Command": image["cmd"]}
    try:
        result = self._call_client_method_with_retries(
            self._client("lambda").create_function,
            api_args,
            max_attempts=self.LAMBDA_CREATE_ATTEMPTS,
        )
        return result["FunctionArn"], result["State"]
    except _REMOTE_CALL_ERRORS as e:
        context = LambdaErrorContext(
            api_args["FunctionName"],
            "create_function",
            0 if image else len(api_args["Code"]["ZipFile"]),
        )
        raise self._get_lambda_code_deployment_error(e, context)


def _update_function_code(self, function_name, zip_contents):
    # type: (str, str) -> Dict[str, Any]
    image = _get_image(function_name)
    api_args = {"ImageUri": image["uri"]} if image else {"ZipFile": zip_contents}
    lambda_client = self._client("lambda")
    try:
        result = lambda_client.update_function_code(
            FunctionName=function_name, **api_args
        )
    except _REMOTE_CALL_ERRORS as e:
        context = LambdaErrorContext(
            function_name,
            "update_function_code",
            0 if image else len(zip_contents),
        )
        raise self._get_lambda_code_deployment_error(e, context)
    if result["LastUpdateStatus"] != "Successful":
        self._wait_for_function_update(function_name)
    return result


def _do_update_function_config(self, function_name, kwargs):
    # type: (str, Dict[str, Any]) -> None
    if _get_image(function_name):
        del kwargs["Runtime"]
    _old_do_update_function_config(self, function_name, kwargs)


_old_do_update_function_config = TypedAWSClient._do_update_function_config
TypedAWSClient._create_lambda_function = _create_lambda_function
TypedAWSClient._update_function_code = _update_function_code
TypedAWSClient._do_update_function_config = _do_update_function_config
- Before other imports in app.py:
#
# fix import
#
# See https://github.com/aws/aws-lambda-base-images/issues/8
# resulting in /var/lang/lib/python3.8/site-packages/requests/__init__.py:89:
# RequestsDependencyWarning: urllib3 (1.26.2) or chardet (3.0.4)
# doesn't match a supported version!
import sys

import pkg_resources

site_packages = "/var/lang/lib/python3.8/site-packages"
try:
    sys.path.remove(site_packages)
except ValueError:
    pass
else:
    sys.path.insert(0, site_packages)
    for dist in pkg_resources.find_distributions(site_packages, True):
        pkg_resources.working_set.add(dist, site_packages, False, replace=True)
- If you need a writable directory:
DATA_DIR = "/tmp" if ENV == LOCAL else os.environ.get("LAMBDA_TASK_ROOT", "/var/task")
- Relevant part of the deploy script:
#!/bin/bash
set -euf
IFS=$'\n\t'
# ...
IMAGE_NAME=${PROJECT_ID}_chalice-$ENV
echo "Building lambda image $IMAGE_NAME ..."
CHALICE_PACKAGE_DIR=$(python -c 'import chalice; print(chalice.__path__[0])')
CHALICE_APP_DIR=$(pwd)
IMAGE_DIR=$CHALICE_APP_DIR/lambda-image
IMAGE_BUILD_DIR=$IMAGE_DIR/build
rm -Rf $IMAGE_BUILD_DIR
mkdir $IMAGE_BUILD_DIR
cp $IMAGE_DIR/Dockerfile $IMAGE_BUILD_DIR/
pushd $IMAGE_BUILD_DIR
mkdir -p python/chalice
cp $CHALICE_PACKAGE_DIR/{__init__.py,app.py} python/chalice/
cp -R $CHALICE_APP_DIR/chalicelib python/
cp -R $CHALICE_APP_DIR/{app.py,requirements.txt} ./
docker build -t $IMAGE_NAME .
popd
echo "Reading ECR repo $IMAGE_NAME ..."
aws ecr describe-repositories --repository-names $IMAGE_NAME &>/dev/null || {
echo "Creating ECR repo $IMAGE_NAME ..."
aws ecr create-repository --repository-name $IMAGE_NAME >/dev/null
}
echo "Refreshing ECR token of docker..."
ECR_DOMAIN=$AWS_ACCOUNT_ID.dkr.ecr.$AWS_REGION.amazonaws.com
aws ecr get-login-password | docker login --username AWS --password-stdin $ECR_DOMAIN
echo "Tagging and pushing the image..."
IMAGE_TAG=$IMAGE_NAME:latest
ECR_TAG=$ECR_DOMAIN/$IMAGE_TAG
docker tag $IMAGE_TAG $ECR_TAG
docker push $ECR_TAG
echo "Reading Chalice lambda-image patch..."
FILE_NAME_TO_PATCH=$CHALICE_PACKAGE_DIR/awsclient.py
grep -Fq 'TypedAWSClient._create_lambda_function' $FILE_NAME_TO_PATCH || {
echo "Patching $FILE_NAME_TO_PATCH ..."
cat lambda-image/awsclient-append.py >> $FILE_NAME_TO_PATCH
}
echo "Temporary updating requirements.txt ..."
mv requirements.txt requirements-backup.txt
# We just need minimal requirements to satisfy Chalice that is importing app on build:
grep -E "boto3|sentry" requirements-backup.txt > requirements.txt
# `try-finally` to always restore from `requirements-backup.txt`:
DEPLOY_FAILED=NO
echo "Deploying Chalice to ENV=$ENV..."
CHALICE_LAMBDA_IMAGE=YES chalice deploy --stage $ENV || DEPLOY_FAILED=YES
echo "Restoring requirements.txt ..."
mv requirements-backup.txt requirements.txt
rm -Rf $IMAGE_BUILD_DIR
if [[ $DEPLOY_FAILED == YES ]]; then exit 1; fi
- Lambdas created as ZipFile cannot be switched to ImageUri, so before running the deploy script above, please delete the old lambdas (a small helper sketch for finding them follows below) from:
  - the "Lambda triggers" tab of each SQS queue, etc.
  - the Lambda functions list
  - resources in .chalice/deployed/$ENV.json
- Make sure automatic_layer is disabled or absent in .chalice/config.json, as it is neither compatible with nor needed for Lambda container images.
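A minimal sketch of such a helper, assuming boto3 credentials are configured and the usual layout of .chalice/deployed/<stage>.json; it only lists the Zip-packaged functions recorded for a stage and deletes nothing:
# Hedged helper sketch: list the Zip-packaged functions recorded in
# .chalice/deployed/<stage>.json so you know what to delete before the
# first image-based deploy. It deletes nothing by itself.
import json
import sys

import boto3


def zip_packaged_functions(stage):
    with open(f".chalice/deployed/{stage}.json") as f:
        deployed = json.load(f)
    lambda_client = boto3.client("lambda")
    for resource in deployed.get("resources", []):
        if resource.get("resource_type") != "lambda_function":
            continue
        config = lambda_client.get_function(
            FunctionName=resource["lambda_arn"])["Configuration"]
        if config.get("PackageType", "Zip") == "Zip":
            yield config["FunctionName"]


if __name__ == "__main__":
    stage = sys.argv[1] if len(sys.argv) > 1 else "dev"
    for name in zip_packaged_functions(stage):
        print("Delete before image deploy:", name)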
Still a newbie with AWS's SAM, but I'm leaning toward using it for deploying Chalice containers. Obviously official support for containers from Chalice itself would be preferred, but until then this approach doesn't seem too bad. Here's a minimal example for deploying Chalice containers via SAM, which I like for its local container testing, its management of ECR, and for patching config files rather than source code:
chalice new-project hello-world && cd hello-world
# Dockerfile for sam build
cat << EOF > ./Dockerfile
FROM public.ecr.aws/lambda/python:3.8
COPY requirements.txt ./
RUN python3.8 -m pip install -r requirements.txt -t .
COPY deployment/ ./
CMD ["app.app"]
EOF
# generate a deployment folder; its contents copied to docker image
chalice package .
rm -r deployment
mkdir deployment
unzip deployment.zip -d deployment
# patch the sam.json to use image-based fields rather than zipfile-based fields
# in the api handler lambda
cat sam.json | jq '
del(.Resources.APIHandler.Properties.Runtime)
| del(.Resources.APIHandler.Properties.Handler)
| del(.Resources.APIHandler.Properties.CodeUri)
| .Resources.APIHandler.Properties += {PackageType : "Image"}
| .Resources.APIHandler.Metadata.Dockerfile += "Dockerfile"
| .Resources.APIHandler.Metadata.DockerContext += "."
| .Resources.APIHandler.Metadata.DockerTag += "python3.8-v1"' > sam.img.json
sam validate -t sam.img.json
sam build -t sam.img.json
sam local start-api # verify behavior locally
sam deploy --guided
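For a quick local check once `sam local start-api` is running, here is a tiny hedged smoke test; it assumes the default port 3000 and the default "/" route that `chalice new-project` generates:
# Minimal local smoke test against `sam local start-api` (default port 3000).
# Assumes the default "/" route from `chalice new-project`.
import json
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:3000/") as resp:
    body = json.load(resp)

print(body)  # expected: {'hello': 'world'}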
@jamesls looks like container support is about ready. What else is needed?
@jamesls Any idea how long this feature is going to take to be released? I can't count the number of times I've had to migrate a Chalice lambda to something else just because I needed a native lib, or was using pandas and then needed a Docker image due to size. Please, please, please, we need this feature!
(Quoting @areeves87's SAM-based example above.)
You missed Resources.APIHandler.Properties.ImageUri
@r0lodex ImageUri gets ignored if you're using the Metadata fields like I am. The Metadata specifies the Dockerfile location and the image gets built locally; SAM somehow maintains a reference to the built image and deploys it to ECR.
If you want to use ImageUri to specify the image directly instead, make sure to remove the Metadata fields, since they take precedence over ImageUri.
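As a hedged illustration of that second option, here is a small Python equivalent of the jq patch above that sets ImageUri instead of Metadata; the ECR URI is a placeholder, not from this thread:
# Hedged sketch: point APIHandler at an already-pushed image via ImageUri
# instead of letting `sam build` build it from Metadata. The ECR URI below
# is a placeholder; substitute your own account, region, and repository.
import json

with open("sam.json") as f:
    template = json.load(f)

handler = template["Resources"]["APIHandler"]
for key in ("Runtime", "Handler", "CodeUri"):
    handler["Properties"].pop(key, None)
handler["Properties"]["PackageType"] = "Image"
handler["Properties"]["ImageUri"] = (
    "123456789012.dkr.ecr.us-east-1.amazonaws.com/hello-world:latest"
)
# Metadata (Dockerfile/DockerContext/DockerTag) takes precedence over
# ImageUri, so make sure it is not present.
handler.pop("Metadata", None)

with open("sam.img.json", "w") as f:
    json.dump(template, f, indent=2)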
Any updates on this issue?
Here's an update to @areeves87's POC:
cat sam.json | jq '
del(.Resources.APIHandler.Properties.Runtime)
| del(.Resources.APIHandler.Properties.Handler)
| del(.Resources.APIHandler.Properties.CodeUri)
| .Resources.APIHandler.Properties += {PackageType : "Image"}
| .Resources.APIHandler.Metadata.Dockerfile += "Dockerfile"
| .Resources.APIHandler.Metadata.DockerContext += "."
| .Resources.APIHandler.Metadata.DockerTag += "python3.9-v1"
| .Resources.RestAPI.Properties.EndpointConfiguration = {"Type": .Resources.RestAPI.Properties.EndpointConfiguration}' > sam.img.json
Can you please advise whether support for custom containers will be coming soon? Thank you.
First of all, thanks for building such a great and robust project. I am just adding a comment here to remind us how important this feature request is.
Adding to the chorus here. I just kicked off a Chalice project that is dead before it started due to this package size limit. There's nothing technically holding this back, and there's an obvious need.
I have a working solution that publishes the Chalice app in a container using SAM. I will confirm with my employer, after our code review today, whether I am clear to share the Python source, but the process is straightforward (a rough Python sketch of the steps follows below):
- Run "chalice package /path/to/dist".
- Unpack the deployment.zip into its own folder (let's call it /path/to/dist/docker-build) and add a 3-line Dockerfile FROM the Python 3.8 Lambda base image, copying the current directory into the LAMBDA_TASK_ROOT, with CMD ["app.app"], as posted earlier in this thread.
- Change the APIHandler part of the sam.json, removing the Handler, CodeUri, and Runtime, and adding a PackageType of "Image" with Metadata for the DockerTag, Dockerfile, and DockerContext (the path to your unpacked deployment.zip with its Dockerfile).
- Run "sam build -t sam.json".
- Run "sam deploy --resolve-image-repos --resolve-s3" (the template.yml that "sam build" generates is in a .aws-sam folder).
I packaged that process into a single-file Python module that I'll share in a Gist if I'm cleared, but regardless, I was able to implement this between making that comment yesterday and writing this now. Less than 24 hours. Your team could use that information to do the same thing, I imagine, and you'd make a lot of your customer developers much happier. This framework is AWESOME and I hate to see it held back by a self-imposed handicap. I'm going to roll with this solution, but I'd be much happier to see it merged into the main project.
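A rough, hedged sketch of those five steps (this is not the module mentioned above; the stack name, image tag, and directory layout are assumptions):
# Hedged sketch of the five steps above. Stack name, tag, and paths are examples.
import json
import os
import subprocess
import zipfile

DIST = "dist"
BUILD = os.path.join(DIST, "docker-build")

# 1. Package the app with Chalice (produces deployment.zip and sam.json).
subprocess.run(["chalice", "package", DIST], check=True)

# 2. Unpack deployment.zip and drop in a 3-line Dockerfile.
os.makedirs(BUILD, exist_ok=True)
with zipfile.ZipFile(os.path.join(DIST, "deployment.zip")) as z:
    z.extractall(BUILD)
with open(os.path.join(BUILD, "Dockerfile"), "w") as f:
    f.write(
        "FROM public.ecr.aws/lambda/python:3.8\n"
        "COPY . ${LAMBDA_TASK_ROOT}\n"
        'CMD ["app.app"]\n'
    )

# 3. Patch APIHandler: drop zip-based fields, switch to an image package type.
sam_path = os.path.join(DIST, "sam.json")
with open(sam_path) as f:
    template = json.load(f)
handler = template["Resources"]["APIHandler"]
for key in ("Handler", "CodeUri", "Runtime"):
    handler["Properties"].pop(key, None)
handler["Properties"]["PackageType"] = "Image"
handler["Metadata"] = {
    "DockerTag": "python3.8-v1",
    "Dockerfile": "Dockerfile",
    "DockerContext": "docker-build",  # resolved relative to the template file
}
with open(sam_path, "w") as f:
    json.dump(template, f, indent=2)

# 4. Build the image via SAM, then 5. deploy (stack name and capabilities
# are assumed here; the original steps relied on existing SAM config).
subprocess.run(["sam", "build", "-t", sam_path], check=True)
subprocess.run(
    ["sam", "deploy", "--resolve-image-repos", "--resolve-s3",
     "--stack-name", "chalice-container-app", "--capabilities", "CAPABILITY_IAM"],
    check=True,
)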
That's awesome! Do a PR please. Looking forward. Hopefully it's a green light from your employer.
Greetings @r0lodex and others who may happen upon this thread. I received approval from my employer to share a Gist of the deployer I've been developing and using; the link is below. For now, I'm using it as a stand-alone script. You'd mentioned making a PR: I'm happy to package it as an addition to the Chalice CLI using the same process as the deployer I have here, or, since I've started looking into the Python code that performs the deployment, I could look at wiring this functionality directly into the product. That would be a larger and longer-running commitment, but I'm open to discussing taking it on if there's a need and there isn't already a feature under development.
https://gist.github.com/RogerWebb/99e93ae29bbe36e612ed9da62c62e54f
You're the best. Will try this in my next one coming up. My existing projects are already too deep; I wouldn't risk it for now. I'll give feedback and contribute as best I can.
Keep it up! Cheers!
Hi, this would be amazing for using Chalice to deploy machine learning APIs - any word on whether this was going to be officially supported?
+1 on this being very helpful
I've had several cases recently where this would have been incredibly helpful. Would love to see this supported, especially since container images have been a Lambda feature for a few years now.
Earlier this year, we discovered that there is a way to deploy Chalice as container-based functions relatively easily via CDK, and I will post the work we have put together once I get time.
Basically, the Chalice CDK stack creates a SAM template; we can mimic the same process and change the function definitions to use an image URI. Then we just need to add an ECR configuration in CDK to tie up the loose ends.
import json
import os
import uuid
from typing import List, Tuple, Dict, Optional, Any

from aws_cdk import (
    aws_ecr as ecr,
    aws_s3 as s3,
    cloudformation_include,
    aws_iam as iam,
    aws_lambda as lambda_,
)

try:
    from aws_cdk.core import Construct
    from aws_cdk import core as cdk
except ImportError:
    import aws_cdk as cdk
    from constructs import Construct

from chalice import api


class ChaliceDocker(Construct):
    """Chalice construct for CDK.

    Packages the application into AWS SAM format and imports the resulting
    template into the construct tree under the provided ``scope``. It will use
    the referenced docker image from ECR to configure the lambda functions.
    """

    # pylint: disable=redefined-builtin
    # The 'id' parameter name is CDK convention.
    def __init__(self,
                 scope,  # type: Construct
                 id,  # type: str
                 source_dir,  # type: str
                 ecr_repo,  # type: ecr.Repository
                 stage_config=None,  # type: Optional[Dict[str, Any]]
                 preserve_logical_ids=True,  # type: bool
                 **kwargs  # type: Dict[str, Any]
                 ):
        # type: (...) -> None
        """Initialize Chalice construct.

        :param str source_dir: Path to Chalice application source code.
        :param dict stage_config: Chalice stage configuration.
            The configuration object should have the same structure as Chalice
            JSON stage configuration.
        :param bool preserve_logical_ids: Whether the resources should have
            the same logical IDs in the resulting CDK template as they did in
            the original CloudFormation template file. If you're vending a
            Construct using cdk-chalice, make sure to pass this as ``False``.
            Note: regardless of whether this option is true or false, the
            :attr:`sam_template`'s ``get_resource`` and related methods always
            uses the original logical ID of the resource/element, as specified
            in the template file.
        :raises `ChaliceError`: Error packaging the Chalice application.
        """
        super(ChaliceDocker, self).__init__(scope, id, **kwargs)

        #: (:class:`str`) Path to Chalice application source code.
        self.source_dir = os.path.abspath(source_dir)
        self.ecr_repo = ecr_repo

        #: (:class:`str`) Chalice stage name.
        #: It is automatically assigned the encompassing CDK ``scope``'s name.
        self.stage_name = scope.to_string()

        #: (:class:`dict`) Chalice stage configuration.
        #: The object has the same structure as Chalice JSON stage
        #: configuration.
        self.stage_config = stage_config
        print(self.stage_config)

        chalice_out_dir = os.path.join(os.getcwd(), 'chalice.out')
        package_id = uuid.uuid4().hex
        self._sam_package_dir = os.path.join(chalice_out_dir, package_id)

        self._package_app()
        sam_template_filename = self._generate_sam_template_with_assets(
            chalice_out_dir, package_id)

        #: (:class:`aws_cdk.cloudformation_include.CfnInclude`) AWS SAM
        #: template updated with AWS CDK values where applicable. Can be
        #: used to reference, access, and customize resources generated
        #: by the `chalice package` command as CDK native objects.
        self.sam_template = cloudformation_include.CfnInclude(
            self, 'ChaliceApp', template_file=sam_template_filename,
            preserve_logical_ids=preserve_logical_ids)

        self._function_cache = {}  # type: Dict[str, lambda_.IFunction]
        self._role_cache = {}  # type: Dict[str, iam.IRole]

    def _package_app(self):
        # type: () -> None
        api.package_app(
            project_dir=self.source_dir,
            output_dir=self._sam_package_dir,
            stage=self.stage_name,
            chalice_config=self.stage_config,
        )

    def _generate_sam_template_with_assets(self, chalice_out_dir, package_id):
        # type: (str, str) -> str
        sam_template_path = os.path.join(self._sam_package_dir, 'sam.json')
        sam_template_with_assets_path = os.path.join(
            chalice_out_dir, '%s.sam_with_assets.json' % package_id)

        with open(sam_template_path) as sam_template_file:
            sam_template = json.load(sam_template_file)

        for function_logical_id, function in self._filter_resources(
                sam_template, 'AWS::Serverless::Function'):
            cmd = function['Properties']['Handler']
            del function['Properties']['Runtime']
            del function['Properties']['CodeUri']
            del function['Properties']['Handler']
            function['Properties']['PackageType'] = 'Image'
            function['Properties']['ImageUri'] = self.ecr_repo.repository_uri + ':latest'
            function['Properties']['ImageConfig'] = {
                'Command': [cmd],
            }
            if function_logical_id != 'APIHandler':
                # make sure the function has an output
                sam_template['Outputs'][f'{function_logical_id}Name'] = {
                    'Value': {
                        'Ref': function_logical_id
                    }
                }
                sam_template['Outputs'][f'{function_logical_id}Arn'] = {
                    'Value': {
                        'Fn::GetAtt': [function_logical_id, 'Arn']
                    }
                }

        with open(sam_template_with_assets_path, 'w') as f:
            f.write(json.dumps(sam_template, indent=2))

        return sam_template_with_assets_path

    def _filter_resources(self, template, resource_type):
        # type: (Dict[str, Any], str) -> List[Tuple[str, Dict[str, Any]]]
        return [(key, value) for key, value in template['Resources'].items()
                if value['Type'] == resource_type]

    def get_resource(self, resource_name):
        # type: (str) -> cdk.core.CfnResource
        return self.sam_template.get_resource(resource_name)

    def get_role(self, role_name):
        # type: (str) -> iam.IRole
        if role_name not in self._role_cache:
            cfn_role = self.sam_template.get_resource(role_name)
            # Pylint is incorrectly identifying this as a static method call
            # but it's actually decorated as a @builtins.classmethod method.
            # pylint: disable=no-value-for-parameter
            role = iam.Role.from_role_arn(self, role_name, cfn_role.attr_arn)
            self._role_cache[role_name] = role
        return self._role_cache[role_name]

    def get_function(self, function_name):
        # type: (str) -> lambda_.IFunction
        if function_name not in self._function_cache:
            cfn_lambda = self.sam_template.get_resource(function_name)
            arn_ref = cfn_lambda.get_att('Arn')
            # Pylint is incorrectly identifying this as a static method call
            # but it's actually decorated as a @builtins.classmethod method.
            # pylint: disable=no-value-for-parameter
            function = lambda_.Function.from_function_arn(
                self, function_name, arn_ref.to_string())
            self._function_cache[function_name] = function
        return self._function_cache[function_name]

    def add_environment_variable(self, key, value, function_name):
        # type: (str, str, str) -> None
        cfn_function = self.sam_template.get_resource(function_name)
        cfn_function.add_override(
            'Properties.Environment.Variables.%s' % key, value)
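As a hedged usage sketch (the module name "chalice_docker", the repository name, and the source path are assumptions; it presumes CDK v2 and that the image has already been pushed to the repository under the ":latest" tag), wiring the construct into a stack could look like this:
# Hedged usage sketch for the ChaliceDocker construct above.
import os

import aws_cdk as cdk
from aws_cdk import aws_ecr as ecr
from constructs import Construct

from chalice_docker import ChaliceDocker  # wherever the construct above lives


class WebApiStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Reference an existing ECR repository that CI pushes the image to.
        repo = ecr.Repository.from_repository_name(
            self, "AppRepo", "my-chalice-app")
        ChaliceDocker(
            self,
            "ChaliceApp",
            source_dir=os.path.join(os.path.dirname(__file__), "..", "runtime"),
            ecr_repo=repo,
        )


app = cdk.App()
# The stack id doubles as the Chalice stage name via scope.to_string().
WebApiStack(app, "dev")
app.synth()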
@lhr0909 Thanks for this! Is there a plan to release a new version of chalice? It would be really nice to get container images to work within chalice, without having to use sam.
@thushw I don't work for AWS, so I cannot speak for them in terms of support. As for the use of SAM, I believe a vanilla Chalice deployment also relies on SAM, and the CDK template they provide just gives more control points over the generated SAM template. I simply extended the Chalice CDK stack definition from the official Chalice SDK to support container images.
On the other hand, there are a few more scripts we needed to add to the deployment process, so when we open-source it, I don't think we will introduce it into the main Chalice repo, but rather as a standalone template you can fork to set up a CDK-based Chalice deployment with container image support.
Thanks for the context there, @lhr0909; I started using SAM for the moment, and if Chalice uses SAM, perhaps it makes sense that I stick with that. I'm new to lambdas in general, and the first framework I wanted to explore was Chalice, especially as it was built for Python. :)
I put this SAM system into a module a while back. If you find it useful, feel free to have at it.
https://gist.github.com/RogerWebb/99e93ae29bbe36e612ed9da62c62e54f
I would've loved to use this solution, but unfortunately it doesn't work with the @app.on_s3_event decorators that we use. Here is the specific error message that I get:
NotImplementedError: Unable to package chalice apps that @app.on_s3_event decorator. CloudFormation does not support modifying the event notifications of existing buckets. You can deploy this app using chalice deploy