How to call SCAR-created functions from other Lambda functions (i.e. not CLI)
In general, how can I call a SCAR deployed function with another Lambda function while passing in parameters and getting a result back?
My specific use case:
Normally, when you build a Python deployment package with dependencies, you create the package inside a Docker container (i.e. install the dependencies, COPY lambda_function.py in, and zip it all up), then docker cp the zipped deployment package out of the container and upload it to AWS. This would normally be done on a virtual machine.
I'm trying to use Lambda for all my computation, including generating the deployment packages. So I'm using SCAR to allow me to run the Docker container on AWS Lambda.
I've created a Dockerfile that generates a deployment package for a new Python Lambda function. (That function does some performance testing on the ciso8601 Python library)
Dockerfile:
FROM amazonlinux:latest
WORKDIR /app
# Install Python 3.6 + YUM dependencies
RUN yum install -y https://centos6.iuscommunity.org/ius-release.rpm && \
    yum install -y gcc zip python36u python36u-devel python36u-pip && \
    pip3.6 install -U pip
# Pre-install dependencies into the package directory
RUN mkdir -p package && \
    pip3.6 install pytz -t package/
COPY lambda_function.py /app/package/lambda_function.py
# Install test script
COPY create_package.py /app/create_package.py
CMD python3.6 create_package.py
create_package.py:
import base64
import json
import os
import subprocess


def create_package(library):
    # Install the requested library into the package directory
    subprocess.run(["pip", "install", library, "-t", "package"], check=True)
    cwd = os.getcwd()
    os.chdir("/app/package")
    # Zip the package contents and return them Base64-encoded as JSON
    subprocess.run(["zip", "-r", "deployment_package.zip", "."], check=True)
    with open("deployment_package.zip", 'rb') as fin:
        zip_file_bytes = fin.read()
    payload = base64.encodebytes(zip_file_bytes)
    os.chdir(cwd)
    return json.dumps({"payload": payload.decode("utf-8")})


if __name__ == '__main__':
    create_package("ciso8601")
I pushed the image to Docker Hub and ran scar init to create the Lambda function.
In the AWS Web UI, I can run the function and it seems to generate the deployment package successfully.
Now, I have some questions about I/O. I want to call this function from another Lambda function (which will take the resulting zip file and deploy it as a Lambda function). How can I return the deployment package I generated to the caller as the response of the SCAR function?
Ideally, I could output the Base64 encoded deployment package as part of a JSON response:
{
    "payload": "UEsDBAoAAAAAAPh4eUwAAAAAAAAAAAAAAAAMABwAYXBwL3BhY2thZ2UvVVQJA..."
}
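For context, here is a rough sketch of what I would want the calling Lambda function to look like; the function name and request payload shape are just placeholders, since I don't know yet how SCAR exposes this:
import base64
import json

import boto3

lambda_client = boto3.client("lambda")


def handler(event, context):
    # Synchronously invoke the SCAR-created packaging function
    # (the function name and request payload here are hypothetical)
    response = lambda_client.invoke(
        FunctionName="scar-package-builder",
        InvocationType="RequestResponse",
        Payload=json.dumps({"library": "ciso8601"}),
    )
    result = json.loads(response["Payload"].read())
    # Decode the Base64 payload back into the zipped deployment package
    zip_bytes = base64.b64decode(result["payload"])
    with open("/tmp/deployment_package.zip", "wb") as fout:
        fout.write(zip_bytes)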
Secondly, I would like to parameterize the library that gets installed in the create_package.py script (so I can generate deployment packages for libraries other than ciso8601). I guess this would require passing the JSON given to the Lambda invocation through to the Dockerfile's CMD line.
I know you seem to support both parameters and JSON output through the scar run command line. Are there comparable methods for doing this via a call from another Lambda function?
Hi, unfortunately SCAR doesn't allow launching a Lambda function from another Lambda function (although we are working on it).
Another way to do what you want with SCAR would be to use the event-driven file-processing model. For that you also need an S3 bucket.
Instead of defining Python code, I would use a bash script (something like this, py-package.sh):
#!/bin/bash
OUTPUT_DIR="/tmp/$REQUEST_ID/output"
# The uploaded file contains the name of the library to package
PY_PACKAGE=`cat $SCAR_INPUT_FILE`
echo "SCRIPT: Packaging the $PY_PACKAGE library"
pip install "$PY_PACKAGE" -t package
cd /app/package && zip -r "$PY_PACKAGE"_deployment_package.zip .
mv /app/package/"$PY_PACKAGE"_deployment_package.zip $OUTPUT_DIR/
This way, when you upload a file with the package name inside it to the input/ folder of the connected S3 bucket, the function will automatically create the package and upload it to the output/ folder of the bucket. This also solves the problem of the generic package definition: inside the file you can put the name of any package and it will be processed.
The Dockerfile could be simplified a little:
FROM amazonlinux:latest
WORKDIR /app
# Install Python 3.6 + YUM dependencies
RUN yum install -y https://centos6.iuscommunity.org/ius-release.rpm && \
    yum install -y gcc zip python36u python36u-devel python36u-pip && \
    pip3.6 install -U pip
# Pre-install dependencies into the package directory
RUN mkdir -p package && \
    pip3.6 install pytz -t package/
For the example that you propose, I would initialize the Lambda function with SCAR like this:
scar init -n py-packager -s py-package.sh -i repo/image -es your-s3-bucket
and then upload a file (for example echo ciso8601 > py-test) to the input/ folder of the bucket your-s3-bucket. After the Lambda function packages the library, you can collect the zip file at the output/ciso8601_deployment_package.zip path.
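If you want to drive this from another Lambda function, a minimal sketch with boto3 could look like the following; the bucket name and object keys just follow the example above, so adjust them to your setup:
import boto3

s3 = boto3.client("s3")
BUCKET = "your-s3-bucket"  # the bucket passed to scar init with -es

# Upload a trigger file whose contents name the library to package
s3.put_object(Bucket=BUCKET, Key="input/py-test", Body=b"ciso8601")

# ...once the packaging function has finished, fetch the generated zip
s3.download_file(BUCKET,
                 "output/ciso8601_deployment_package.zip",
                 "/tmp/ciso8601_deployment_package.zip")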
Finally, you can create a Lambda function linked to the output/ folder, so that when a zip file is created another function is launched (we are also working on allowing this with SCAR).
Hope this can help you a little.
Regards
Thank you for such a detailed and customized response. I'll give that a shot. Nice to hear that a more direct way of interfacing is in the works.
Hi, in PR #220 I added a parameter that allows two Lambda functions to communicate.
Now, inside the container you have an environment variable called 'LAMBDA_OUTPUT_FILE' that points to a file that will be passed to the second function as the event, so you only have to write the values that you want to pass into that file.
The initialization of this function could be:
scar init -n py-packager -lo manage-pkg -i repo/image
and inside your code something like:
with open(os.environ['LAMBDA_OUTPUT_FILE'], 'w') as fout:
    fout.write(json.dumps({"payload": payload.decode("utf-8")}))
Keep in mind that the function manage-pkg must already be defined.
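A minimal sketch of what the manage-pkg handler could look like, assuming the JSON written to LAMBDA_OUTPUT_FILE arrives as the event and carries the Base64 payload from the earlier example:
import base64


def lambda_handler(event, context):
    # 'event' is the JSON that the first function wrote to LAMBDA_OUTPUT_FILE
    zip_bytes = base64.b64decode(event["payload"])
    with open("/tmp/deployment_package.zip", "wb") as fout:
        fout.write(zip_bytes)
    # ...from here you could deploy the package, e.g. with boto3's Lambda client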