Allow Input Sandbox Files
I hit the 24-KB limit when defining a script to be executed in AWS Batch.
We could work around this by supporting an input sandbox of files, specified in the YAML file, that is automatically uploaded to a specific folder in the S3 bucket and automatically retrieved by the SCAR supervisor before executing the script.
Off the top of my head:
```yaml
functions:
  my-function:
    image: org/repo
    memory: 128
    execution_mode: batch
    s3:
      input_bucket: my-bucket
      init_script: my-script
      input_sandbox:
        - /path/to/local/file-or-folder
```
This would upload the files in /path/to/local/file-or-folder
into s3://my-bucket/my-function/sandbox
and define an environment variable, when creating the Lambda function, to indicate that this folder exists.
The supervisor would check this variable (which should be made available to the AWS Batch job as well) and retrieve the contents of that S3 folder into the local file system, so that the script running in the container can access these files. The local directory in the running container into which the files were retrieved should be exposed to the script through an environment variable such as $SCAR_INPUT_SANDBOX, or the files could simply be made available in the same local directory in which the script runs.
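The supervisor-side retrieval could look roughly like this sketch. Again, the function names and the use of `SCAR_INPUT_SANDBOX` are assumptions for illustration; only the `boto3` paginator and `download_file` calls are standard API.

```python
import os


def key_to_local_path(key, prefix, dest_dir):
    """Translate an S3 key under the sandbox prefix into a path inside
    dest_dir, preserving the relative directory structure."""
    rel = key[len(prefix):].lstrip("/")
    return os.path.join(dest_dir, *rel.split("/"))


def retrieve_sandbox(s3_client, bucket, prefix, dest_dir):
    """Download every object under s3://bucket/prefix into dest_dir and
    expose the location via SCAR_INPUT_SANDBOX (hypothetical name)."""
    paginator = s3_client.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            local_path = key_to_local_path(obj["Key"], prefix, dest_dir)
            os.makedirs(os.path.dirname(local_path), exist_ok=True)
            s3_client.download_file(bucket, obj["Key"], local_path)
    os.environ["SCAR_INPUT_SANDBOX"] = dest_dir
```

Since the same supervisor runs inside both the Lambda container and the AWS Batch job, one retrieval routine would cover both execution modes.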