
Enhancement: Flag to store state in S3

dexhorthy opened this issue 5 years ago · 2 comments

Overview

Right now, state.json is stored on the filesystem by default (at .ship/state.json). We also have an option to store state remotely in a Kubernetes Secret. I'd like to propose that we discuss what it would take to support storing state remotely in an S3 bucket.

Proposal

We already support state storage in a Kubernetes Secret for the init, edit, update, and watch commands:

ship init ${UPSTREAM} \
    --state-from=secret \
    --secret-namespace=default \
    --secret-name=shipstate \
    --secret-key=state.json

Similarly, I'd like to propose adding the following flags for state storage in a remote S3 bucket:

ship init ${UPSTREAM} \
    --state-from=s3 \
    --s3-bucket=shipapps \
    --s3-key=/some-app/state.json

Implementation questions

  • unless it means a lot of extra work, we should probably ensure this works with any S3-compatible blob storage backend (what additional flags do we need to support this?)
  • open question: how many writes to S3 will we be doing? Will there be a need to optimize this to batch updates? (My guess is no)
  • open question: we should probably load default AWS creds from the env, but do we also need to support overriding them with ship flags?

dexhorthy avatar Jun 16 '19 18:06 dexhorthy

This is necessary for us, since our Terraform env setup steps create the k8s cluster; saving state in k8s isn't useful because it won't survive an environment reset.

gabesmed avatar Jun 17 '19 17:06 gabesmed

This was added in v0.47.0, with a more performant version included in v0.48.0. (Well, a general "upload to URL / download from URL" implementation; generating the presigned URLs is not yet handled by ship.)

I'll leave the issue open for now; while this covers many of the same use cases, it doesn't completely resolve them.

laverya avatar Jul 11 '19 00:07 laverya