for-aws
Manually created volume snapshots get deleted
Expected behavior
It should be possible to manually create snapshots of EBS volumes managed by CloudStor without them getting deleted over time.
Actual behavior
When a snapshot of an EBS volume created by CloudStor is taken manually, the snapshot is deleted after ~15 minutes.
Information
I tested the behaviour with templates based on 17.06.2-ce and 17.09.0-ce and the following compose setup:
version: '3'
services:
  db:
    image: postgres:9.4
    volumes:
      - cloudstor-ebs-volume:/var/lib/postgresql/data
    ports:
      - 5432:5432
    environment:
      - PGDATA=/var/lib/postgresql/data/db-files/
    deploy:
      placement:
        constraints: [node.role == worker]

volumes:
  cloudstor-ebs-volume:
    driver: cloudstor:aws
    driver_opts:
      backing: relocatable
      size: 4
      ebstype: gp2
I dug a bit deeper and found the following CloudTrail event for the snapshot's deletion, which was issued by one of the worker nodes.
{
    "eventVersion": "1.05",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AROAI4YFOR4EWVHKYJ3YW:i-0d1c416fde2da3248",
        "arn": "arn:aws:sts::286248583856:assumed-role/cloudstorDeleteTest-WorkerRole-EI5JVZ92E06D/i-0d1c416fde2da3248",
        "accountId": "286248583856",
        "accessKeyId": "xxx",
        "sessionContext": {
            "attributes": {
                "mfaAuthenticated": "false",
                "creationDate": "2017-11-23T09:06:17Z"
            },
            "sessionIssuer": {
                "type": "Role",
                "principalId": "AROAI4YFOR4EWVHKYJ3YW",
                "arn": "arn:aws:iam::286248583856:role/cloudstorDeleteTest-WorkerRole-EI5JVZ92E06D",
                "accountId": "286248583856",
                "userName": "cloudstorDeleteTest-WorkerRole-EI5JVZ92E06D"
            }
        }
    },
    "eventTime": "2017-11-23T11:39:31Z",
    "eventSource": "ec2.amazonaws.com",
    "eventName": "DeleteSnapshot",
    "awsRegion": "eu-west-1",
    "sourceIPAddress": "52.30.180.118",
    "userAgent": "aws-sdk-go/1.8.22 (go1.7.6; linux; amd64)",
    "requestParameters": {
        "snapshotId": "snap-0dae7cc12c7c7ab3b",
        "force": false
    },
    "responseElements": {
        "_return": true
    },
    "requestID": "41876da2-2222-43b7-be8f-96603ba3e8b3",
    "eventID": "527eecd8-41d6-407d-a567-d470ae12a16a",
    "eventType": "AwsApiCall",
    "recipientAccountId": "286248583856"
}
Steps to reproduce the behavior
- Deploy a fresh swarm from the CloudFormation template
- Deploy a new Docker stack based on the given compose file
- Create a manual snapshot of the EBS volume that cloudstor created, using the AWS Management Console or SDK (see the sketch after these steps)
- Wait ~15 minutes
- Cloudstor will have deleted the manual snapshot; only the snapshots cloudstor itself created 5 and 10 minutes ago remain.
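For reference, step 3 takes a single SDK call. A minimal boto3 sketch, with a placeholder volume ID (substitute the actual ID of the volume cloudstor created):

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Placeholder: the ID of the EBS volume that cloudstor created.
volume_id = "vol-0123456789abcdef0"

# Create the manual snapshot that cloudstor later deletes.
snapshot = ec2.create_snapshot(
    VolumeId=volume_id,
    Description="manual backup of cloudstor volume",
)
print(snapshot["SnapshotId"])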
Thanks for reporting this. To address your scenario, cloudstor can potentially tag all of its snapshots and only delete those during its periodic cleanups.
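To illustrate that idea, a tag-scoped cleanup might look like the following boto3 sketch. The CreatedBy=cloudstor marker tag is purely hypothetical; cloudstor does not set it today:

import boto3

ec2 = boto3.client("ec2")

# Hypothetical marker tag that cloudstor would put on its own snapshots.
CLOUDSTOR_FILTER = {"Name": "tag:CreatedBy", "Values": ["cloudstor"]}

# Delete only snapshots carrying the marker, leaving manual ones alone.
snapshots = ec2.describe_snapshots(
    OwnerIds=["self"], Filters=[CLOUDSTOR_FILTER]
)["Snapshots"]
for snap in snapshots:
    ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])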
I was curious about your use case: are you trying to come up with a backup procedure for the cloudstor
EBS volumes?
Thanks for looking into this. Tagging cloudstor snapshots does sound like the best way to go about this.
Yes, I am indeed evaluating possibilities to take periodic backups of our data volumes. I encountered the reported issue while testing a CloudWatch/Lambda-based approach to take daily snapshots of the cloudstor volumes.
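For context, a minimal sketch of what such a scheduled Lambda handler might look like (triggered by a CloudWatch Events schedule rule). The tag filter is an assumption about how the volumes to back up are identified, not part of the original setup:

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Assumption: the volumes to back up carry a Name tag 'my-cloudstor';
    # adjust the filter to however your cloudstor volumes are identified.
    volumes = ec2.describe_volumes(
        Filters=[{"Name": "tag:Name", "Values": ["my-cloudstor"]}]
    )["Volumes"]
    for volume in volumes:
        ec2.create_snapshot(
            VolumeId=volume["VolumeId"],
            Description="daily backup of " + volume["VolumeId"],
        )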
I am also curious about snapshotting / backing up volumes in general and on AWS specifically.
I found a related proposal for adding such a feature directly to docker engine.
There is also the rancher/convoy project, which provides another Docker volume plugin with different backends, but I guess it is not intended to work with cloudstor. Or do you know better?
@sfrese I am curious about your CloudWatch/Lambda-based approach for snapshotting volumes. Did it work out for you and have you shared it somewhere?
@ddebroy How can I configure cloudstor to tag the periodic snapshots and to only delete the tagged snapshots?
@ddebroy Is the tagging an existing feature? If yes, how would I configure it?
@therealppa You can add additional driver options in compose-file format, such as:

driver_opts:
  ebs_tag_Name: 'my-cloudstor'
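A quick way to see where that tag actually lands is to query both volumes and snapshots for it. A boto3 sketch, assuming the 'my-cloudstor' value from the example above:

import boto3

ec2 = boto3.client("ec2")

tag_filter = {"Name": "tag:Name", "Values": ["my-cloudstor"]}

# The volume created by cloudstor should show up here...
volumes = ec2.describe_volumes(Filters=[tag_filter])["Volumes"]
print("tagged volumes:", [v["VolumeId"] for v in volumes])

# ...but, as reported below, snapshots do not inherit the tag.
snapshots = ec2.describe_snapshots(
    OwnerIds=["self"], Filters=[tag_filter]
)["Snapshots"]
print("tagged snapshots:", [s["SnapshotId"] for s in snapshots])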
@FrenchBen I've added the ebs_tag_Name driver option during volume creation; however, when I create a manual snapshot of an existing cloudstor EBS volume, it still gets removed.
I'm using Docker4AWS 18.03-ce, the stock template variant for use with your own VPC.
@sfrese did you find a workaround for this issue? I'm facing the same scenario you did
ebs_tag_Name seems to affect only volume names, not snapshot names. Is there an option to get cloudstor-created snapshots tagged?
I guess the only workaround is to tune the AWS policies so that cloudstor is only allowed to delete snapshots without specific tags set.
Workaround:

{
    "Effect": "Allow",
    "Resource": "arn:aws:ec2:*:*:snapshot/*",
    "Action": [
        "ec2:DeleteSnapshot"
    ],
    "Condition": {
        "StringNotLike": {
            "ec2:ResourceTag/Name": "*"
        }
    }
}
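With this statement in the worker role's policy, the role may only delete snapshots that carry no Name tag at all, so a manual snapshot survives as long as it is given one. A boto3 sketch, with a placeholder snapshot ID:

import boto3

ec2 = boto3.client("ec2")

# Placeholder: the ID returned when the manual snapshot was created.
snapshot_id = "snap-0123456789abcdef0"

# Any Name tag exempts the snapshot from the StringNotLike condition
# above, so cloudstor's periodic cleanup can no longer delete it.
ec2.create_tags(
    Resources=[snapshot_id],
    Tags=[{"Key": "Name", "Value": "manual-backup"}],
)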
From what I can see, the only identifier cloudstor could use for snapshot cleanup is the volume ID. Obviously this cannot be changed for manually created snapshots to let them be omitted from cleanup.
I was looking to create nightly snapshots for recovery/DR purposes. For the moment I can try the workaround suggested by hryamzik. Has anyone tried this? I was just wondering whether it causes any issues with cloudstor when it fails to delete snapshots.
Adding the tag to the snapshots and using it for cleanup would be a nice feature to resolve this issue.
@akumadare I can confirm that the workaround suggested by hryamzik is working. I used a stricter pattern for the resource tag name, of course, so all other snapshots are still deleted.
Hi, thanks for the responses. I struggled to get the IAM policy right when testing with the AWS CLI: the wildcard in the condition didn't seem to work for me, and I had to set it to a concrete value in the end:
{
    "Effect": "Allow",
    "Action": [
        "ec2:DeleteSnapshot"
    ],
    "Resource": "arn:aws:ec2:*:*:snapshot/*",
    "Condition": {
        "StringNotEquals": {
            "ec2:ResourceTag/Protected": "true"
        }
    }
}
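Under this variant, only snapshots tagged Protected=true survive the cleanup. The tag can be applied at creation time; a boto3 sketch with a placeholder volume ID:

import boto3

ec2 = boto3.client("ec2")

# Create the snapshot and apply Protected=true in one call, so the
# StringNotEquals condition above exempts it from deletion.
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",  # placeholder
    Description="nightly DR snapshot",
    TagSpecifications=[{
        "ResourceType": "snapshot",
        "Tags": [{"Key": "Protected", "Value": "true"}],
    }],
)
print(snapshot["SnapshotId"])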
I have now created a Python 3.6 Lambda to create the snapshots; I can post the source code if anyone is interested.