
S3Download wipes C drive if target set as "/"

Open petertweed opened this issue 5 years ago • 8 comments

Description

I wish to download all items from a folder within an S3 bucket into the workspace on Windows. However, the job tried (mostly successfully) to delete everything on my hard disk, destroying the machine.

Pipeline code (bucket name is sanitised):

    withAWS(region: 'eu-west-2', credentials: 'MyCreds') {
        s3Download(
            file: '/',
            bucket: 'myBucket',
            path: 'product/',
            force: true)
    }

Log output:

[Pipeline] withAWS
Constructing AWS Credentials
Setting AWS region eu-west-2
[Pipeline] {
[Pipeline] s3Download
Downloading s3://myBucket/product/ to file:/C:/
[Pipeline] }
[Pipeline] // withAWS

Expected behaviour: I expected it to download (and overwrite) files in the workspace, which is C:\Program Files (x86)\Jenkins\workspace\Project-Windows\Product\DownloadProductS3. I didn't expect it to wipe everything!!!

Environment

Jenkins-Version: 2.204.1

Java-Version: 1.8

Plugin-Version: 1.39

Master/Slave Setup: Ran on master

petertweed · Feb 03 '20

+1 This command erased most of the files on a very important machine! Can you please fix it? :(

Command: s3Download(file: '/', bucket: S3_BUCKET, path: 'sample_folder', force: true)

Thanks

ronronshaiko · Mar 09 '20

The file parameter is an absolute file name on your disk if it starts with /. An interesting thing is that your Jenkins has the necessary rights to do this; I would have expected it to fail due to missing permissions.

I am not sure how to "fix" this.
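For reference, keeping the file target relative should confine the download to the job workspace; a minimal sketch, with placeholder bucket and folder names:

    // Sketch of the safer pattern: a relative 'file' target resolves under
    // the job workspace rather than the filesystem root.
    withAWS(region: 'eu-west-2', credentials: 'MyCreds') {
        s3Download(
            file: 'downloads/product/',  // relative, so it stays inside the workspace
            bucket: 'myBucket',
            path: 'product/',
            force: true)
    }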

hoegertn · Jul 30 '20

@hoegertn Hey, it happened to me (on Linux), but it only deleted the files it had permission to delete. I guess it's worth a warning.

NivardoX · Oct 21 '20

THIS PLUGIN IS ABSOLUTELY DANGEROUS TO USE!

Is this issue still not fixed?

    s3Download(file: "${params.projectId}/", bucket: env.S3_BUCKET, path: "${params.projectId}/", force: true)

Please be very, very careful!!!! This command erased half of my drive when I forgot to set ${params.projectId}, i.e. when the parameter value was "".

P.S. I lost some photos and even a whole workspace with tons of different configurations! Jesus Christ!
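In hindsight, a guard like the following would have refused to run; a sketch, assuming a string parameter named projectId as in my command above:

    // Fail fast when the parameter is empty, so the download target can
    // never collapse to "/".
    if (!params.projectId?.trim()) {
        error 'projectId is empty; refusing to run s3Download'
    }
    s3Download(file: "${params.projectId}/", bucket: env.S3_BUCKET,
               path: "${params.projectId}/", force: true)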

@hoegertn

RaiaN · Dec 13 '23

> The file parameter is an absolute file name on your disk if it starts with /. An interesting thing is that your Jenkins has the necessary rights to do this; I would have expected it to fail due to missing permissions.
>
> I am not sure how to "fix" this.

Maybe only allow relative paths? Or never go above the workspace directory?
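Something like this workspace-escape check would cover both ideas; a hypothetical Groovy sketch, not actual plugin code, where fileParam stands in for the step's file argument:

    // Resolve the requested target against the workspace and abort if it
    // escapes it. An absolute fileParam such as '/' fails the check.
    import java.nio.file.Paths

    def workspace = Paths.get(env.WORKSPACE).toAbsolutePath().normalize()
    def target = workspace.resolve(fileParam).normalize()
    if (!target.startsWith(workspace)) {
        error "s3Download target escapes the workspace: ${target}"
    }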

RaiaN · Dec 13 '23

Thanks for pointing out this issue with the s3Download command. I get why this is a big deal, especially with the data loss you faced. However, I've decided not to make changes to the plugin at this point, for a few reasons:

Avoiding Breaks for Others: Changing how s3Download works could break things for other users who haven't had any problems. I need to keep the plugin stable for everyone.

It's More About Jenkins and OS Settings: The problem is really about how Jenkins is set up and the permissions on your system, not just the plugin. It's super important to make sure your Jenkins config and OS permissions are locked down for what you need.

Better Docs, Not Code Changes: I think the key here is to make sure everyone knows how to use the plugin safely. I'll work on beefing up the docs to highlight these kinds of risks and how to avoid them.

Sorry for any trouble this decision might cause. I'm all for making a safe and useful tool, and I believe better info and careful use is the way to go here.

hoegertn · Dec 14 '23

@hoegertn Thank you for your reply. I agree that permissions must be set up.

However, I believe that force should act as an overwrite if and only if there is something to overwrite and it actually needs to do so. Currently, if the download fails for some reason (e.g. there is nothing to download, as in my case), the overwrite happens anyway!
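Until then, a workaround is to check that the prefix is non-empty before downloading; a sketch that assumes the plugin's s3FindFiles step and reuses the placeholder bucket and region from the original report:

    // Only run s3Download when the prefix actually contains files, so
    // force never fires against an empty or mistyped target.
    withAWS(region: 'eu-west-2', credentials: 'MyCreds') {
        def found = s3FindFiles(bucket: 'myBucket', path: 'product/', glob: '**')
        if (found) {
            s3Download(file: 'product/', bucket: 'myBucket', path: 'product/', force: true)
        } else {
            echo 'Nothing to download; skipping s3Download'
        }
    }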

It seems to me that s3Download should never behave like "rmdir" or "rm -rf", as that is pretty dangerous and non-explicit (especially when the path is / for whatever reason).

If there is a way to put a [Deprecated] tag on s3Download and introduce a new API that handles the force option more gracefully (instead of implementing it as a blanket overwrite), that would be a big, big improvement.

RaiaN · Dec 14 '23