pipeline-aws-plugin
s3Upload() doesn't work if `includePathPattern` matches multiple files
Version 1.26
Assuming we have the following content in the build directory:
Houseparty-arm64-v8a.apk
Houseparty-armeabi-v7a.apk
Houseparty-x86.apk
mapping.txt
And we want to upload it to S3:
This works:
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*-arm64-v8a.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*-armeabi-v7a.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*-x86.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*.txt', workingDir: 'build')
This doesn't work and only uploads mapping.txt:
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*.txt', workingDir: 'build')
This doesn't work either and doesn't upload anything:
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*', workingDir: 'build')
Can you try again with 1.27?
Version 1.27
I have the build directory:
Houseparty-arm64-v8a.apk
Houseparty-armeabi-v7a.apk
Houseparty-x86.apk
mapping.txt
I want to upload it to S3:
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*.apk', workingDir: 'build')
s3Upload(bucket: 'hp-client-builds', path: "jobs/$JOB_NAME/$BUILD_NUMBER/", includePathPattern: '*.txt', workingDir: 'build')
It only uploads mapping.txt. Still unresolved in 1.27.
This issue is still unresolved in 1.31
Can you please describe your setup? I cannot reproduce the problem.
My pipeline looks similar to this:
dir('build') {
    withEnv(['GIT_SSH=run_ssh.sh']) {
        sh """
            ./make-package
            # This produces ${BUILD_PARENT}.tar.gz
            # This produces ${BUILD_PARENT}.tar.gz.sha1
        """
    }
}
withAWS(credentials: 'dash-build-s3upload', region: 'us-west-1') {
    s3Upload bucket: 's3bucketname', includePathPattern: "*tar*", workingDir: 'build'
}
I can confirm that the tar files are in the build directory.
And the following is what I'm actually using:
s3Upload bucket: 's3bucketname', file: "${BUILD_PARENT}.tar.gz", workingDir: 'build'
s3Upload bucket: 's3bucketname', file: "${BUILD_PARENT}.tar.gz.sha1", workingDir: 'build'
Seems it's a known bug, but it is still waiting to be picked up: https://issues.jenkins-ci.org/browse/JENKINS-47046
One workaround would be to use the pipeline findFiles step: https://jenkins.io/doc/pipeline/steps/pipeline-utility-steps/#findfiles-find-files-in-the-workspace
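For example, a minimal sketch of that workaround (untested; the bucket, credentials, and glob are adapted from the pipeline above):
script {
    // Expand the glob ourselves, then upload each match individually with file:
    def files = findFiles(glob: 'build/*.tar.gz*')
    withAWS(credentials: 'dash-build-s3upload', region: 'us-west-1') {
        files.each { f ->
            s3Upload(bucket: 's3bucketname', file: f.path)
        }
    }
}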
The problem is that I cannot reproduce it on my test setup to debug the root cause.
Is there an online playground for Jenkins pipelines, or some other way to share the whole build job? Because the setup that is failing for me is literally the official example:
s3Upload(bucket:"my-bucket", path:'path/to/targetFolder/', includePathPattern:'**/*', workingDir:'dist', excludePathPattern:'**/*.svg')
I am also seeing this error :/
Same here.
This works:
pipeline {
    agent { node { label 'jenkins-host' } }
    stages {
        stage('test') {
            steps {
                script {
                    sh "rm -rf txt_dir || true"
                    sh "echo test1 >> test1.txt"
                    // sh "echo test2 >> test2.txt"
                    sh "mkdir -p txt_dir"
                    sh "mv *txt txt_dir"
                    archiveArtifacts allowEmptyArchive: true,
                        artifacts: "**/*txt",
                        caseSensitive: false,
                        defaultExcludes: false,
                        onlyIfSuccessful: false
                    withAWS(endpointUrl:'http://100.64.0.165:9000', // local minio.io
                            credentials:'128e57fa-140a-4463-ad37-b3821371f735') {
                        s3Upload bucket:'jenkins', path:"build-${env.BUILD_NUMBER}/",
                            includePathPattern:'**/*txt', workingDir: "${env.WORKSPACE}"
                    }
                }
            }
        }
    }
}
Take out the comment on sh "echo test2 >> test2.txt" and it doesn't work.
It also doesn't say it's failed. Just "Upload complete".
Is there something I can do on my end to allow debugging/log files/etc.?
Having the same problem. Trying to upload the contents of an entire folder to the root of the S3 bucket, where the files list is something like:
ls -l assets/marketing/
-rw-rw-r--. 1 jenkins jenkins 85598 Jan 11 16:52 ai_logo.png
-rw-rw-r--. 1 jenkins jenkins 1559 Jan 11 16:52 favicon-16x16.png
-rw-rw-r--. 1 jenkins jenkins 2366 Jan 11 16:52 favicon-32x32.png
-rw-rw-r--. 1 jenkins jenkins 1150 Jan 11 16:52 favicon.ico
-rw-rw-r--. 1 jenkins jenkins 180092 Jan 11 16:52 header.jpg
-rw-rw-r--. 1 jenkins jenkins 3635 Jan 15 13:19 index.html
-rw-rw-r--. 1 jenkins jenkins 15173 Jan 11 16:52 logo.png
-rw-rw-r--. 1 jenkins jenkins 268 Jan 15 10:48 README.md
-rw-rw-r--. 1 jenkins jenkins 487 Jan 11 17:35 ribbon.css
-rw-rw-r--. 1 jenkins jenkins 1825 Jan 11 16:52 style.css
The following will fail to upload anything:
stage('deploy') {
    when {
        branch 'master'
    }
    steps {
        withAWS(endpointUrl:'https://s3.amazonaws.com', credentials:'aws_cred_id') {
            s3Upload(bucket:'staticwebsite-bucket', path:'', includePathPattern: '*', workingDir: 'assets/website/', acl:'PublicRead')
        }
    }
}
And this version will upload only one of each file type:
stage('deploy') {
    when {
        branch 'master'
    }
    steps {
        withAWS(endpointUrl:'https://s3.amazonaws.com', credentials:'aws_cred_id') {
            s3Upload(bucket:'staticwebsite-bucket', path:'', includePathPattern: '*.css', workingDir: 'assets/website/', acl:'PublicRead')
            s3Upload(bucket:'staticwebsite-bucket', path:'', includePathPattern: '*.png', workingDir: 'assets/website/', acl:'PublicRead')
            s3Upload(bucket:'staticwebsite-bucket', path:'', includePathPattern: '*.jpg', workingDir: 'assets/website/', acl:'PublicRead')
            s3Upload(bucket:'staticwebsite-bucket', path:'', includePathPattern: '*.ico', workingDir: 'assets/website/', acl:'PublicRead')
        }
    }
}
The only useful workaround while still using withAWS and s3Upload is currently to use a findFiles glob and loop through the resulting list. It works fine, and will make it easy to convert a Jenkinsfile over once s3Upload gets fixed, but here is an example for anyone else:
stage('deploy') {
    when {
        branch 'master'
    }
    steps {
        script {
            FILES = findFiles(glob: 'assets/website/**')
            withAWS(endpointUrl:'https://s3.amazonaws.com', credentials:'aws_cred_id') {
                FILES.each { item ->
                    s3Upload(bucket: 'staticwebsite-bucket', acl: 'PublicRead', path: '', file: "${item.path}")
                }
            }
        }
    }
}
It does not keep the relative path; it uploads the files from every subfolder into the root of the bucket. Is there an easy way to get the relative path? I really hope this ticket will be fixed soon.
Hey @weidonglian, I haven't tried it, but findFiles includes ${item.directory} as a value. I would check that out (you may have to remove $PWD) and then put that value in for path:. I know it is an awful workaround, but it is the only thing I've been able to make work so far.
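For illustration, a minimal sketch (untested; it reuses the bucket and folder from the example above) that derives the target prefix from each file's path so the sub-folder structure is kept in the bucket:
script {
    def files = findFiles(glob: 'assets/website/**')
    withAWS(endpointUrl: 'https://s3.amazonaws.com', credentials: 'aws_cred_id') {
        files.each { item ->
            // item.path is relative to the workspace; strip the working directory prefix
            def relative = item.path.replaceFirst('^assets/website/', '')
            // Keep everything up to the file name as the S3 key prefix ('' for top-level files)
            def prefix = relative.contains('/') ? relative.substring(0, relative.lastIndexOf('/') + 1) : ''
            s3Upload(bucket: 'staticwebsite-bucket', acl: 'PublicRead', path: prefix, file: item.path)
        }
    }
}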
I believe I may be able to clarify the issue a bit here. The problem appears to arise on Windows agents, but not on *nix agents.
Example pipeline
pipeline {
    agent { label 'master' }
    stages {
        stage('Generic files') {
            steps {
                dir('test') {
                    writeFile file: 'test.csv', text: 'fake csv file for testing'
                    writeFile file: 'test.log', text: 'fake log file for testing'
                    dir('results') {
                        writeFile file: 'test.csv', text: 'fake csv file within results directory'
                        writeFile file: 'test.log', text: 'fake log file within results directory'
                    }
                }
            }
        }
    }
    post {
        always {
            withAWS(credentials: 'MY_CREDENTIALS', region: 'MY_REGION') {
                s3Upload(bucket: "test-bucket", includePathPattern: "test/results/*", path: "test-dir/")
                s3Upload(bucket: "test-bucket", includePathPattern: "test/results/test*", path: "test-dir/")
                s3Upload(bucket: "test-bucket", includePathPattern: "test/test*", path: "test-dir/")
            }
        }
    }
}
When running on a *nix agent, the commands all properly upload files.
When running on a Windows agent (e.g. change the master agent to a Windows-specific agent), none of the s3Upload commands will upload a file.
Can you try what happens if you use \ as the path separator?
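For example (a hypothetical adaptation of the s3Upload lines above, not a confirmed fix):
s3Upload(bucket: "test-bucket", includePathPattern: "test\\results\\*", path: "test-dir/")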
This bug does happen on Linux.
Using \\ as the path separator in the pipeline does not make the problem go away on a Windows agent. I can reliably upload using the above pipeline on Linux, but it will fail every time from a Windows agent.
A potential workaround for this issue (if both Windows-like and Unix-like nodes are available) is to stash files on the Windows node and unstash them on a *nix node before the s3Upload command is executed.
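A minimal sketch of that stash/unstash approach (node labels are placeholders; the credentials and bucket are taken from the example pipeline above):
node('windows') {
    // ... build steps on the Windows node that produce the files under test/results/ ...
    stash name: 'results', includes: 'test/results/**'
}
node('linux') {
    unstash 'results'
    withAWS(credentials: 'MY_CREDENTIALS', region: 'MY_REGION') {
        s3Upload(bucket: 'test-bucket', includePathPattern: 'test/results/*', path: 'test-dir/')
    }
}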
@ryan-summers it totally happens on *nix agents as well. In fact, I didn't even know you could use a windows agent.
Perhaps the windows vs. *nix agents upload is a different issue then.
Any progress on this? Running into same issue on *nix agents as well.
Seems to be an issue only on slaves. If you run the exact same pipeline on the master, it appears to work. See: https://issues.jenkins-ci.org/browse/JENKINS-44000
I was having this exact same problem; building the plugin from source resolved my issue (Docker slaves).
This works fine on a slave too.
Example Pipeline stage
stage('Deployment') {
    steps {
        script {
            def files = findFiles(glob: 'build/*.*')
            withAWS(region:'us-east-1', credentials:'AutoDeployer') {
                files.each { s3Upload(file:"${it}", bucket:'mymnr.dev', path:"", pathStyleAccessEnabled:true, payloadSigningEnabled:true, acl:'PublicRead') }
            }
            files = findFiles(glob: 'build/static/css/*.*')
            withAWS(region:'us-east-1', credentials:'AutoDeployer') {
                files.each { s3Upload(file:"${it}", bucket:'mymnr.dev', path:"static/css/", pathStyleAccessEnabled:true, payloadSigningEnabled:true, acl:'PublicRead') }
            }
            files = findFiles(glob: 'build/static/js/**')
            withAWS(region:'us-east-1', credentials:'AutoDeployer') {
                files.each { s3Upload(file:"${it}", bucket:'mymnr.dev', path:"static/js/", pathStyleAccessEnabled:true, payloadSigningEnabled:true, acl:'PublicRead') }
            }
        }
    }
}
Hi, is there any progress on this, or has it been resolved? I am on v1.36, using it to separate regular vs .gz files. Perhaps I am using it the wrong way also. Help appreciated.
stage('Deploy') {
    when {
        branch "master"
    }
    steps {
        s3Upload(
            bucket: "${S3_BUCKET}",
            path: "${S3_PATH}",
            workingDir: "dist",
            includePathPattern: "**/*",
            excludePathPattern: "**/*.gz",
            acl: "PublicRead"
        )
        s3Upload(
            bucket: "${S3_BUCKET}",
            path: "${S3_PATH}",
            workingDir: "dist",
            includePathPattern: "**/*.gz",
            contentEncoding: "gzip",
            acl: "PublicRead"
        )
    }
}