[feat req] Ability to upload multiple artifacts
What would you like to be added?
When building a Go application for multiple platforms, I would like to upload the built binaries as separate artifacts, like this:
- app_darwin_amd64
- app_freebsd_amd64
- app_linux_amd64
- app_windows_amd64.exe
Why is this needed?
Right now I have to duplicate the following action call for each binary, and that's not convenient at all.
```yaml
- name: Upload built artifacts
  uses: actions/upload-artifact@v3
  with:
    name: app_darwin_amd64
    path: ./app_darwin_amd64
```
Keeping all binaries in a single zip file is also inconvenient, because it forces downloading all of them at once when I need only one.
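A common workaround here is a build matrix: each leg cross-compiles one target and uploads its own artifact, so the upload step is written only once. A minimal sketch, assuming a Go module at the repository root; the app name and platform list are placeholders:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:
          - { goos: darwin,  goarch: amd64, ext: "" }
          - { goos: freebsd, goarch: amd64, ext: "" }
          - { goos: linux,   goarch: amd64, ext: "" }
          - { goos: windows, goarch: amd64, ext: ".exe" }
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
      - name: Cross-compile one binary per matrix leg
        env:
          GOOS: ${{ matrix.goos }}
          GOARCH: ${{ matrix.goarch }}
        run: go build -o app_${{ matrix.goos }}_${{ matrix.goarch }}${{ matrix.ext }} .
      - name: Upload that binary as its own artifact
        uses: actions/upload-artifact@v3
        with:
          name: app_${{ matrix.goos }}_${{ matrix.goarch }}
          path: ./app_${{ matrix.goos }}_${{ matrix.goarch }}${{ matrix.ext }}
```

Each leg then produces its own separately downloadable artifact, at the cost of one job per platform.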
Doesn't this work for you?
```yaml
- name: Upload built artifacts
  uses: actions/upload-artifact@v3
  with:
    name: app_darwin_amd64
    path: |
      ./app_darwin_amd64
      ./app_freebsd_amd64
```
@retorquere that will create a single archive containing both binaries, right? But I'd like a separate archive for each binary.
We are having a similar problem, just with even more artifacts.
It seems this PR has the change needed to do it, but as you can see, progress has stalled for a while. You could fork the changes and fix the merge issue.
https://github.com/actions/upload-artifact/pull/205
#354
Hey guys, any updates on this?
I support this; we need such a feature.
Simply do:

```yaml
- name: Upload Test Results
  uses: actions/upload-artifact@5d5d22a31266ced268874388b861e4b58bb5c2f3 # v4.3.1
  with:
    name: a-test-reports
    path: |
      ./playwright-report-a/index.html

- name: Upload Test Results
  uses: actions/upload-artifact@5d5d22a31266ced268874388b861e4b58bb5c2f3 # v4.3.1
  with:
    name: b-test-reports
    path: |
      ./playwright-report-b/index.html
```
I have half a dozen artifacts that I'm dumping into a new directory, and the list may expand as I build binaries for new platforms. It would be nice if I could upload all of them as individual artifacts without having to repeat the action six or more times and manually maintain the list to match the number of files.
Doable with some bash scripting and regex, I think.
You can invoke actions from bash?
The other way around: call a custom bash script to collect whatever files/artifacts you need, then output the names and pass them to the following jobs.
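For example, one job can stash all the binaries once and emit the file names as JSON, and a matrix job can then fan out and re-upload each file as its own artifact. A minimal sketch, assuming the build drops binaries into ./dist; job and artifact names are placeholders:

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      files: ${{ steps.scan.outputs.files }}
    steps:
      # ... build steps that place one binary per platform in ./dist ...
      - name: List binaries as a JSON array
        id: scan
        run: |
          echo "files=$(ls dist | jq -R -s -c 'split("\n") | map(select(. != ""))')" >> "$GITHUB_OUTPUT"
      - name: Stash everything once for the fan-out job
        uses: actions/upload-artifact@v4
        with:
          name: dist
          path: dist/

  upload:
    needs: build
    runs-on: ubuntu-latest
    strategy:
      matrix:
        file: ${{ fromJSON(needs.build.outputs.files) }}
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist
      - name: Re-upload each binary as its own artifact
        uses: actions/upload-artifact@v4
        with:
          name: ${{ matrix.file }}
          path: dist/${{ matrix.file }}
```

The upload list then tracks the contents of ./dist automatically, at the cost of an extra pass through artifact storage.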
My goal is to publish the artifacts for humans to download individually, not to consume them in a later job. On release these files are published as release assets, but I'd like them to be temporarily available for all builds, not just releases.
Your question prompted me to consider why using the action is required at all. Since actions don't have special access to anything in the job runner, it should in principle be possible to manually recreate the steps the action uses to upload artifacts to Azure Blob Storage, perhaps using az.
What follows is a trace of how upload-artifact actually works:
- `action.yml` configures `main: 'dist/upload/index.js'`, which I'm pretty sure is compiled from `upload-artifact.ts`
- `upload-artifact.ts:run` calls `shared/upload-artifact.ts:uploadArtifact`, which imports the `@actions/artifact` npm library
- `@actions/artifact` exports `uploadArtifact`, which does the actual upload
- `uploadArtifact` gets an `ArtifactHttpClient`, then wraps the file in a streaming zip encoder and calls `uploadZipToBlobStorage`
- `uploadZipToBlobStorage` uses the Azure Blob Storage API to create a blob with an `authenticatedUploadURL` and stream it out
There are some setup and cleanup functions in there which are probably required to get it to work, but I don't see why this couldn't be replicated in shell. ~50 MB of JavaScript to upload a file? 👍
I got bored before I got any further, but I guess the next step would be to figure out exactly how it obtains a signed Azure blob upload URL.
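For what it's worth, the last hop in that trace is plain Azure Blob Storage, so that part at least looks replicable. A hedged sketch, assuming the signed `authenticatedUploadURL` (exactly the untraced step) were already sitting in `$UPLOAD_URL`; the step and file names are placeholders:

```yaml
- name: Upload a zip to blob storage by hand (hypothetical)
  run: |
    # $UPLOAD_URL is assumed to already hold the signed (SAS) upload URL;
    # obtaining it is precisely the step not yet traced above.
    zip -q artifact.zip app_linux_amd64
    # A single Put Blob call: SAS auth rides in the URL's query string,
    # and x-ms-blob-type marks the new blob as a block blob.
    curl --fail -X PUT \
      -H "x-ms-blob-type: BlockBlob" \
      -H "Content-Type: application/zip" \
      --data-binary @artifact.zip \
      "$UPLOAD_URL"
```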
Looks like it's possible; please see the discussion.