
add workflow to build all llvm packages

Open jeremyd2019 opened this issue 3 years ago • 8 comments

Updating all llvm packages in one go would cause the normal CI job to exceed the 6-hour limit on hosted-runner jobs. This splits the build job from main.yml into a reusable workflow in build.yml, changes main.yml to call it, and adds a new llvm.yml that calls build.yml multiple times to build the packages in separate jobs, each with its own 6-hour limit.
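The overall shape would be something like this (a sketch only; the job names and package lists here are illustrative, not the actual contents of llvm.yml):

```yaml
# llvm.yml (sketch): call the reusable build.yml once per package group,
# so each group gets its own 6-hour job limit.
name: llvm

on:
  workflow_dispatch:

jobs:
  llvm:
    uses: ./.github/workflows/build.yml
    with:
      packages: mingw-w64-llvm          # example package name
  dependents:
    needs: llvm                          # run after the llvm job finishes
    uses: ./.github/workflows/build.yml
    with:
      packages: mingw-w64-clang-tools-extra  # example package name
```

Each `uses:` call is a separate job, so the 6-hour timeout applies per call rather than to the whole update.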

Thoughts:

  1. Do we even want this in the repo?
  2. Right now llvm.yml runs only on workflow_dispatch. Should it run on push/pull_request with a path filter so it triggers when one of the llvm packages is updated? One of the GitHub docs says that a workflow skipped by a path filter shows its check as perpetually 'pending'; if that's true, it could be annoying for most pull requests (which don't touch llvm packages).
  3. Names...

This worked to build the current 15.0.7 packages but failed to build the 16.0.0rc3 packages from #16032 due to libclc failing to build.

partial diff of old main.yml to new build.yml from commit that split it
diff --git a/main.yml.orig b/build.yml
index 47d2dcbc8..ec697230d 100644
--- a/main.yml.orig
+++ b/build.yml
@@ -1,16 +1,14 @@
-name: main
-
-concurrency:
-  group: ${{ github.ref }}
-  cancel-in-progress: true
+name: build
 
 on:
-  push:
-  pull_request:
+  workflow_call:
+    inputs:
+      packages:
+        required: false
+        type: string
 
 jobs:
   build:
-    if: ${{ github.event_name != 'push' || github.ref != 'refs/heads/master'}}
     strategy:
       fail-fast: false
       matrix:
@@ -75,16 +73,31 @@ jobs:
           cp /etc/pacman.conf /etc/pacman.conf.bak
           grep -qFx '[staging]' /etc/pacman.conf || sed -i '/^# \[staging\]/,/^$/ s|^# ||g' /etc/pacman.conf
 
-      - name: Update using staging
-        run: |
-          msys2 -c 'pacman --noconfirm -Suuy'
-          msys2 -c 'pacman --noconfirm -Suu'
-
       - name: Move Checkout
         run: |
           If (Test-Path "C:\_") { rm -r -fo "C:\_" }
           Copy-Item -Path ".\temp" -Destination "C:\_" -Recurse
 
+      - uses: actions/download-artifact@v3
+        id: artifacts
+        continue-on-error: true
+        with:
+          name: ${{ matrix.msystem }}-packages
+          path: C:/_/artifacts
+
+      - name: Add artifacts repo
+        if: steps.artifacts.outcome == 'success'
+        shell: msys2 {0}
+        run: |
+          sed -i '1s|^|[artifacts]\nServer = file:///c/_/artifacts/\nSigLevel = Never\n|' /etc/pacman.conf
+          shopt -s nullglob
+          repo-add /c/_/artifacts/artifacts.db.tar.gz /c/_/artifacts/*.pkg.tar.*
+
+      - name: Update using staging
+        run: |
+          msys2 -c 'pacman --noconfirm -Suuy'
+          msys2 -c 'pacman --noconfirm -Suu'
+
       - name: CI-Build
         shell: msys2 {0}
         id: build
@@ -92,7 +105,7 @@ jobs:
           cd /C/_
           unset VCPKG_ROOT
           pacman -S --needed --noconfirm ${MINGW_PACKAGE_PREFIX}-ntldd
-          MINGW_ARCH=${{ matrix.msystem }} ./.ci/ci-build.sh
+          MINGW_ARCH=${{ matrix.msystem }} ./.ci/ci-build.sh ${{ inputs.packages }}
 
       - name: "Upload binaries"
         if: ${{ !cancelled() }}

...

jeremyd2019 avatar Mar 08 '23 18:03 jeremyd2019

I have a different idea, for not only LLVM but all packages: build each package separately, in its own job.

  • first job, where the workflow parses the modified PKGBUILDs, builds a dependency tree of packages, and outputs a matrix of jobs (like msys-autobuild) with dependencies (which should go in `needs:`); but we need something (maybe a label) to tell CI which package should be built first when there is a dependency cycle.
  • then build the required packages separately.
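The first job could be sketched roughly like this (an assumption about how it might work, not an existing script; the package names and dependency edges are hypothetical examples):

```python
# Sketch: parse a dependency graph of modified packages, topologically
# sort it, and emit build stages as JSON for a job matrix. Each stage
# only depends on the previous stage, so it could go in `needs:`.
import json
from graphlib import TopologicalSorter

# Hypothetical graph: package -> set of packages it depends on.
deps = {
    "llvm": set(),
    "clang": {"llvm"},
    "spirv-llvm-translator": {"llvm"},
    "libclc": {"clang", "spirv-llvm-translator"},
}

ts = TopologicalSorter(deps)
ts.prepare()
stages = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # packages whose deps are all built
    stages.append(ready)
    ts.done(*ready)

# Packages in the same stage can build in parallel matrix entries.
print(json.dumps({"stages": stages}))
```

A dependency cycle would make `TopologicalSorter.prepare()` raise `CycleError`, which is exactly where the "label to pick which package builds first" idea would have to break the tie.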

MehdiChinoune avatar Mar 09 '23 06:03 MehdiChinoune

  • then build the required packages separately.

Do you mean that each package should be built in its separate job? Wouldn't that result in a lot of overhead (e.g., running setup-msys2 for each package)? Especially if packages are small.

mmuetzel avatar Mar 10 '23 14:03 mmuetzel

Do you mean that each package should be built in its separate job?

Yes.

Wouldn't that result in a lot of overhead (e.g., running setup-msys2 for each package)? Especially if packages are small.

That's a good point!

MehdiChinoune avatar Mar 10 '23 14:03 MehdiChinoune

  • first job, where the workflow parses modified PKGBUILDs, builds a tree of packages, and outputs a matrix of jobs (like msys-autobuild) with dependencies (which should be in needs:)

I have thought about this, but I don't think it's possible to specify a particular matrix job in `needs:`, only the matrix as a whole (unless something has changed since the last time I tried). I also don't know whether the matrix context is available in the `needs` field (not all contexts are available everywhere, which can be a bit annoying).
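To illustrate the limitation (a sketch with hypothetical job names, not part of this PR): `needs:` takes job ids, so a downstream job waits for every entry of an upstream matrix, and there is no syntax to depend on a single matrix entry.

```yaml
# Sketch: job `b` waits for ALL matrix entries of `a`;
# there is no `needs: a (MINGW64)`-style syntax.
jobs:
  a:
    strategy:
      matrix:
        msystem: [MINGW64, UCRT64]
    runs-on: windows-latest
    steps:
      - run: echo "build for ${{ matrix.msystem }}"
  b:
    needs: a  # the whole `a` matrix, not one entry
    runs-on: windows-latest
    steps:
      - run: echo "every entry of a is done"
```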

jeremyd2019 avatar Mar 13 '23 18:03 jeremyd2019

I rebased this today, and did a run to make sure it still works: https://github.com/jeremyd2019/MINGW-packages/actions/runs/5203651052

jeremyd2019 avatar Jun 08 '23 04:06 jeremyd2019

I did a run updating to 16.0.5: https://github.com/jeremyd2019/MINGW-packages/actions/runs/5216757927

I'm kind of second-guessing removing libclc from the workflow. In normal point-release updates it's fine to build; it was only when updating to a new major version that it couldn't build until some other packages (spirv-llvm-translator and some of its dependencies) were updated. Maybe I should add a boolean input to the workflow controlling whether to include it?
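Such an input could look roughly like this (a sketch; the input name, default, and package name are assumptions, not the actual workflow contents):

```yaml
# Sketch: a workflow_dispatch boolean input to optionally build libclc.
on:
  workflow_dispatch:
    inputs:
      include_libclc:
        description: 'Also build libclc'
        type: boolean
        default: true

jobs:
  libclc:
    if: ${{ inputs.include_libclc }}
    uses: ./.github/workflows/build.yml
    with:
      packages: mingw-w64-libclc  # example package name
```

Defaulting it to true would keep point-release runs unchanged, while major-version runs could flip it off until spirv-llvm-translator and friends catch up.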

jeremyd2019 avatar Jun 09 '23 16:06 jeremyd2019

I was thinking about something like this again the other day. It seems like splitting the "meat" of the build into a reusable workflow would be handy. Would there be any interest in my doing that, and then considering composite workflows later? (Or possibly even in different repo(s), which would be interesting too.)

jeremyd2019 avatar Apr 11 '24 19:04 jeremyd2019

Turned out what I wanted to do this time was "fast" enough to not hit the 6 hour timeout.

jeremyd2019 avatar Apr 17 '24 18:04 jeremyd2019