
Improve build performance

freddydk opened this issue on Nov 06 '22 · 10 comments

Use GitHub Packages for docker images for CI/CD

  • Determine artifactUrl as part of initialize
  • Do not download artifacts in RunPipeline when using an image
  • Store the image in public docker ghcr (free and accessible from a local PC, ACIs and Azure VMs as well)
  • Build and push in CI/CD?
  • Have scheduled jobs to maintain images in use? (see the workflow sketch after this list)
  • Support for ghcr images (even with self-hosted agents)
  • Do not use docker images for current, next minor and next major (to avoid compromising secrets)
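
A rough sketch of what the "build and push in CI/CD" / scheduled-maintenance bullets could look like. Everything here is an assumption rather than the actual AL-Go implementation: the schedule, the runner label, the image name bcbuild, and the assumption that New-BcImage returns the name of the image it built.

```yaml
name: Maintain build image (sketch)

on:
  schedule:
    - cron: '0 2 * * *'        # assumed nightly refresh of the image in use

permissions:
  packages: write              # needed to push to ghcr.io with GITHUB_TOKEN

jobs:
  build-and-push:
    runs-on: [ self-hosted, windows ]   # a hosted runner is unlikely to have enough disk
    steps:
      - name: Log in to ghcr.io
        run: docker login ghcr.io -u ${{ github.actor }} -p ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push Business Central image
        shell: pwsh
        run: |
          Install-Module BcContainerHelper -Force
          $artifactUrl = Get-BCArtifactUrl -type Sandbox -country w1 -select Latest
          # Assumption: New-BcImage returns the name of the image it built
          $localImage = New-BcImage -artifactUrl $artifactUrl
          # ghcr image names must be lowercase
          $remoteImage = ("ghcr.io/${{ github.repository_owner }}/bcbuild:latest").ToLowerInvariant()
          docker tag $localImage $remoteImage
          docker push $remoteImage
```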

Use GitHub Actions Cache for improved build performance with hosted agents

  • Cache Business Central artifacts in the actions cache (see the sketch below)
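
A minimal sketch of that bullet, assuming BcContainerHelper's default artifact cache folder (c:\bcartifacts.cache) and using a sanitized artifact URL as the cache key, as suggested further down in this thread:

```yaml
      - name: Determine artifact URL
        id: artifact
        shell: pwsh
        run: |
          Install-Module BcContainerHelper -Force
          $url = Get-BCArtifactUrl -type Sandbox -country w1 -select Latest
          # Sanitize the URL so it is safe to use as a cache key
          $key = $url -replace '[^a-zA-Z0-9.]', '-'
          Add-Content -Path $env:GITHUB_OUTPUT -Value "key=$key"

      - name: Cache Business Central artifacts
        uses: actions/cache@v4
        with:
          path: c:\bcartifacts.cache        # assumed BcContainerHelper cache folder
          key: bcartifacts-${{ steps.artifact.outputs.key }}
```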

Build jobs that are configured to skip upgrade and skip tests can complete very quickly with a filesOnly container based on a cached image (see the sketch after this list)

  • Very fast build jobs
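
A sketch of what such a build job step could do, assuming BcContainerHelper's New-BcContainer with its -filesOnly switch; the image name and the throwaway credential are placeholders, not anything AL-Go does today:

```yaml
      - name: Create filesOnly build container
        shell: pwsh
        run: |
          Install-Module BcContainerHelper -Force
          $artifactUrl = Get-BCArtifactUrl -type Sandbox -country w1 -select Latest
          # Placeholder credential - a filesOnly container has no service tier to log in to
          $credential = New-Object pscredential 'admin', (ConvertTo-SecureString 'P@ssw0rd' -AsPlainText -Force)
          $parameters = @{
            accept_eula   = $true
            containerName = 'bcbuild'
            artifactUrl   = $artifactUrl
            imageName     = 'ghcr.io/myorg/bcbuild'   # hypothetical prebuilt/cached image
            credential    = $credential
            auth          = 'UserPassword'
            filesOnly     = $true                     # skip service tier and database
          }
          New-BcContainer @parameters
```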

When using UseProjectDependencies, we could allow all build jobs of dependent apps to build without upgrading or running tests

  • Then, only when building the highest level, we upgrade all apps and run all tests while all apps are installed (illustrated in the sketch below).
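
To illustrate the shape of this, a conceptual sketch only (not the workflow AL-Go actually generates), where lower-level projects compile without upgrade or tests and the highest level runs the full upgrade and test pass:

```yaml
# Illustration of the idea only - not the workflow AL-Go generates.
jobs:
  build-base-app:                  # dependent (lower level) project
    runs-on: windows-latest
    steps:
      - name: Build without upgrade and without tests
        run: echo "compile Base app only"

  build-top-app:                   # highest level project
    needs: build-base-app          # runs after all dependent apps are built
    runs-on: windows-latest
    steps:
      - name: Build, upgrade all apps and run all tests
        run: echo "install all apps, run upgrade and the full test suite"
```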

Some additional info:

  • https://github.blog/2020-09-01-introducing-github-container-registry/
  • https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry
  • https://docs.docker.com/build/building/cache/backends/gha/
  • https://docs.github.com/en/actions/using-workflows/caching-dependencies-to-speed-up-workflows
  • https://github.com/actions/cache

freddydk avatar Nov 06 '22 14:11 freddydk

I've considered this feature as well.

And I came to the conclusion that it would be easiest to use the GitHub cache action to cache the Docker image as a tar.

The only limitation is that you can only cache one image per repo, because the 10GB size limit per repo doesn't allow more than one image. But I think this is manageable: you only cache the image created in the main branch. The cache is then accessible in all branches that are based on main, which should speed up the whole CI/CD workflow significantly without any additional services to configure. And as a restore key you can use the full artifact URL.

Or am I missing something? I didn't have time to test my solution.
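
For reference, a minimal sketch of the mechanism described above; the image name and the ARTIFACT_KEY variable are placeholders, and as noted further down in the thread, the saved tar of a Business Central image turned out to exceed the 10GB cache limit:

```yaml
      - name: Restore cached image tar
        uses: actions/cache@v4
        with:
          path: bcimage.tar
          key: bcimage-${{ env.ARTIFACT_KEY }}    # e.g. a sanitized full artifact URL

      - name: Load or rebuild image
        shell: pwsh
        run: |
          if (Test-Path 'bcimage.tar') {
            docker load -i bcimage.tar            # cache hit: load the saved image
          }
          else {
            # cache miss: build the image here (e.g. with New-BcImage), then
            # save it so the cache action's post-job step uploads the tar
            docker save myimage:latest -o bcimage.tar
          }
```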

jonaswre avatar Nov 08 '22 07:11 jonaswre

@jonaswre - I tested that - that is way too slow. GitHub Packages supports images (as a Docker image registry), and it is way faster to download from there to the GitHub-hosted runners. This registry can then also be used for local Docker containers, ACIs or Azure VMs, so this is the route I will investigate further. Possibly create some workflows that maintain the images you use somehow. More investigation is needed, but this seems promising - I could easily cut 5 minutes from a hosted run.

freddydk avatar Nov 08 '22 14:11 freddydk

BTW - the TAR is 14 GB

freddydk avatar Nov 08 '22 14:11 freddydk

In my previous company, we spent quite some time fine-tuning the performance/cost ratio of our build agents.

We used Azure DevOps there, which has the capability of using Azure VM scale sets. Azure DevOps controls the scale set and creates or destroys agent VMs. The VM was based on an (ephemeral) image I generated. Within this image, I would pre-download/cache all the needed stuff: agent/BcContainerHelper/7zip/base image/the most used BC artifacts. I would refresh the base image on a regular basis. Image size was a challenge there: the smaller the image you use, the faster the VM is created. So you need to be smart about what to cache and what not.

We configured the VM scale set with min=0, max=15, and a cleanup time of 30 minutes. This allowed us to use good machines and still save money. Plus, it highly improved waiting times (previously we used multiple machines that ran 24/7). If I remember correctly, it took ~5 minutes to source an agent.

We also tried using a caching mechanism (similar to GitHub Actions Cache), but it was way too slow as well. Ironically, it's faster to spin up an entire VM :)

I believe a similar approach could be used with GitHub as well. They do provide a few ways for autoscaling: VMs on AWS, and Kubernetes. Just an idea.

When it comes to GitHub Packages, they also come with limits. Not sure if those apply to MS. Nevertheless, those Docker images are not small. I think it would be worth investigating that first.

gntpet avatar Nov 08 '22 16:11 gntpet

Thanks, I do believe we need to do multiple things for improving perf and I will definitely take all suggestions into account.

freddydk avatar Nov 08 '22 16:11 freddydk

btw - on autoscale - I want to wait for this: https://github.com/github/roadmap/issues/555 to see if that is something that becomes easily usable by AL-Go

freddydk avatar Nov 08 '22 16:11 freddydk

> @jonaswre - I tested that - that is way too slow. GitHub Packages supports images (as a Docker image registry), and it is way faster to download from there to the GitHub-hosted runners. This registry can then also be used for local Docker containers, ACIs or Azure VMs, so this is the route I will investigate further. Possibly create some workflows that maintain the images you use somehow. More investigation is needed, but this seems promising - I could easily cut 5 minutes from a hosted run.

@freddydk I should have expected that you had already tried this. Great news, I won't waste any time on this. Interesting that pulling from the registry is faster than from the cache.

> BTW - the TAR is 14 GB

Okay... then I don't know what I was looking at. I didn't tar it; I thought it was more like 8.

jonaswre avatar Nov 08 '22 22:11 jonaswre

Just curious if you are making/planning any progress in this area? Asking since we are starting to feel pain due to slow build pipelines. Wondering if I should start fine-tuning it, or if some great improvements are just around the corner?

gntpet avatar Sep 29 '23 14:09 gntpet

Which of these steps have been realised at this point? We want to take some steps to speed up our Builds - especially since we mostly have PTE applications with a given BC version, so there isn't really a good reason to rebuild the image.

cegekaJG avatar Apr 15 '24 11:04 cegekaJG

I'm experiencing a lot of random timeouts. Some days, downloading artifacts goes nicely; on other days, I face many timeouts and failed builds... I've stopped digging into whether it could be due to the BcContainerHelper version or not, since it's really, really random. Yesterday I used the dev version and it worked; this morning I faced many timeouts, switched back to the release, and it's working.

The DetermineArtifactUrl step always takes more than 5 minutes. The build time is between 30 minutes and 3 hours, depending on how fast I'm able to download the artifacts. (I have a 300 Mbps fiber connection.)
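
One possible mitigation, related to the artifact-caching bullet at the top of this issue, is to warm BcContainerHelper's local artifact cache on a self-hosted machine so builds reuse already-downloaded artifacts instead of hitting the CDN on every run. A sketch, assuming a self-hosted Windows runner, the default cache folder and a nightly schedule:

```yaml
name: Warm Business Central artifact cache (sketch)

on:
  schedule:
    - cron: '0 3 * * *'          # assumed nightly refresh

jobs:
  warm-cache:
    runs-on: [ self-hosted, windows ]
    steps:
      - name: Pre-download artifacts
        shell: pwsh
        run: |
          Install-Module BcContainerHelper -Force
          $artifactUrl = Get-BCArtifactUrl -type Sandbox -country w1 -select Latest
          # Downloads application and platform artifacts into the local cache
          # (c:\bcartifacts.cache by default), so later builds skip the download
          Download-Artifacts -artifactUrl $artifactUrl -includePlatform
```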

ronnykwon avatar May 16 '24 18:05 ronnykwon