[Enhancement]: BC image for GitHub Enterprise
Feature description
GitHub Enterprise supports custom/partner images for the large runners.
Would it be possible to create one with all the BC tooling needed for AL-Go:
- latest BcContainerHelper
- latest sandbox artifacts per country
- latest onprem artifacts per country
- latest generic nav-docker image
- ...
Having those tools preinstalled/predownloaded would greatly help to reduce the time needed to spin up a BC container (a rough pre-staging sketch follows below). Yes, we still spin up containers, because we want to test the code.
See the runner-images GitHub repo for more info.
Best Regards, Gintautas
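For illustration, a rough sketch of what such an image build could pre-stage, assuming the standard BcContainerHelper cmdlets and an example set of countries (not a definitive build script):

```powershell
# Hypothetical pre-staging step for a custom runner image build.
Install-Module BcContainerHelper -Force
Import-Module BcContainerHelper

# Pre-download the latest sandbox and onprem artifacts for a few example countries.
foreach ($country in @('w1', 'us', 'dk')) {
    foreach ($type in @('Sandbox', 'OnPrem')) {
        $url = Get-BCArtifactUrl -type $type -country $country -select Latest
        Download-Artifacts -artifactUrl $url -includePlatform
    }
}

# Pre-pull the recommended generic image for the host OS.
docker pull (Get-BestGenericImageName)
```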
I like the idea. The problem is just that sandbox artifacts change very, very frequently, so having the latest pre-downloaded would mean that we would have to re-do the images many times a day, which is not going to happen. The generic image is updated once a month, and it would certainly be possible to re-build an Azure VM image with it every time. Currently, there are a few things that take time when using GitHub-hosted runners:
- Downloading the generic image takes ~100 seconds
  - I was actually hoping that the teleport feature (https://github.com/Azure/acr/blob/main/docs/teleport/README.md) would solve this problem for us. I was in communication with the team in December 2022, but it seems like they went a different route and they still do not support Windows images :-(
- Determining the artifact URL
  - If people are using the latest artifacts with a specific country, determining the artifact URL takes a long time (~60 seconds) because of the way we query the artifacts. If people use an artifact setting like e.g. '//24.0//first', it takes only a few seconds (see the sketch after this list).
- Downloading the used artifact
  - This is a killer - it takes 4-5 minutes. If people are using the latest artifacts, we cannot get this right as they change very frequently. If people are using a specific artifact, we really cannot pre-download all these artifacts to pre-build images anyway. A better mechanism could be to utilize the GitHub cache (like we do with the compiler folder) - then downloading artifacts would go down to a few seconds (a caching sketch follows below).
- Creating a container or container image
  - This is the second killer - around 6 minutes. Having pre-built Docker images for all artifacts isn't possible for the same reason as above, and caching Docker images also isn't possible unless you have a self-hosted runner.
- Downloading BcContainerHelper
  - Takes ~30 seconds. We could definitely cache the latest version of BcContainerHelper, but we would still have to import it; we couldn't pre-install it. The download itself is only ~5 seconds.
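To illustrate the two artifact URL lookups above, a rough sketch assuming BcContainerHelper's Get-BCArtifactUrl cmdlet (the AL-Go setting '//24.0//first' roughly maps to the pinned form):

```powershell
Import-Module BcContainerHelper

# Slow path (~60 seconds): resolving the newest artifact for a specific
# country has to query the artifact storage broadly.
$latestUrl = Get-BCArtifactUrl -type Sandbox -country 'dk' -select Latest

# Fast path (a few seconds): a pinned setting like '//24.0//first'
# narrows the query to the first 24.0 build.
$pinnedUrl = Get-BCArtifactUrl -type Sandbox -version '24.0' -select First
```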
So creating an image like this in a generic way would probably only save us 100 seconds (which still is a lot). We will investigate more...
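On the caching idea: a minimal sketch, assuming the resolved artifact URL is turned into a cache key for a surrounding actions/cache step (the cache step itself is not shown; the key derivation here is illustrative):

```powershell
Import-Module BcContainerHelper

# Resolve the artifact URL once; hashing it gives a stable cache key that
# a surrounding actions/cache step could use to restore/save the folder.
$artifactUrl = Get-BCArtifactUrl -type Sandbox -version '24.0' -select First
$bytes = [System.Text.Encoding]::UTF8.GetBytes($artifactUrl)
$stream = [System.IO.MemoryStream]::new($bytes)
$cacheKey = (Get-FileHash -InputStream $stream -Algorithm SHA256).Hash
Write-Host "Artifact cache key: $cacheKey"

# Download-Artifacts skips the download when the artifact is already in
# its cache folder, so a restored cache brings this step down to seconds.
Download-Artifacts -artifactUrl $artifactUrl -includePlatform
```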
> Downloading the generic image takes ~100 seconds

For us it takes even longer.

> Determining the artifact URL

Will give it a go. I assume it gives the same result.

> Downloading the used artifact

I second that. It's quite frequent that the CDN download fails. We often see a big time difference between two projects (no containers, no tests, just simple compilation with different versions).
One downloads from the CDN quickly, the second chokes.
I agree that it is hard to pre-cache everything. But perhaps you can see very clear patterns in your storage statistics. E.g., we are compiling using sandbox artifacts for 24.0; its latest version does not change that frequently anymore.
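To make that pattern measurable, a small hedged sketch (the state file location is hypothetical) that checks whether the pinned 24.0 artifacts have changed since a cache or image was built:

```powershell
Import-Module BcContainerHelper

# Hypothetical staleness check: compare today's resolved URL for 24.0
# sandbox artifacts against the one a pre-built cache was created from.
$stateFile = 'C:\bcartifacts.cache\last-24.0-url.txt'  # hypothetical path
$currentUrl = Get-BCArtifactUrl -type Sandbox -version '24.0' -select Latest

if ((Test-Path $stateFile) -and ((Get-Content $stateFile) -eq $currentUrl)) {
    Write-Host 'Pre-cached 24.0 artifacts are still current.'
} else {
    Write-Host 'A new 24.0 build was published; the pre-built cache is stale.'
    Set-Content -Path $stateFile -Value $currentUrl
}
```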
We actually have a meeting tomorrow where we need to discuss the future of artifacts storage - hopefully we can solve this problem and take the performance problem into account as well.
On the artifacts storage, we will probably shift to using OCI artifacts and also refactor the artifacts into a different layer structure to better match how they are going to be used. The timeline is still unknown.
@freddydk, here's another improvement from GitHub that could help with performance. It's shipped as a private preview, which I guess you can get access to.
https://github.com/github/roadmap/issues/826