Build 17.6.0 timed out
CI build for 17.6.0 seems to have timed out and 17.6.0 is not available on dockerhub. Please run the task again.
duplicate of #3036 and meaningless suggestion.
The build time of GitLab itself is getting longer and longer, and will almost certainly reach one hour (the upper limit in the free plan). Unless we review the build process itself, we will just waste time running CI.
Apologies, search didn't pick that up, since the specific version is not mentioned in the issue.
Might I suggest a workaround, then, of building it in another environment and pushing it to Docker Hub?
I'm running a locally built version of 17.6.0, but if someone publishes it on dockerhub it will help people who can't build it themselves.
That said, I'm just a contributor. You should ask the maintainer, who has the final word (@sachilles has been the only active one in recent years).
Fair enough. In the meantime, if anybody needs this image for versions 17.6.0 through 17.6.2, I forked the repo, built them, and published them on Docker Hub here.
I'm working on a multistage Dockerfile. Using BuildKit (`docker buildx build` or `DOCKER_BUILDKIT=1 docker build`), which allows stages to execute in parallel, I could save around 5 minutes.
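To illustrate why BuildKit helps here (a minimal sketch, not this repo's actual Dockerfile; the stage names are made up): stages that do not depend on each other are scheduled concurrently, and only what the final stage copies ends up in the image.

```Dockerfile
# Hypothetical sketch of the multistage idea; stage names are illustrative.
FROM ubuntu:24.04 AS base
RUN apt-get update && apt-get install -y --no-install-recommends ca-certificates curl

# These two stages do not depend on each other,
# so BuildKit can run them in parallel.
FROM base AS build-gitaly
RUN echo "build gitaly here"

FROM base AS build-workhorse
RUN echo "build workhorse here"

# The final stage only copies artifacts from the build stages,
# keeping the runtime image free of build-only tooling.
FROM base AS runtime
COPY --from=build-gitaly /tmp /opt/gitaly-artifacts
COPY --from=build-workhorse /tmp /opt/workhorse-artifacts
```

With the legacy builder, the same file would be built strictly top to bottom, which is where the ~5 minutes of savings come from.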
@kkimurak Unfortunately, I don't have the option of giving you the role of maintainer.
@sameersbn Do you see a possibility to either change the plan for the CI or take up @kkimurak's offer? He has also maintained the project remarkably well so far.
@sachilles @kkimurak Given that @sameersbn does not seem to be very active on GitHub anymore (at least there have been no contributions in the last year), would moving the CI/CD + docker image to a new namespace controlled by either of you be a feasible option?
Just for curiosity's sake: What are the requirements to build the image? I got error 137 after 2 hours, and after increasing memory to 6 GB, error 134 after 3.5 hours into the build. I found no info in any "building it yourself" docs (or I was merely looking in the wrong places).
Any insights?
@tDo If necessary, we should do so. I would like to get permission from all maintainers (with respect) if possible, but if we can't contact them, there is nothing we can do.
Stalled projects are one of my greatest fears, and the confusion that would result from moving namespaces is a minor pain compared to that.
However, I am still just a contributor (the most active - at least in terms of commits - except for the maintainer) and do not have (nor will I have) any decision-making authority.
So what I'm trying to say is that I'm asking for more human resources with administrative privileges that can respond to emergencies.
@Thomas-Ganter I don't know what the actual requirements are, but I think it's less than 6GB.
Hints for the ~6 GB memory requirement:

Hint 1: the rake task `gitlab:assets:compile` is executed during the build
https://github.com/sameersbn/docker-gitlab/blob/76dad7812405afada9701b81fa1cf714fdaaf021/assets/build/install.sh#L215

Hint 2: it invokes the system command `yarn webpack` (with some options based on environment variables, to detect CI)
https://gitlab.com/gitlab-org/gitlab/-/blob/f42b800081f2c4c751e8fc59df70857704c1ddee/lib/tasks/gitlab/assets.rake#L120-133

Hint 3: the node command `webpack` is defined in `package.json`, and it sets `--max-old-space-size` to 5120
https://gitlab.com/gitlab-org/gitlab/-/blob/f42b800081f2c4c751e8fc59df70857704c1ddee/package.json#L45
Anyway, I usually do local builds on two different Ubuntu virtual machines with 8 GB / 16 GB RAM. I ran out of memory once on the 8 GB machine, but since then I have disabled most services (including the GUI) and do no other work during the build, and have not run out of memory again.
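Given hint 3 above, webpack alone is allowed a 5120 MB old-space heap, plus whatever the rest of the build needs. A small pre-flight check (a sketch; assumes Linux with `/proc/meminfo`, and the 6144 MB threshold is my guess, not a documented requirement) can warn before starting a doomed build:

```shell
#!/bin/sh
# Read total memory in MB from /proc/meminfo (Linux only).
total_mb=$(awk '/^MemTotal:/ {print int($2 / 1024)}' /proc/meminfo)

# webpack is started with --max-old-space-size=5120 (see package.json above),
# so warn when the machine has less headroom than that plus some slack.
required_mb=6144
if [ "$total_mb" -lt "$required_mb" ]; then
    echo "WARNING: only ${total_mb} MB RAM; asset compilation may be OOM-killed (exit 137)"
else
    echo "OK: ${total_mb} MB RAM available"
fi
```

Exit code 137 is SIGKILL (128 + 9), which is what the kernel OOM killer delivers, so the error 137 reported above fits an out-of-memory kill during asset compilation.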
Thanks @kkimurak! Running the build on a dedicated 8-core / 60 GB machine worked for the single-architecture build; the multi-arch build timed out after 2 hrs with 11 GB of memory consumed in the builder container. Bumped the pipeline timeout to 4 hrs and trying again ...
Running a remote build (buildx with kubernetes driver) with 4 CPU, 4 GB memory worked (in 40 minutes)
Hmmm … for me a build with a 4 hrs timeout just aborted. I have now bumped the timeout to 8 hrs; let's see whether this helps.
Also, I do not know how to properly parallelize multi-architecture builds. Maybe that is the main culprit in my build drama ... any hints welcome.
I've no experience with multi-platform builds, but I'm working on multistage (parallelized build steps), which might help.
Sounds interesting. Since the bulk of the time is spent inside install.sh, I was also wondering whether it could be split up into separate steps (which would yield cacheable intermediate layers) and make it easier to restart builds …
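Splitting install.sh into separate steps could look roughly like this (a hypothetical sketch; the phase scripts named below do not exist in the repo and only illustrate the layer-caching idea):

```Dockerfile
# Hypothetical split of the monolithic install script into separate RUN steps.
# Each step becomes its own layer, so a failure or change in a later step
# does not invalidate the cached earlier ones, and aborted builds resume
# from the last successful layer.
FROM ubuntu:24.04

COPY assets/build/ /app/build/

# Before: one giant layer, any failure restarts everything.
# RUN /app/build/install.sh

# After: one layer per phase (script names are illustrative).
RUN /app/build/install-dependencies.sh
RUN /app/build/install-ruby.sh
RUN /app/build/install-gitlab.sh
RUN /app/build/compile-assets.sh
```

The trade-off is that intermediate layers keep files that a single cleanup step used to remove, so the unsquashed image tends to be larger.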
Please see the multistage branch of https://github.com/th-2021/docker-gitlab/. `Dockerfile.multistage` / `install2.sh` are the new files (kept separate for now).

```shell
MAX_OLD_SPACE=8192 docker buildx build -o type=registry -f Dockerfile.multistage \
  --build-arg MAX_OLD_SPACE=${MAX_OLD_SPACE} -t gitlab:17.6.0.1 .
```

works fine.
GitLab is coming up, but more testing is needed.
My multi-arch build also just completed for the first time a few minutes ago, and the memory requirement was hilarious:
And those 2:36 hours were on a 12-core 32 GB VM on a 14th-gen Core i9 … and yes, here too, more testing is needed, but it looks good so far.
Are there additional patches needed for arm? I can try it on my setup.
@th-2021 See #2803 - you may need to edit around the golang installation: https://github.com/th-2021/docker-gitlab/blob/616e07f1dc73851c13ebd678c94984b234288fc8/Dockerfile.multistage#L103
These are the changes I had to make:

- in the `.gitlab-ci.yml` file: bump the Docker version, introduce buildx, remove the separate push
- in the `Dockerfile`: included some scripting to properly set the architecture for later use
- in the `install.sh` script: included the arch in the golang download URL, bumped the Node memory, added lots of debugging output (because it was super difficult to understand where, when, what, and why things failed).
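The arch-in-the-golang-URL part of such a change can be sketched like this (a sketch under assumptions: `uname -m` naming as on Debian/Ubuntu, an illustrative Go version, and variable names that are mine, not install.sh's):

```shell
#!/bin/sh
# Map the kernel architecture to the naming used by Go release tarballs.
case "$(uname -m)" in
    x86_64)  GOLANG_ARCH=amd64 ;;
    aarch64) GOLANG_ARCH=arm64 ;;
    armv7l)  GOLANG_ARCH=armv6l ;;
    *) echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;
esac

GOLANG_VERSION=1.22.5   # illustrative version, not the one pinned by install.sh
GOLANG_URL="https://go.dev/dl/go${GOLANG_VERSION}.linux-${GOLANG_ARCH}.tar.gz"
echo "$GOLANG_URL"
```

This way the same Dockerfile works for amd64 and arm64 builds without hardcoding the download URL.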
My repo should be publicly reviewable — if not please ping me and I will remediate.
Once I am sufficiently confident I will update my homelab install, and then let's see … 8^)
But beware … resource demand for a multi-platform build is higher than for a single-platform build:
Update 2025-01-03
In the Dockerfile, additional packages are required:

```Dockerfile
    libmagic1 \
    libpixman-1-dev libcairo2-dev libpango1.0-dev libjpeg8-dev libgif-dev \
    && update-locale LANG=C.UTF-8 LC_MESSAGES=POSIX \
```

The `libpixman-1-dev` ... `libgif-dev` line is required on ARM (but doesn't hurt on x86).

Background: the install.sh script downloads https://github.com/Automattic/node-canvas in binary form, but there is only an x86 release, hence it tries to build it from source, which fails due to the missing pixman-1 dependency. The libs above are listed as its build prerequisites ...
I'm sure many here watching this thread are thinking the same: your collective efforts are very much appreciated!! I cannot help with the build process (beyond my pay grade), but if you would like some end-user testing when images are available (against an existing, known-to-be-working non-prod instance, i.e. version upgrade testing), I'm pretty sure I can dedicate some time to it (only as/when you're ready and need it).
Hi @fidoedidoe, if you want to test something, my repo should be public at https://gitlab.fami.ga/misc/docker-gitlab. It is currently running the 17.6.2 version I built, but I also have a branch with the 17.7.0 version, which I managed to build cleanly after some woes; it is available in the corresponding Container Registry.
AFAIK all the build logs are also public (correct me if I am wrong), so you should be able to convince yourself that I did not pull any sneaky shenanigans. Or you can clone and build it yourself if you want 17.6.2 and its fixes.
You can find an image for testing the multistage build at https://hub.docker.com/th2021/docker-gitlab. The corresponding GitHub repo is the multistage branch of https://github.com/th-2021/docker-gitlab/. The image has amd64 and arm64 flavors.
I got a 404 for https://hub.docker.com/th2021/docker-gitlab. Correction: https://hub.docker.com/r/th2021/docker-gitlab
@th-2021 - I also get the 404 for the provided docker image URL, but this one works: https://hub.docker.com/r/th2021/docker-gitlab (for viewing in a web browser). I'll download the image (`docker pull th2021/docker-gitlab:17.6.2-d7225209`) and start running some tests (most likely starting tomorrow, Friday 10th Jan).
EDIT 1: Docker Image size
My first observation, based on the docker image pull, is that it's a lot bigger. Can I ask: were the docker layers squashed before pushing the image to Docker Hub (perhaps that's a next step once the image has been validated)?
```
$ docker image ls
REPOSITORY             TAG               IMAGE ID       CREATED        SIZE
th2021/docker-gitlab   17.6.2-d7225209   363da06c8892   21 hours ago   6.09GB
sameersbn/gitlab       17.5.1            2e315f7ef257   2 months ago   4.08GB
```
Edit 1.1 - Docker Image Squashed
Using docker-squash.sh (link); I'm not promoting its use, just using it to illustrate the difference: when the original docker image is squashed ("th2021/docker-gitlab:17.6.2-squashed"), it is reduced significantly, and is much more in line with the sameersbn image size shown above.
```
$ docker image ls
REPOSITORY             TAG               IMAGE ID       CREATED          SIZE
th2021/docker-gitlab   17.6.2-squashed   78fc6353932e   52 seconds ago   4.17GB
th2021/docker-gitlab   17.6.2-d7225209   363da06c8892   38 hours ago     6.09GB
```
EDIT 2: Upgrade successful & Web UI Running
Okay, an upgrade from sameersbn/gitlab:17.5.1 -> th2021/docker-gitlab:17.6.2-d7225209 has worked, on the face of it: I can log in to the GitLab web UI as admin (details below pulled from the web UI admin interface). I'll start testing git functions and the GitLab UI tomorrow and will add findings/feedback to this post.
GitLab version: [v17.6.2](https://gitlab.com/gitlab-org/gitlab-foss/-/tags/v17.6.2)
GitLab Shell: 14.39.0
GitLab Workhorse: v17.6.2
GitLab API: v4
Ruby: 3.2.6p234
Rails: 7.0.8.4
EDIT 3: New/changed Env Vars
Running `docker inspect <container>`, I see some differences in the published env vars in the container (comparing 17.5.1 to 17.6.2); these are listed below. I'm not sure whether these are leftovers from your image build or new env vars introduced in GitLab 17.6.x:
- "PATH=" additionally now contains: "/tmp/go/bin;"
- "GITLAB_CLONE_URL=....." (new when compared to 17.5.1)
- "GITLAB_SHELL_URL=......" (new when compared to 17.5.1)
- "GITLAB_PAGES_URL=....." (new when compared to 17.5.1)
- "GITLAB_GITALY_URL=....." (new when compared to 17.5.1)
- "GITLAB_WORKHORSE_BUILD_DIR=....." (new when compared to 17.5.1)
- "GITLAB_PAGES_BUILD_DIR=....." (new when compared to 17.5.1)
- "GEM_CACHE_DIR=....." (new when compared to 17.5.1)
- "RUBY_SRC_URL=....." (new when compared to 17.5.1)
- "GOROOT=....." (new when compared to 17.5.1)
EDIT 4: Git actions (clone, commit, push)
Basic clone, modify, commit & push are successful, and all changes are correctly reflected in the web UI too.
The image should be `th2021/docker-gitlab`.
First, thank you for maintaining this project over the years! I've been using sameersbn/docker-gitlab for quite some time now and have been very happy with its flexibility and modularity. However, I recently started revisiting my deployment choices and noticed that the official GitLab Docker image has come a long way since I first chose this project. I’d like to ask the community and maintainers for some guidance:
What are the key advantages of continuing to use sameersbn/docker-gitlab today?
Does this project still offer unique benefits that make it a better choice for long-term deployments? Are there scenarios where this setup outperforms the official GitLab image in terms of flexibility, resource usage, or integrations? For example, are there significant benefits in terms of updates, security, or feature parity that the official image provides?
I appreciate the hard work that has gone into this project and understand it has been a great solution for many users, myself included. I’d love to hear thoughts from the maintainers and community on how sameersbn/docker-gitlab fits into the current GitLab Docker landscape and whether sticking with this setup remains a good long-term strategy.
The official image doesn't support a relative URL, e.g. /gitlab.
@fidoedidoe The image might be bigger because some caches and other intermediate files are not deleted. The new ENV/ARG variables were moved from install.sh to Dockerfile.multistage, so yes, they are new as far as Docker is concerned.
Take a look at the effect `docker-squash.sh` has on the image size (see EDIT 1.1 in my expanded post above, link).