Upgrade Buildroot to 2025.02.8 LTS for GCC 13+ compatibility
Fixes #21967 Fixes #20993
Changes:
- Upgrade BUILDROOT_BRANCH from 2025.02 to 2025.02.8 LTS
The newer Buildroot version includes updated package versions that are compatible with modern GCC 13+ toolchains, eliminating build failures on recent Linux distributions like Fedora 39+.
Tested on Fedora 43 with GCC 13.3.1.
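The bump itself is a one-line change; a sketch of the relevant Makefile fragment (the variable name BUILDROOT_BRANCH comes from the change list above; the surrounding context and default-assignment form are assumptions):

```makefile
# Top-level Makefile (sketch): pin the Buildroot checkout to the LTS point release
BUILDROOT_BRANCH ?= 2025.02.8
```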
Hi @vtri950. Thanks for your PR.
I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.
Once the patch is verified, the new status will be reflected by the ok-to-test label.
I understand the commands that are listed here.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Can one of the admins verify this patch?
/ok-to-build-iso
failed iso build logs: https://storage.cloud.google.com/minikube-builds/logs/21997/0cbf9da/iso_build.txt
Build failed with:
+ git remote add vtri950 git@github.com:vtri950/minikube.git
+ git fetch vtri950
ERROR: Repository not found.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Build step 'Execute shell' marked build as failure
@medyagh any clue why the build tries to add this remote?
failed iso build logs: https://storage.cloud.google.com/minikube-builds/logs/21997/0cbf9da/iso_build.txt
Build failed with:
+ git remote add vtri950 git@github.com:vtri950/minikube.git
+ git fetch vtri950
ERROR: Repository not found.
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Build step 'Execute shell' marked build as failure

@medyagh any clue why the build tries to add this remote?
the issue is here https://github.com/kubernetes/minikube/blob/master/hack/jenkins/build_iso.sh#L88
@vtri950 please rebase on master. The gluster package was already removed from master a long time ago. This may also fix the build-iso job.
After rebase we see:
% git show --stat
commit 113d9e35755e9b5a7aea6537aded9ef3a56605bd (HEAD -> upgrade-buildroot-gcc13)
Author: Vidit Tripathi <[email protected]>
Date:   Thu Nov 27 08:13:57 2025 +0530

    Upgrade Buildroot to 2025.08.2 for GCC 13+ compatibility

 Makefile                                             |  2 +-
 deploy/iso/minikube-iso/package/Config.in            |  1 -
 deploy/iso/minikube-iso/package/podman/Config.in     | 11 -----------
 deploy/iso/minikube-iso/package/podman/override.conf |  4 ----
 deploy/iso/minikube-iso/package/podman/podman.conf   |  1 -
 deploy/iso/minikube-iso/package/podman/podman.hash   |  7 -------
 deploy/iso/minikube-iso/package/podman/podman.mk     | 81 ---------------------
 7 files changed, 1 insertion(+), 106 deletions(-)
done
/ok-to-build-iso
/cc @afbjorklund
Upgrade Buildroot to 2025.08.2 for GCC 13+ compatibility
Fixes #21967
Changes:
- Upgrade BUILDROOT_BRANCH from 2025.02 to 2025.08.2
The latest push changes the version to 2025.02.8.
- Remove custom gluster package (requires deprecated Python 2)
This was already done in master a while ago.
- Remove custom podman package (now built into Buildroot 2025.08.2)
The newer Buildroot version includes updated package versions that are compatible with modern GCC 13+ toolchains, eliminating build failures on recent Linux distributions like Fedora 39+.
Tested on Fedora 43 with GCC 13.3.1.
Did you test the latest push with Fedora 43?
Please update the PR message to reflect the actual change. If you are still testing, you can convert the PR to a draft.
/ok-to-build-iso
Hi @vtri950, building a new ISO failed for Commit 8a091861416f76b9f5bdd508f6cdc2043fdfde6e See the logs at: https://storage.cloud.google.com/minikube-builds/logs/21997/8a09186/iso_build.txt
@vtri950 you can ignore this message, it is for the commit you replaced. We will get a new comment when the current build completes.
The build log will be here: https://storage.cloud.google.com/minikube-builds/logs/21997/6232705/iso_build.txt
It seems like this build issue is specific to Fedora*, but it is probably a good idea to update the OS minor version anyway...
* The default build environment uses Debian, currently bookworm (12, oldstable): support/docker/Dockerfile
There was some earlier attempt to bump the Buildroot version, not sure what happened to that one?
- https://github.com/kubernetes/minikube/issues/20993
There was some earlier attempt to bump the Buildroot version, not sure what happened to that one?
We have not attempted to resolve this issue yet. @vtri950 you can add this issue to this PR.
Upgrade Buildroot to 2025.08.2 for GCC 13+ compatibility
Fixes #21967
Changes:
- Upgrade BUILDROOT_BRANCH from 2025.02 to 2025.08.2
The latest push changes the version to 2025.02.8.
- Remove custom gluster package (requires deprecated Python 2)
This was already done in master a while ago.
- Remove custom podman package (now built into Buildroot 2025.08.2)
The newer Buildroot version includes updated package versions that are compatible with modern GCC 13+ toolchains, eliminating build failures on recent Linux distributions like Fedora 39+.
Tested on Fedora 43 with GCC 13.3.1.
i have updated the description
This is something to watch out for in general, when replacing minikube packages with buildroot packages:
-config BR2_PACKAGE_PODMAN
- bool "podman"
- default y
config BR2_PACKAGE_PODMAN
bool "podman"
That is, the minikube packages were selected by default - while buildroot packages need to be selected:
BR2_PACKAGE_PODMAN=y
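Concretely, switching to the upstream package means adding the selection to minikube's Buildroot defconfig instead of relying on a `default y` in a custom Config.in (a sketch; the exact defconfig path is an assumption):

```
# minikube's Buildroot defconfig (hypothetical path:
# deploy/iso/minikube-iso/configs/minikube_defconfig)
BR2_PACKAGE_PODMAN=y
```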
Since minikube is already providing a buildroot defconfig, having packages auto-select themselves was a mistake. Having most packages default to using external pre-built binaries was also a mistake, albeit a more deliberate one.
It would have been nice to work closer with Buildroot upstream, and submit more packages for inclusion there.
The default podman configuration can only be accessed by root, so it will not work for docker@minikube
It would be possible to move this configuration outside the package though, and use systemd overrides.
/etc/systemd/system/podman.socket.d/override.conf
/etc/tmpfiles.d/podman.conf
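A minimal sketch of what those two files could contain, assuming the goal is to make the podman socket group-accessible rather than root-only (the group name, modes, and paths beyond the two filenames above are assumptions, not the actual change):

```
# /etc/systemd/system/podman.socket.d/override.conf
[Socket]
SocketMode=0660
SocketGroup=podman
```

```
# /etc/tmpfiles.d/podman.conf
# type path         mode user group age
d      /run/podman  0770 root podman -
```

A drop-in like this would take effect after `systemctl daemon-reload` and restarting podman.socket.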
@vtri950 based on @afbjorklund's comments, I think we should split the change to use the builtin podman package into another change. If updating the buildroot version to 2025.02.8 fixes the build on Fedora 43, that is good enough. We can work on a better podman package later.
We can work on a better podman package later.
You could also use podman 4.9 with CNI support, if you want to avoid the Rust dependencies.
The network needs of a podman build are quite limited, so it doesn't need anything fancy...
i.e. since the image already has the cni plugins?
The alternative to using the upstream package is to rename your own to something different.
But if you are staying with the 2025.02 LTS version, that is mostly an issue for the 2026.02 LTS later.
@vtri950 I think the issue causing the ISO build failure is that the minikube build script assumes that your fork is named "minikube":
+ git remote add vtri950 git@github.com:vtri950/minikube.git
+ git fetch vtri950
ERROR: Repository not found.
fatal: Could not read from remote repository.
But your fork is named minikube-fork.
This is a bug in the minikube build script or the Jenkins configuration; you can open an issue about it.
To continue with this change, rename your fork to "minikube".
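The failure mode follows from the log above: the script only knows the PR author's username and hard-codes the repository name, so a fork named minikube-fork resolves to a nonexistent repository. A minimal shell sketch of that assumption (variable names are hypothetical, not the actual build_iso.sh code):

```shell
#!/usr/bin/env bash
# Sketch of the URL derivation the build script appears to do (assumption):
# it combines the PR author's username with a hard-coded repo name "minikube".
GITHUB_USER="vtri950"
FORK_URL="git@github.com:${GITHUB_USER}/minikube.git"

# If the actual fork is named "minikube-fork", this URL points at a repo that
# does not exist, which matches the "Repository not found" error above.
echo "${FORK_URL}"
```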
/ok-to-build-iso
Hi @vtri950, we have updated your PR with the reference to newly built ISO. Pull the changes locally if you want to test with them or update your PR further.
/ok-to-test
kvm2 driver with docker runtime
┌────────────────┬──────────┬────────────────────────┐
│ COMMAND │ MINIKUBE │ MINIKUBE ( PR 21997 ) │
├────────────────┼──────────┼────────────────────────┤
│ minikube start │ 42.1s │ 41.0s │
│ enable ingress │ 15.7s │ 15.9s │
└────────────────┴──────────┴────────────────────────┘
Times for minikube (PR 21997) ingress: 16.3s 15.7s 15.9s 15.9s 15.8s
Times for minikube ingress: 15.3s 15.7s 15.8s 15.8s 15.8s
Times for minikube (PR 21997) start: 42.5s 41.8s 39.9s 41.5s 39.4s
Times for minikube start: 41.0s 44.0s 42.1s 41.1s 42.1s
docker driver with docker runtime
┌───────────────────┬──────────┬────────────────────────┐
│ COMMAND │ MINIKUBE │ MINIKUBE ( PR 21997 ) │
├───────────────────┼──────────┼────────────────────────┤
│ minikube start │ 22.2s │ 22.4s │
│ ⚠️ enable ingress │ 17.2s │ 22.9s ⚠️ │
└───────────────────┴──────────┴────────────────────────┘
Times for minikube start: 19.3s 21.5s 23.8s 22.4s 23.8s
Times for minikube (PR 21997) start: 22.4s 23.5s 20.9s 21.8s 23.2s
Times for minikube ingress: 13.6s 10.6s 40.6s 10.6s 10.6s
Times for minikube (PR 21997) ingress: 10.6s 72.1s 9.6s 10.6s 11.6s
docker driver with containerd runtime
┌────────────────┬──────────┬────────────────────────┐
│ COMMAND │ MINIKUBE │ MINIKUBE ( PR 21997 ) │
├────────────────┼──────────┼────────────────────────┤
│ minikube start │ 20.2s │ 21.7s │
│ enable ingress │ 20.3s │ 20.5s │
└────────────────┴──────────┴────────────────────────┘
Times for minikube start: 22.2s 20.3s 18.0s 18.4s 22.2s
Times for minikube (PR 21997) start: 21.0s 21.6s 22.2s 22.8s 20.9s
Times for minikube (PR 21997) ingress: 20.1s 21.1s 20.1s 20.1s 21.1s
Times for minikube ingress: 21.1s 20.1s 20.1s 20.1s 20.1s
@vtri950 tests look good:
- KVM_Linux — Jenkins: completed with success in 63.71 minutes.
- KVM_Linux_containerd — Jenkins: completed with success in 74.33 minutes.
Other tests are not relevant (docker*) or known to fail (KVM_Linux_crio).
Consider updating the building-iso docs to document that we can also build on Fedora 43. We could have a section like "Building on Fedora". https://minikube.sigs.k8s.io/docs/contrib/building/iso/
Here are the top 10 failed tests in each environment with the lowest flake rate.
| Environment | Test Name | Flake Rate |
|---|---|---|
Besides the above, the following environments also have failed tests:
- Docker_Linux_containerd_arm64: 25 failed (gopogh)
- KVM_Linux_crio: 3 failed (gopogh)
- Docker_Linux_crio: 48 failed (gopogh)
- Docker_Linux_crio_arm64: 57 failed (gopogh)
To see the flake rates of all tests by environment, click here.
Consider updating the building-iso docs to document that we can also build on Fedora 43. We could have a section like "Building on Fedora". https://minikube.sigs.k8s.io/docs/contrib/building/iso/
done
kvm2 driver with docker runtime
┌────────────────┬──────────┬────────────────────────┐
│ COMMAND │ MINIKUBE │ MINIKUBE ( PR 21997 ) │
├────────────────┼──────────┼────────────────────────┤
│ minikube start │ 39.7s │ 38.4s │
│ enable ingress │ 15.6s │ 15.1s │
└────────────────┴──────────┴────────────────────────┘
Times for minikube start: 38.5s 38.7s 37.6s 44.2s 39.5s
Times for minikube (PR 21997) start: 37.9s 37.7s 37.9s 39.0s 39.6s
Times for minikube ingress: 15.2s 15.7s 15.7s 16.2s 15.2s
Times for minikube (PR 21997) ingress: 14.7s 16.2s 15.2s 14.7s 14.7s
docker driver with docker runtime
┌────────────────┬──────────┬────────────────────────┐
│ COMMAND │ MINIKUBE │ MINIKUBE ( PR 21997 ) │
├────────────────┼──────────┼────────────────────────┤
│ minikube start │ 23.4s │ 22.2s │
│ enable ingress │ 10.6s │ 10.6s │
└────────────────┴──────────┴────────────────────────┘
Times for minikube (PR 21997) start: 23.4s 21.6s 20.9s 24.7s 20.3s
Times for minikube start: 24.6s 24.3s 23.2s 20.7s 24.1s
Times for minikube ingress: 10.6s 10.6s 10.6s 10.6s 10.6s
Times for minikube (PR 21997) ingress: 10.6s 10.6s 10.6s 10.6s 10.6s
docker driver with containerd runtime
┌────────────────┬──────────┬────────────────────────┐
│ COMMAND │ MINIKUBE │ MINIKUBE ( PR 21997 ) │
├────────────────┼──────────┼────────────────────────┤
│ minikube start │ 18.7s │ 19.2s │
│ enable ingress │ 20.3s │ 20.3s │
└────────────────┴──────────┴────────────────────────┘
Times for minikube start: 17.2s 18.2s 18.5s 21.7s 18.0s
Times for minikube (PR 21997) start: 18.9s 18.7s 17.8s 19.0s 21.8s
Times for minikube ingress: 20.1s 20.1s 20.1s 21.1s 20.1s
Times for minikube (PR 21997) ingress: 20.1s 21.1s 20.1s 20.1s 20.1s
Here are the top 10 failed tests in each environment with the lowest flake rate.
| Environment | Test Name | Flake Rate |
|---|---|---|
Besides the above, the following environments also have failed tests:
- Docker_Linux_containerd_arm64: 34 failed (gopogh)
- Docker_Linux_crio_arm64: 57 failed (gopogh)
- KVM_Linux_crio: 3 failed (gopogh)
- Docker_Linux_crio: 48 failed (gopogh)
To see the flake rates of all tests by environment, click here.