cluster-api
bad architecture metadata in k8s.gcr.io images
What steps did you take and what happened:
The architecture metadata in the k8s.gcr.io/cluster-api/xxx images is incorrect.
For example:
$ docker pull --platform linux/arm64/v8 k8s.gcr.io/cluster-api/cluster-api-controller:v1.2.0
$ docker inspect k8s.gcr.io/cluster-api/cluster-api-controller:v1.2.0 | grep Arch
"Architecture": "amd64",
$ docker pull --platform linux/ppc64le k8s.gcr.io/cluster-api/cluster-api-controller:v1.2.0
$ docker inspect k8s.gcr.io/cluster-api/cluster-api-controller:v1.2.0 | grep Arch
"Architecture": "amd64",
The other images are correct:
$ docker pull --platform linux/arm64/v8 k8s.gcr.io/pause:3.7
$ docker inspect k8s.gcr.io/pause:3.7 | grep Arch
"Architecture": "arm64",
What did you expect to happen:
Anything else you would like to add:
Environment:
- Cluster-api version: v1.2.0
- minikube/kind version:
- Kubernetes version: (use kubectl version):
- OS (e.g. from /etc/os-release):
/kind bug
@24sama: This issue is currently awaiting triage.
If CAPI contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Has anyone encountered this problem?
Interesting finding.
This comes from https://github.com/kubernetes-sigs/cluster-api/blob/9663ed6ab6dedfc7a343430976d4442c6aae4395/Dockerfile#L63, which uses the platform of the machine the image is built on.
We already have the ARCH build-arg, which could be used to set --platform in the FROM line; that would fix this.
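The change described above would look roughly like this (a sketch only; the actual base image and the way ARCH is wired through the build may differ in the real Dockerfile):

```dockerfile
# ARCH is the existing build-arg mentioned above; it must be declared
# before FROM to be usable in the FROM line.
ARG ARCH

# Pin the base image to the target architecture instead of the platform
# of the build machine, so the image config's "Architecture" field
# matches the manifest entry for each arch.
FROM --platform=linux/${ARCH} gcr.io/distroless/static:nonroot
```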
I will create a PR shortly :-)