
HNC: Multiarch images (e.g. arm64)

JohannesLamberts opened this issue 2 years ago

Hi,

we just ran into an issue where the image gcr.io/k8s-staging-multitenancy/hnc-manager:v1.0.0 would not start on an ARM cluster.

The issue has already been raised in https://github.com/kubernetes-sigs/multi-tenancy/issues/1383 and should have been fixed by https://github.com/kubernetes-sigs/hierarchical-namespaces/pull/45 on 5 Jul 2021.

I'm not quite sure whether the particular image I'm referring to is unrelated to the PR mentioned above, or whether I'm missing something else.

When inspecting the image via docker inspect gcr.io/k8s-staging-multitenancy/hnc-manager:v1.0.0, the only architecture I get is amd64. The same goes for the latest tag, master-34837c2.
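For anyone wanting to reproduce this check, the architectures published under a tag can also be read straight from the registry without pulling the image. A minimal sketch using standard Docker CLI commands (the tags are the ones mentioned above):

# Show the platform of the locally pulled image; a single-arch image
# reports exactly one value (here it prints "linux/amd64").
docker inspect --format '{{.Os}}/{{.Architecture}}' \
  gcr.io/k8s-staging-multitenancy/hnc-manager:v1.0.0

# Query the registry manifest directly: a multi-arch tag lists several
# "platform" entries, while a single-arch tag lists at most one.
docker manifest inspect gcr.io/k8s-staging-multitenancy/hnc-manager:v1.0.0 \
  | grep -A2 '"platform"'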

Can you provide any insight? Either multi-arch images are available and I've missed some crucial piece of information, or there are no multi-arch images at the moment and we should switch to AMD until such an image is ready.

Thanks!

JohannesLamberts avatar Apr 19 '22 15:04 JohannesLamberts

Unfortunately, multi-arch builds were an experiment that was also mixed in with a feature we've decided not to pursue (GitHub workflows), so we've removed them for the time being. They were never officially released as part of our build.

If you can switch to AMD, that would be the best way for you to get started with HNC. We'd also be interested in any PRs against our existing build process (e.g. fixes to make build) that would add multi-arch images, as opposed to entirely new additions such as GitHub workflows. But we don't have any way to test this ourselves.
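For reference, the usual way to produce such an image with plain Docker tooling is a buildx multi-platform build. A minimal sketch, assuming the repository's existing Dockerfile builds cleanly for both targets; the IMG tag below is a placeholder, not anything from this project's pipeline:

# Create (once) a builder instance that can emit multi-platform manifests.
docker buildx create --name hnc-builder --use

# Build for both architectures and push a single multi-arch manifest.
# Building the arm64 half on an amd64 host relies on QEMU emulation
# unless the build itself is made cross-compilation-aware.
IMG=example.registry/hnc-manager:dev
docker buildx build --platform linux/amd64,linux/arm64 -t "$IMG" --push .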

adrianludwin avatar Apr 19 '22 15:04 adrianludwin

I was working on this, and our current build pipeline is set up for it, but I got stuck on some QEMU issues. I haven't had time to look into it fully, but if you have any suggestions I'm happy to hear them.

If you can get the cloudbuild-build.yaml to work with ARM in Cloud Build, I'll happily integrate it. As of now, I can get it working on my Mac just fine, but not in Cloud Build. I haven't spent a huge amount of time on this either, so it may be something simple I missed.
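For what it's worth, a QEMU failure that appears on a Linux build machine but not on a Mac (where Docker Desktop ships emulation support out of the box) is often just missing binfmt registration. A hedged sketch of the commands a build step would need to run first; the image used here is the commonly used binfmt installer, not anything taken from this repo's Cloud Build config:

# Register QEMU interpreters for foreign architectures on the build host.
# Docker Desktop does this automatically, which is why the build can work
# on a Mac; a bare Linux worker usually does not.
docker run --privileged --rm tonistiigi/binfmt --install arm64

# Afterwards, an arm64 build can run under emulation on an amd64 host:
docker buildx build --platform linux/arm64 -t hnc-manager:arm64-test .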

rjbez17 avatar Apr 20 '22 17:04 rjbez17

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 19 '22 17:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 18 '22 18:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

k8s-triage-robot avatar Sep 17 '22 18:09 k8s-triage-robot

@k8s-triage-robot: Closing this issue.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Sep 17 '22 18:09 k8s-ci-robot

@rjbez17 any chance you got back around to this :D? Sadly I'm not able to help out on the issue, but I would really love an arm64 build as well for my k3s cluster :)

Or maybe @adrianludwin: since April last year, have there been any talks about supporting this officially?

Nopzen avatar Mar 17 '23 17:03 Nopzen

/remove-lifecycle rotten

I can take another look now that it's been a while, but I don't have a ton of time to dedicate to the project at the moment.

rjbez17 avatar Mar 17 '23 19:03 rjbez17

@rjbez17 No worries. I would love to help, but I'm not sure I'd be of much use, as I don't have access to GCP Cloud Build.

Nopzen avatar Mar 18 '23 09:03 Nopzen