cluster-api-addon-provider-helm
Make targets should error if `REGISTRY` is unset
What steps did you take and what happened:
I naively ran `make test-e2e-local` and it failed with an error that took a bit of investigation to fix:
```
DOCKER_BUILDKIT=1 docker build --build-arg builder_image=docker.io/library/golang:1.21.5 --build-arg goproxy=https://proxy.golang.org,direct --build-arg ARCH=arm64 --build-arg ldflags="-X 'sigs.k8s.io/cluster-api-addon-provider-helm/version.buildDate=2024-03-04T18:11:00Z' -X 'sigs.k8s.io/cluster-api-addon-provider-helm/version.gitCommit=a63ed8ffebda615e93f8f4c48a362b4a94746b5d' -X 'sigs.k8s.io/cluster-api-addon-provider-helm/version.gitTreeState=clean' -X 'sigs.k8s.io/cluster-api-addon-provider-helm/version.gitMajor=0' -X 'sigs.k8s.io/cluster-api-addon-provider-helm/version.gitMinor=1' -X 'sigs.k8s.io/cluster-api-addon-provider-helm/version.gitVersion=v0.1.1-alpha.1.11-a63ed8ffebda61' -X 'sigs.k8s.io/cluster-api-addon-provider-helm/version.gitReleaseCommit=ca44da536ee11b5ffd27f6f1a75f8a9552266caf'" . -t gcr.io//cluster-api-helm-controller-arm64:dev
[+] Building 0.0s (0/0)  docker:desktop-linux
ERROR: invalid tag "gcr.io//cluster-api-helm-controller-arm64:dev": invalid reference format
```
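The double slash in the rejected tag comes from interpolating an empty registry component. A minimal reproduction of the failure mode (the `PROJECT` variable and exact tag layout are assumptions for illustration; the real Makefile may compose the image name differently):

```shell
# Hypothetical: REGISTRY is built from a project value that resolved empty,
# leaving a trailing slash, so the composed tag has "//" and docker rejects it.
PROJECT=""
REGISTRY="gcr.io/${PROJECT}"
echo "${REGISTRY}/cluster-api-helm-controller-arm64:dev"
# prints gcr.io//cluster-api-helm-controller-arm64:dev
```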
What did you expect to happen:
Perhaps the Makefile could warn (or specific targets could error) if `REGISTRY` isn't set. Making this clear would help contributors figure out how to run the e2e tests locally.
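As a sketch of what such a guard could look like, here is a shell pre-flight check a build target could run before invoking `docker build` (the variable name `REGISTRY` comes from the issue; the function name, error message, and example registry value are illustrative, not the project's actual output):

```shell
# Hypothetical guard: fail fast with a clear message when REGISTRY is
# empty, instead of letting docker reject a malformed "gcr.io//..." tag.
check_registry() {
  if [ -z "${REGISTRY:-}" ]; then
    echo "error: REGISTRY is not set (e.g. REGISTRY=gcr.io/my-project)" >&2
    return 1
  fi
}

# With REGISTRY unset, the guard aborts before docker ever runs:
unset REGISTRY
check_registry || echo "build aborted: REGISTRY unset"
```

In a Makefile the same idea is usually expressed with `ifndef REGISTRY` plus `$(error ...)` inside the relevant targets, which stops make with an explicit message rather than a cryptic docker failure.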
/kind bug
/assign @mboersma
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.