cluster-api-provider-aws
Generating dependency report fails on release
/kind bug
What steps did you take and what happened:
While cutting the recent v2.3.x and v2.4.0-beta.0 releases, we encountered the following problem when generating the dependency report for the release notes:
```
ERRO unable to run cmd: go list -mod=readonly -m all, workdir: /tmp/go-modiff3838785554, stdout: , stderr: go: updates to go.mod needed, disabled by -mod=readonly; to update it:
	go mod tidy
, error: exit status 1 file="modiff/modiff.go:276"
FATA generating dependency report: getting dependency changes: unable to run cmd: go list -mod=readonly -m all, workdir: /tmp/go-modiff3838785554, stdout: , stderr: go: updates to go.mod needed, disabled by -mod=readonly; to update it:
	go mod tidy
, error: exit status 1 file="release-notes/main.go:193"
```
Digging into it (and looking at the temporary directory created), it turns out that changes to go.mod are required, including adding:
```
toolchain go1.21.5
```
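For reference, this can be reproduced and the directive added by hand; a minimal sketch, assuming a local Go 1.21+ toolchain and an illustrative checkout path:

```sh
# Reproduce the failure in a checkout of the repo and record the directive
# Go is asking for (the path below is illustrative).
cd ~/src/cluster-api-provider-aws
go list -mod=readonly -m all        # fails: updates to go.mod needed, disabled by -mod=readonly
go mod edit -toolchain=go1.21.5     # writes "toolchain go1.21.5" into go.mod
go list -mod=readonly -m all        # should now succeed with the directive present
```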
However, adding this causes the staging image build to fail with:
```
/workspace/hack/tools/go.mod:5: unknown directive: toolchain
go: errors parsing go.mod:
```
What did you expect to happen:
The build of the staging image and the release-notes generation should both succeed.
Anything else you would like to add:
Short term, we can disable the dependency report generation. Then, for a future release, we can look to:
- add the `toolchain` directive back into go.mod
- upgrade the GCB image so its Go version understands the `toolchain` directive
Environment:
- Cluster-api-provider-aws version:
- Kubernetes version: (use `kubectl version`):
- OS (e.g. from `/etc/os-release`):
/triage accepted
/lifecycle active
There is an issue with release-notes. It turns out that if the previous release (i.e. the one specified in `--start-sha`) was on a Go version prior to 1.21, and the current release (i.e. the one specified in `--end-sha`) is now on Go 1.21, then when the tool checks out the starting sha there is no `toolchain` directive in go.mod, so when `go list -mod=readonly -m all` is run it tries to update go.mod to add `toolchain` and fails.
We will probably have to disable the dependency report generation short term using `--dependencies=false` and then re-enable it in the next release.
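A sketch of that short-term workaround with the release-notes tool; only `--start-sha`, `--end-sha` and `--dependencies` are mentioned above, so treat the remaining flags and values as illustrative assumptions rather than our exact release invocation:

```sh
# Skip the dependency report so the tool never runs `go list` against the
# mismatched go.mod files of the two checkouts.
release-notes \
  --org kubernetes-sigs \
  --repo cluster-api-provider-aws \
  --start-sha <previous-release-sha> \
  --end-sha <current-release-sha> \
  --dependencies=false \
  --output /tmp/release-notes.md
```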
I will also investigate whether we can influence this behaviour via environment variables.
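Two env-var angles worth checking; both variables are standard Go settings, but whether either actually avoids the go.mod rewrite in this case is an assumption that still needs verifying:

```sh
# Pin go commands to the locally installed toolchain and disable automatic
# toolchain switching; may stop go from wanting to record a toolchain line.
GOTOOLCHAIN=local go list -mod=readonly -m all

# Allow go to update the temporary checkout's go.mod instead of failing;
# only effective if the tool does not pass -mod=readonly explicitly.
GOFLAGS=-mod=mod go list -m all
```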
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with `/triage accepted` (org members only)
- Close this issue with `/close`
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
> The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
> This bot triages issues according to the following rules:
> - After 90d of inactivity, `lifecycle/stale` is applied
> - After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
> - After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
> You can:
> - Reopen this issue with `/reopen`
> - Mark this issue as fresh with `/remove-lifecycle rotten`
> - Offer to help out with Issue Triage
> Please send feedback to sig-contributor-experience at kubernetes/community.
> /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.