
Generating dependency report fails on release

Open richardcase opened this issue 1 year ago • 6 comments

/kind bug

What steps did you take and what happened:

When trying to do the recent v2.3.x and v2.4.0-beta.0 releases, we encountered the following problem when generating the dependency report for the release notes:

```
ERRO unable to run cmd: go list -mod=readonly -m all, workdir: /tmp/go-modiff3838785554, stdout: , stderr: go: updates to go.mod needed, disabled by -mod=readonly; to update it:
       go mod tidy
, error: exit status 1  file="modiff/modiff.go:276"
FATA generating dependency report: getting dependency changes: unable to run cmd: go list -mod=readonly -m all, workdir: /tmp/go-modiff3838785554, stdout: , stderr: go: updates to go.mod needed, disabled by -mod=readonly; to update it:
       go mod tidy
, error: exit status 1  file="release-notes/main.go:193"
```

Digging into it (and looking at the temporary directory created), it requires changes to go.mod, including:

```
toolchain go1.21.5
```
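For reference, a go.mod carrying that directive looks roughly as follows; the module path and versions here are illustrative placeholders, not the actual contents of hack/tools/go.mod:

```
module example.com/hack/tools // placeholder module path, for illustration only

go 1.21

toolchain go1.21.5
```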

However, adding this causes the staging image build to fail with:

```
/workspace/hack/tools/go.mod:5: unknown directive: toolchain
go: errors parsing go.mod:
```
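A rough way to confirm this is purely a Go-version problem (a sketch only; it assumes the builder image ships a Go release older than 1.21, which is what the GCB image upgrade mentioned below would address) is to parse the module with an older toolchain:

```sh
# Sketch: a pre-1.21 Go does not recognise the toolchain directive, so parsing
# hack/tools/go.mod with it should reproduce the "unknown directive" error.
docker run --rm -v "$PWD/hack/tools":/src -w /src golang:1.20 go list -m
```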

What did you expect to happen:

The build of the staging image and release notes should succeed.

Anything else you would like to add:

In the short term we can disable the dependency report generation. Then, for the future, we can look to:

  • add the `toolchain` directive back into go.mod
  • upgrade the GCB image

Environment:

  • Cluster-api-provider-aws version:
  • Kubernetes version: (use kubectl version):
  • OS (e.g. from /etc/os-release):

richardcase avatar Feb 23 '24 14:02 richardcase

/triage accepted
/lifecycle active

richardcase avatar Feb 23 '24 14:02 richardcase

There is an issue with release-notes. It turns out that if the previous version (i.e. the one specified in --start-sha) used a version of Go prior to 1.21 and the current version (i.e. the one specified in --end-sha) now uses Go 1.21, then the starting SHA's go.mod has no toolchain directive. So when go list -mod=readonly -m all is run against that checkout, it tries to update go.mod to add one and fails.
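Put differently, the failing sequence is roughly the following (the SHAs are placeholders; the flags and the go list command are the ones mentioned above):

```sh
# Sketch of what the release tooling effectively does; not the exact CI invocation.
release-notes \
  --start-sha "<previous-release-sha>" \
  --end-sha "<new-release-sha>"

# For the dependency report it checks out the start SHA and runs:
#   go list -mod=readonly -m all
# The start SHA's go.mod predates Go 1.21 and has no toolchain directive, so a
# Go 1.21 toolchain wants to edit go.mod, which -mod=readonly forbids.
```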

richardcase avatar Feb 23 '24 17:02 richardcase

We will probably have to disable the dependency report generation in the short term using --dependencies=false, and then re-enable it in the next release.

I will also investigate whether we can influence this behaviour via env vars.
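For illustration, the two options look roughly like this (the --dependencies flag is the one mentioned above; GOTOOLCHAIN=local is an untested assumption, not a confirmed fix):

```sh
# Short term: skip the dependency report entirely.
release-notes \
  --start-sha "<previous-release-sha>" \
  --end-sha "<new-release-sha>" \
  --dependencies=false

# Possible env-var route (assumption, needs verification): pin go to the locally
# installed toolchain so it does not try to switch or record a toolchain.
GOTOOLCHAIN=local release-notes \
  --start-sha "<previous-release-sha>" \
  --end-sha "<new-release-sha>"
```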

richardcase avatar Feb 23 '24 17:02 richardcase

This issue has not been updated in over 1 year, and should be re-triaged.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

k8s-triage-robot avatar Feb 22 '25 17:02 k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar May 23 '25 17:05 k8s-triage-robot

/remove-lifecycle stale

richardcase avatar May 27 '25 16:05 richardcase

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Aug 25 '25 17:08 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Oct 11 '25 10:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Nov 10 '25 11:11 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

k8s-ci-robot avatar Nov 10 '25 11:11 k8s-ci-robot