cluster-api-provider-openstack

A better way to sync with other communities?

jichenjc opened this issue 2 years ago • 7 comments

/kind feature

Describe the solution you'd like

Inspired by #1092: we need to update CAPO according to changes in cluster-api, OpenStack, and other upstream sources. I'm not sure whether other communities have this kind of sync mechanism? Following a process is much better than checking randomly...

So I hope to get some ideas from folks.

Anything else you would like to add:

jichenjc avatar Dec 16 '21 02:12 jichenjc

For Metal³ and CAPM3 we use Renovate for some automated dependency management. It can create PRs for bumping versions when there are new releases available. Maybe it could be useful for CAPO also?
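
For reference, a minimal Renovate configuration for a Go-module-based repository such as CAPO could look roughly like the sketch below. This is only an illustration, not the actual Metal³/CAPM3 setup, and the preset and option names should be verified against the current Renovate docs:

```json5
// renovate.json5 (Renovate also accepts plain renovate.json; JSON5 allows comments)
{
  // Start from Renovate's base preset and only run the Go modules manager.
  extends: ["config:base"],
  enabledManagers: ["gomod"],
  labels: ["dependencies"],
  packageRules: [
    {
      // Group all cluster-api module bumps into a single PR.
      matchPackageNames: ["sigs.k8s.io/cluster-api"],
      groupName: "cluster-api",
    },
  ],
  // Run `go mod tidy` after updating go.mod.
  postUpdateOptions: ["gomodTidy"],
}
```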

lentzi90 avatar Mar 07 '22 07:03 lentzi90

We plan to discuss this issue during the next office hours, 2022-03-23.

apricote avatar Mar 18 '22 15:03 apricote

We discussed this in the office hours meeting today, specifically in the context of CAPI, which is where we have to make the most frequent integration changes.

We're very much in favour of an automated mechanism which would notify us of required actions based on changes in one of our dependencies. Renovate/Dependabot might be useful here, but we suspect that a major bump is more likely to require manual action, including reading updates to the CAPI book. A bot can't do that for us. Can we have an issue raised for major bumps and a PR for minor bumps? Or we could just treat the PR like an issue.
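
As a rough sketch of what that split could look like with Renovate (option names should be double-checked against the current docs; nothing here has been agreed on), major updates could be held behind Renovate's Dependency Dashboard issue for manual approval, while minor/patch updates open PRs directly:

```json5
{
  extends: ["config:base"],
  // Renovate maintains a "Dependency Dashboard" issue summarising pending updates.
  dependencyDashboard: true,
  packageRules: [
    {
      // Minor and patch bumps: open PRs as usual (no automerge).
      matchUpdateTypes: ["minor", "patch"],
      automerge: false,
    },
    {
      // Major bumps: require approval via the dashboard issue before a PR is opened.
      matchUpdateTypes: ["major"],
      dependencyDashboardApproval: true,
      labels: ["dependencies", "major-bump"],
    },
  ],
}
```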

Unfortunately, although everybody was in favour, nobody present at the meeting was able to commit to working on it. @jichenjc, is this something you were thinking of working on?

mdbooth avatar Mar 23 '22 14:03 mdbooth

is this something you were thinking of working on?

I can't commit either :( as I actually work on this part time and have my own day job. Maybe we can cooperate and find a better way, e.g. several people working on it together?

I agree with all the above comments about the bump approach and Renovate/Dependabot usage, thanks.

jichenjc avatar Mar 24 '22 09:03 jichenjc

/assign @lentzi90

lentzi90 avatar Jun 15 '22 13:06 lentzi90

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Sep 13 '22 14:09 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Oct 13 '22 14:10 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-triage-robot avatar Nov 12 '22 15:11 k8s-triage-robot

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. (Triage rules identical to the message above.)

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

k8s-ci-robot avatar Nov 12 '22 15:11 k8s-ci-robot