
add mailing list members migration functionality

Open Priyankasaggu11929 opened this pull request 1 year ago • 13 comments

Description

This PR attempts to add functionality for migrating mailing list members between two Google Groups.

  • add MigrateMailingListMembers function in groups/service.go to implement the migration logic (a rough sketch follows this list)

  • add PerformMailingListMigration function and new flags (--migrate, --destinationGroup, --sourceGroup) in groups/service.go to trigger the mailing list members migration

  • add a new Makefile target migrate-members to execute the migration with the specified source and destination groups as:

    make migrate-members -- --sourceGroup "[email protected]" --destinationGroup "[email protected]" --migrate --confirm
    
  • add test TestMigrateMailingListMembers in groups/service_test.go

Note:

I haven't been able to test it yet, as I don't have credentials for the API calls. Once it's reviewed and merged, I'll try it in a Prow job similar to the existing ones.

/assign @mrbobbytables @MadhavJivrajani (for early feedback)
/sig contributor-experience

Priyankasaggu11929 avatar Sep 22 '23 11:09 Priyankasaggu11929

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Priyankasaggu11929. Once this PR has been reviewed and has the lgtm label, please assign spiffxp for approval. For more information, see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.

k8s-ci-robot avatar Sep 22 '23 11:09 k8s-ci-robot

cc: @cblecker for review. Thanks! :slightly_smiling_face:

Priyankasaggu11929 avatar Sep 22 '23 15:09 Priyankasaggu11929

I haven't been able to test it yet, as I don't have credentials for the API calls.

@Priyankasaggu11929 for testing - we should probably update the groups fake client: https://github.com/kubernetes/k8s.io/blob/main/groups/fake/fake_client.go

and then test the high-level behaviour. Agreed that the behaviour with actual API calls can't really be tested, but the fake client comes close.

MadhavJivrajani avatar Sep 22 '23 18:09 MadhavJivrajani

We should also add tests for this (as per your TODO): https://github.com/kubernetes/k8s.io/pull/5882#issuecomment-1731840770

Tests added in the latest commit.

@MadhavJivrajani, PTAL again. Thanks for the suggestions! :slightly_smiling_face:

Priyankasaggu11929 avatar Sep 23 '23 04:09 Priyankasaggu11929

@Priyankasaggu11929 How do you expect this to be run? Is it something a user (who presumably has credentials with sufficient permissions) runs on their local system?

My thought process looking at this:

  • There aren't many users with sufficient credentials to do this, outside of automation
  • Automation needs to be declarative in a repo in order for us to point a bot to it
  • We want owners/managers to always be declared in the yaml in the repo as authoritative
  • For some lists (like a leads list), we want all the users to always be declared in the yaml too
  • We don't want the members of some e-mail lists (like a sig list that anyone can just click the button to join) to be declared in the yaml in the repo
  • But how do we migrate an old list to a new list without declaring the individual members in the repo?
  • Also, there is throttling by Google on how many emails you can add to a list in a 24-hour period, so migrating a large list could take many runs over the course of the day. And you'd want to pick up where you left off adding folks.

cblecker avatar Oct 09 '23 00:10 cblecker

Hello @cblecker, thanks for the review!

How do you expect this to be run? Is it something a user (who presumably has credentials with sufficient permissions) runs on their local system?

Automation needs to be declarative in a repo in order for us to point a bot to it

To run this, the idea is to invoke the new make migrate-members target from a Prow job, similar to the existing ci-k8sio-groups job, with serviceAccountName: gsuite-groups-manager providing the credentials for the Google Groups operations.

Since the use case is to migrate a finite number of mailing lists (~34), I'm thinking of just hard-coding the source/destination list names in the Prow job YAML itself and updating them as needed.


We want owners/managers to always be declared in the yaml in the repo as authoritative

Owners/managers will be declared in YAML, as in the existing OWNERS file.


But how do we migrate an old list to a new list without declaring the individual members in the repo?

In my understanding, AddOrUpdateGroupMembers pulls the list of all members of a given mailing list and adds a member if not present, or updates them if present but with a stale role.

The new MigrateMailingListMembers function will similarly pull the list of all members from the source mailing list and then use the above function to add/update them in the destination list.

This is my understanding at the moment, without actual API-call testing, but let me know if it works differently.


Also, there is throttling by Google for how many emails you can add to a list in a 24 hour period, so migrating a large list could take many runs over the course of the day. And you'd want to pick up where you left off adding folks.

The MigrateMailingListMembers function will copy only new members from the source list to the destination list, so already-added members will stay as-is.

As for the rate limiting, I'll only be able to test that once we add the Prow job and it has had a few runs. I'll improve it accordingly based on those runs.

Priyankasaggu11929 avatar Oct 09 '23 12:10 Priyankasaggu11929

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jan 22 '24 04:01 k8s-triage-robot

/remove-lifecycle stale

Priyankasaggu11929 avatar Jan 22 '24 09:01 Priyankasaggu11929

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Apr 21 '24 10:04 k8s-triage-robot

/remove-lifecycle stale

Ping @cblecker @MadhavJivrajani for another round of review. :slightly_smiling_face:

Priyankasaggu11929 avatar Apr 22 '24 05:04 Priyankasaggu11929

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Jul 21 '24 06:07 k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot avatar Aug 20 '24 07:08 k8s-triage-robot

@palnabarun, @mrbobbytables, could you help perform the mailing list migrations (using Steering's elevated permissions)?

I'll then close this PR in favor of that. Thanks!

Priyankasaggu11929 avatar Aug 20 '24 08:08 Priyankasaggu11929