add aws lambda function to send patch cherry pick notification
What type of PR is this?
/kind feature
What this PR does / why we need it:
- add aws lambda function to send patch cherry pick notification
We will apply the Terraform manually by logging into the AWS account we have for SIG Release. Maybe in the future we can automate this with GitHub Actions and OIDC federation.
It is in mock mode by default so we can test the email sending. Then we will need to open a ticket with AWS to move the SES config to production, after which we can send emails to the [email protected] (I will do that as a follow-up).
Sample email that will be sent by the automation (note: the dates shown are valid for the next cycle):
Hello Kubernetes Community!
The cherry-pick deadline for the 1.30 branches is 2024-06-07 EOD PT.
The cherry-pick deadline for the 1.29 branches is 2024-06-07 EOD PT.
The cherry-pick deadline for the 1.28 branches is 2024-06-07 EOD PT.
The cherry-pick deadline for the 1.27 branches is 2024-06-07 EOD PT.
Here are some quick links to search for cherry-pick PRs:
- release-1.30: https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+is%3Aopen+base%3Arelease-1.30+label%3Ado-not-merge%2Fcherry-pick-not-approved
- release-1.29: https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+is%3Aopen+base%3Arelease-1.29+label%3Ado-not-merge%2Fcherry-pick-not-approved
- release-1.28: https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+is%3Aopen+base%3Arelease-1.28+label%3Ado-not-merge%2Fcherry-pick-not-approved
- release-1.27: https://github.com/kubernetes/kubernetes/pulls?q=is%3Apr+is%3Aopen+base%3Arelease-1.27+label%3Ado-not-merge%2Fcherry-pick-not-approved
For PRs that you intend to land for the upcoming patch sets, please ensure they have:
- a release note in the PR description
- /sig
- /kind
- /priority
- /lgtm
- /approve
- passing tests
Details on the cherry-pick process can be found here:
https://git.k8s.io/community/contributors/devel/sig-release/cherry-picks.md
We keep general info and up-to-date timelines for patch releases here:
https://kubernetes.io/releases/patch-releases/#upcoming-monthly-releases
If you have any questions for the Release Managers, please feel free to reach out to us at #release-management (Kubernetes Slack) or [[email protected]](mailto:[email protected])
We wish everyone a happy and safe week!
SIG-Release Team
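The sample above is just a rendered template. As a minimal Python sketch of how such a Lambda might assemble it (the function names, the template constant, and printing instead of an SES call in mock mode are my assumptions, not this PR's actual code):

```python
from datetime import date

# Hypothetical search-link template for cherry-pick PRs per release branch.
BRANCH_QUERY = (
    "https://github.com/kubernetes/kubernetes/pulls"
    "?q=is%3Apr+is%3Aopen+base%3Arelease-{branch}"
    "+label%3Ado-not-merge%2Fcherry-pick-not-approved"
)

def build_notification(branches, deadline):
    """Render the cherry-pick deadline email body for the given branches."""
    lines = ["Hello Kubernetes Community!", ""]
    for branch in branches:
        lines.append(
            f"The cherry-pick deadline for the {branch} branches is "
            f"{deadline.isoformat()} EOD PT."
        )
    lines += ["", "Here are some quick links to search for cherry-pick PRs:"]
    for branch in branches:
        lines.append(f"- release-{branch}: " + BRANCH_QUERY.format(branch=branch))
    return "\n".join(lines)

def handler(event, context):
    # In the real Lambda this body would be handed to SES; in mock mode it
    # is only printed so the output can be verified without sending email.
    body = build_notification(["1.30", "1.29", "1.28", "1.27"], date(2024, 6, 7))
    print(body)
    return {"statusCode": 200, "body": body}
```

In mock mode the handler only renders and returns the body; wiring in the actual `ses:SendEmail` call and the real recipient list is the follow-up work described above.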
/assign @saschagrunert @xmudrii @puerco cc @kubernetes/release-managers
Which issue(s) this PR fixes:
Fixes #2174
Special notes for your reviewer:
Does this PR introduce a user-facing change?
add aws lambda function to send patch cherry pick notification
/assign @saschagrunert @xmudrii @puerco
/hold
Putting my SIG K8s Infra contributor hat on
The AWS account that we are using is part of the Kubernetes AWS organization (or at least it should be), which is managed by SIG K8s Infra. At the moment, SIG K8s Infra doesn't have very strict rules about how accounts under the organization should be managed, but there are some (more or less strong) recommendations.
Speaking of the infrastructure part, it's recommended that everything related to the infrastructure, especially long-term infrastructure, lives in the kubernetes/k8s.io repo, even if it's managed by other SIGs. That's for multiple reasons:
- This ensures that SIG K8s Infra has an idea of what's running in each account, which is very helpful for eventual auditing and technical support
- SIG Release can use the infrastructure built by SIG K8s Infra to securely manage the infrastructure. For example, SIG K8s Infra is working on a new CI/CD pipeline for Terraform that would make it much easier to apply changes, and it's likely that such a pipeline will only work in k/k8s.io
This might be kind of a new thing for SIG K8s Infra too (as in SIGs maintaining their own infrastructure and Terraform configs), so I'm going to cc some of the infra folks in case they have any stronger opinions: @upodroid @BenTheElder @dims
Putting my SIG Release contributor hat on
I'm leaning a little bit more towards the infrastructure code living in k/k8s.io for the sake of having all the configs in a single place. For reference, all the infrastructure code related to pkgs.k8s.io already lives in k/k8s.io and is pretty much completely managed and maintained by us. If we have different pieces of the infrastructure in different places, there's a more significant risk that some parts of the infrastructure get a little bit neglected.
Thanks, I will split it.
it's recommended that everything related to the infrastructure, especially to the long-term infrastructure, lives in kubernetes/k8s.io repo, even if it's managed by other SIGs
+100
@xmudrii @dims terraform code moved to k8s.io: https://github.com/kubernetes/k8s.io/pull/6853
+1 to https://github.com/kubernetes/release/pull/3627#issuecomment-2139335855
Yeah, k8s.io is a clearing house for "how did/do we set up this infra?" The implementation sources for components shouldn't live there, but the cloud / infra configuration does, and we have strongly delegated ownership to sub-accounts, directories, etc. We hope to have more automation and self-service over time, and we'll need to be careful to secure it, so we want to avoid sprawl.
Thanks for splitting this.
Also: Thanks for taking a moment to automate away toil and keep the community better informed about deadlines ❤️
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: cpanato, xmudrii
The full list of commands accepted by this bot can be found here.
The pull request process is described here.
- ~~OWNERS~~ [cpanato,xmudrii]
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
@cpanato I'll leave it up to you to remove the hold when ready :)
/unhold
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle stale`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Close this PR with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the PR is closed
You can:
- Reopen this PR with `/reopen`
- Mark this PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/remove-lifecycle rotten
/reopen
@cpanato: Reopened this PR.
In response to this:
/remove-lifecycle rotten
/reopen
/lifecycle stale
/remove-lifecycle stale
/lifecycle stale
/remove-lifecycle stale
New changes are detected. LGTM label has been removed.
@xmudrii I fixed the linter errors and merge conflicts; this should be ready to go. Can you PTAL and reapprove?
/cc @cpanato
@puerco: GitHub didn't allow me to request PR reviews from the following users: cpanato.
Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.
In response to this:
@xmudrii I fixed the linter errors and merge conflicts; this should be ready to go. Can you PTAL and reapprove?
/cc @cpanato
I'd suggest you stop drinking @k8s-ci-robot :D
I will get back to this and complete this work soon; I need to revise it to use CloudFormation, as was pointed out.
PR needs rebase.
/lifecycle stale
/lifecycle rotten
/close
@k8s-triage-robot: Closed this PR.