node-problem-detector
Add hakman to the approvers list
This should allow me to create releases.
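For context, approver membership in a Kubernetes repo is controlled by the OWNERS file at the repo root, so a change like this is typically a one-line addition. A minimal sketch of the diff, assuming the standard OWNERS layout (the existing entry shown is inferred from the bot message below, not the repo's full approver list):

```diff
 # OWNERS (sketch; surrounding entries are illustrative, not the actual file contents)
 approvers:
   - vteratipally
+  - hakman
```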
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: hakman
Once this PR has been reviewed and has the lgtm label, please assign vteratipally for approval. For more information see the Kubernetes Code Review Process.
The full list of commands accepted by this bot can be found here.
- Approvers can indicate their approval by writing /approve in a comment
- Approvers can cancel approval by writing /approve cancel in a comment
/cc @vteratipally
/assign @vteratipally
cc @derekwaynecarr, @wangzhen127 in case you can approve. My team is keen to get this release :-)
/cc @dchen1107
Could you please share the contribution details based on the approver policy? https://github.com/kubernetes/community/blob/master/community-membership.md
@vteratipally The immediate need is for the generation of the 0.8.16 release, to fix CVEs and (I believe) at least one other issue. @hakman has completed the PR and the tag is applied, but he doesn't currently have permission to make an official release.
If you think this request to add him to the approvers list may be delayed or not approved, would it be possible for you yourself to generate the release, as you did for the previous one?
Further discussion, if needed, is here: https://kubernetes.slack.com/archives/CJA25LM6D/p1709062180808009
I haven't worked on NPD for the past few years. But I am starting to pick it up again recently. If the request is to make a new release, I can do that this time. I will sync up with @vteratipally on it.
Regarding the approvers, since I lack knowledge of NPD's progress in recent years, I will defer to @vteratipally.
Thanks @wangzhen127 . Yes, that is the request. To make a new release based on the 0.8.16 tag that @hakman has already created.
OK. I will likely be able to make the 0.8.16 release early next week. Does that sound good?
Yes, we will be creating a new release early next week. I hope that's fine.
Thanks @wangzhen127 and @vteratipally. Before you cut that release, can you please take a look at this issue which I just logged: #871. We are seeing some brand new CVEs in the docker image that @hakman made a couple of days ago. Can these new CVEs be fixed, with the fixes included in the new release? They are reported as "HIGH" priority by our scanner.
If those fixes can be included in a release early next week, that would be great. Thanks.
Yes, sure, we can fix them.
@JohnRusk I understand the need for a new release. Thank you for your support, but I would like to keep this issue about adding myself to the approvers list.
@vteratipally I understand you have concerns about this PR. Would you mind contacting me in private and discussing them? Thanks!
Yes sure. We could set up some meeting.
Sure, please ping me in Slack and we can find some time that suits us both.
@hakman: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:
Test name | Commit | Details | Required | Rerun command |
---|---|---|---|---|
pull-npd-e2e-kubernetes-gce-ubuntu-custom-flags | 704e03dc09ecf319eb3cb698a33fa1f8c76ec2cd | link | true | /test pull-npd-e2e-kubernetes-gce-ubuntu-custom-flags |
pull-npd-e2e-kubernetes-gce-ubuntu | 704e03dc09ecf319eb3cb698a33fa1f8c76ec2cd | link | true | /test pull-npd-e2e-kubernetes-gce-ubuntu |
pull-npd-e2e-kubernetes-gce-gci | 704e03dc09ecf319eb3cb698a33fa1f8c76ec2cd | link | true | /test pull-npd-e2e-kubernetes-gce-gci |
pull-npd-e2e-kubernetes-gce-gci-custom-flags | 704e03dc09ecf319eb3cb698a33fa1f8c76ec2cd | link | true | /test pull-npd-e2e-kubernetes-gce-gci-custom-flags |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Mark this PR as fresh with /remove-lifecycle rotten
- Close this PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten