external-dns
Add maintainers for Infoblox provider
Hi,
@Raffo @njuettner @seanmalloy
I am Ranjish, an Engineering Manager at Infoblox taking care of Cloud initiatives. We have a few customers who are using the ExternalDNS plugin to connect with Infoblox. Since the Infoblox provider is in the alpha state, we would like to take it to the stable state after thorough validation. We are also looking at some enhancements that our customers have asked for. So it would be great if we could get maintainer access for:
@ranjishmp @skudriavtsev @anagha-infoblox
Thanks, Ranjish
@ranjishmp thanks for reaching out. The first step is to get kubernetes-sigs org membership. Here are the requirements for org membership: https://github.com/kubernetes/community/blob/master/community-membership.md#member
From my perspective the simplest way to meet the requirements for org membership would be to submit some PRs to make changes (features or bug fixes) to the external-dns infoblox provider.
I'd be more than happy to be a sponsor for anyone interested in pursuing org membership. Let me know if you have any questions on the org membership process.
Thanks!
@ranjishmp Same as Sean says, I'm happy to sponsor this. Feel free to send me PRs to review so that we can establish the minimal amount of work needed to support a case for joining the org.
Thanks @Raffo and @seanmalloy
Sergey (skudriavtsev) will be the main contributor from the Infoblox side for this plugin.
He has already raised a PR (https://github.com/kubernetes-sigs/external-dns/pull/2670), which is under your review. He will soon raise one more PR adding certificate-based authentication.
So it would be good if you could consider him as a maintainer when you feel comfortable.
We are also in the process of validating the Infoblox provider from our side. Hopefully this will help us take it to beta and then to the stable state.
@sagor999 Are you still interested in helping maintain the infoblox provider? I remember we chatted about it a few months ago and I never followed up on adding you to the owners.
@ranjishmp I prepared https://github.com/kubernetes-sigs/external-dns/pull/2713, which we will be able to merge once we have enough PRs merged from @skudriavtsev and the org membership approved.
@Raffo Thank you. As I mentioned, @skudriavtsev is getting his second PR ready. We will have more PRs once our QA team starts validating the plugin.
@Raffo I have changed jobs and no longer have access to an Infoblox setup, so please do not add me as a maintainer. But thanks for reaching out!
@sagor999 thanks for the answer and best wishes for the new job!
@Raffo @njuettner @seanmalloy
As I mentioned before, our QA team started validating the Infoblox provider plugin and reported quite a few bugs. How should we take this forward? @skudriavtsev is working on a few of them.
@ranjishmp I think I can work on getting the membership approved, and then you will be free to work on fixing and reviewing those. I will need at least until next week, as one of the other maintainers who can help me is out of office.
@ranjishmp https://github.com/kubernetes-sigs/external-dns/pull/2755 was approved. Do you have any other PRs that you want to get in before adding yourselves as maintainers of the provider?
@Raffo cc @skudriavtsev Yes, we have a few more that are under internal testing now. We will raise them soon.
Hi @Raffo cc @skudriavtsev
We created this PR, which is pending review: https://github.com/kubernetes-sigs/external-dns/pull/2841
This PR contains support for certificate-based authentication as well as bug fixes.
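For context, here is a rough, hypothetical sketch of what certificate-based (mutual TLS) authentication toward the Infoblox WAPI could look like in Go. The helper name and flow below are illustrative assumptions only, not the actual code from the PR:

```go
package main

import (
	"crypto/tls"
	"net/http"
)

// newMTLSClient builds an HTTP client that presents a client certificate
// when connecting to the WAPI endpoint. Hypothetical sketch only; the real
// provider wires this through its own configuration.
func newMTLSClient(certFile, keyFile string) (*http.Client, error) {
	// Load the client certificate and private key from disk.
	cert, err := tls.LoadX509KeyPair(certFile, keyFile)
	if err != nil {
		return nil, err
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				Certificates: []tls.Certificate{cert},
			},
		},
	}, nil
}
```

See the PR itself for how the option is actually exposed and configured.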
Hi,
@Raffo @njuettner @seanmalloy
Any update on when we can get maintainer access? There are a few PRs we raised that address a good number of the issues we identified during our internal testing. We also added the capability for certificate-based authentication.
I can see that @Raffo's status says he is taking a break from open source. Can anyone else help here?
I'll try to review #2841 soon. I think once that gets merged we should be able to add you to the kubernetes-sigs org.
Hi @seanmalloy
Did you get a chance to review the PR: https://github.com/kubernetes-sigs/external-dns/pull/2841?
Hi,
@seanmalloy @Raffo @njuettner
Can you please review the PR - https://github.com/kubernetes-sigs/external-dns/pull/2841?
It will be really helpful for us if we can get the maintainer access.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.