How to factor out shared code between kOps and Cluster API?
/kind feature
1. Describe IN DETAIL the feature/behavior/change you would like to see.
I would like documentation about how to build reusable components shared between kOps and a Cluster API controller. There is a lot of overlap between the two tools in the work of preparing a cluster. Is there any documentation or recommendation on how to reuse code between these two projects?
2. Feel free to provide a design supporting your feature request.
I would like documentation of the core functions/interfaces that would permit maximum reusability between kOps and Cluster API.
/assign @justinsb
/kind office-hours
This is a great question. I don't think we have a "do it this way" answer, but here's my suggestion....
There are broadly two levels in the code: in kOps we have the tasks layer and the model layer. I think the tasks layer is essentially trying to create a declarative/idempotent abstraction over the cloud provider's RESTful APIs, and the model layer translates kOps objects into those tasks.
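To make that concrete, here is a minimal, hypothetical sketch of what such a declarative/idempotent task abstraction might look like; the `Task`, `Find`, `Render`, and `Apply` names are illustrative assumptions, not the actual kOps task interface:

```go
// Hypothetical sketch of a declarative/idempotent task abstraction over a
// cloud provider's RESTful API. Names are illustrative, not the real kOps API.
package tasks

import "context"

// Task describes one cloud resource in its desired state.
type Task interface {
	// Find looks up the resource's current (actual) state in the cloud
	// provider; it returns nil if the resource does not exist yet.
	Find(ctx context.Context) (Task, error)

	// Render creates or updates the underlying cloud resource so that the
	// actual state matches this desired state. actual may be nil.
	Render(ctx context.Context, actual Task) error
}

// Apply drives one task to its desired state; running it repeatedly becomes a
// no-op once the cloud resource matches, which is what makes it idempotent.
func Apply(ctx context.Context, desired Task) error {
	actual, err := desired.Find(ctx)
	if err != nil {
		return err
	}
	return desired.Render(ctx, actual)
}
```

In this picture, the model layer would be a (largely pure) function from a kOps or Cluster API spec to a set of such tasks.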
I think the tasks layer is the best opportunity for reuse. The cloud-provider code would benefit from it, cluster-api would benefit from it, and the tasks themselves would also benefit from it. I also work on KCC, which is a set of operators for managing cloud resources on GCP (AWS has ACK, Azure has ASO); those also map pretty naturally to the tasks layer.
This doesn't necessarily mean reuse of the task layer directly (though it would be an interesting refactor to try once we've done the 1.30 beta and can do big refactors on the main branch). At its most basic it could be copy-and-paste reuse and then we can see whether it is worthwhile doing the bigger refactor.
The idea, though, is that the work to add kOps / cluster-api / cloud-provider support can generally be split into the work to "translate" those APIs to the cloud-provider APIs, and then the work to drive those cloud-provider APIs in a way compatible with the Kubernetes reconciliation/declarative model. Because the "mapping" involves a different source API, it's always going to be hard to reuse, but the "reconciliation" layer has the same "target" and the same representation (the output of the mapping), so it should be much easier to reuse. My theory is that the reconciliation layer is also more time-consuming to implement and generally where more of the bugs live, but that is only a hypothesis.
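As a hedged illustration of that split, with entirely hypothetical types (`InstanceGroupSpec`, `ASGTask`, `MapInstanceGroup`, `ReconcileASG`), the per-project part is a small mapping function and the shared part is the reconciliation against the cloud API:

```go
// Illustrative only: hypothetical types showing the "mapping" vs
// "reconciliation" split. Only MapInstanceGroup is project-specific.
package split

import "context"

// InstanceGroupSpec stands in for whichever source API a tool exposes
// (a kOps InstanceGroup, a Cluster API MachineDeployment, ...).
type InstanceGroupSpec struct {
	Name    string
	MinSize int
	MaxSize int
	ImageID string
}

// ASGTask is the shared cloud-level desired state (here an AWS Auto Scaling
// group): the output of the mapping step and the input to reconciliation.
type ASGTask struct {
	Name             string
	MinSize, MaxSize int
	ImageID          string
}

// MapInstanceGroup is the per-project "translate" step: different for kOps
// and cluster-api, but comparatively cheap to write.
func MapInstanceGroup(ig InstanceGroupSpec) ASGTask {
	return ASGTask{Name: ig.Name, MinSize: ig.MinSize, MaxSize: ig.MaxSize, ImageID: ig.ImageID}
}

// ReconcileASG is the shared "reconciliation" step: read the actual state,
// diff it against the desired task, and apply only the changes. The actual
// cloud SDK calls are elided here.
func ReconcileASG(ctx context.Context, desired ASGTask) error {
	// 1. Describe the ASG; if it does not exist, create it from desired.
	// 2. Otherwise update only the fields that differ (min/max size, image).
	// 3. Return retryable errors so callers can re-reconcile, matching the
	//    Kubernetes declarative model.
	return nil
}
```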
The Kubernetes operator layer is an interesting additional opportunity. I don't know if anyone has tried building cluster-api / cloud-provider support on top of operators; I'm pretty sure nobody has for kOps because it would need some tricks to fake client.Client. But I do think that is possible, so if you wanted to investigate that I'd be very interested in exploring that with you!
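A speculative sketch of that operator-layer idea, using controller-runtime; the `CloudResource` CRD and the reconciler are hypothetical, not an existing kOps or cluster-api controller:

```go
// Speculative sketch: a controller-runtime reconciler whose custom resource
// is the output of the mapping step, so the same controller could in
// principle back kOps, cluster-api, or standalone use.
package operator

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// CloudResourceReconciler drives one kind of cloud resource. In kOps the
// embedded client.Client is what would need to be faked, as noted above.
type CloudResourceReconciler struct {
	client.Client
}

func (r *CloudResourceReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// 1. Fetch the (hypothetical) CloudResource object named by req.NamespacedName.
	// 2. Map its spec to the shared task representation.
	// 3. Run the shared reconciliation logic against the cloud provider.
	// 4. Update status and requeue on transient errors.
	return ctrl.Result{}, nil
}
```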
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.