cluster-api-provider-vsphere
Modify the template generation logic for CAPV templates
/kind feature
Describe the solution you'd like
Currently, the template generation code for kube-vip-based templates generates a ClusterResourceSet (CRS) which installs the CPI/CSI on the workload cluster. The names of the resources created by the CRS are not dynamically generated, so reusing the same manifest template for different workload clusters results in the same Secrets/ConfigMaps (used by the CRS) being overwritten. This could have unintended consequences when working with multiple clusters.
Anything else you would like to add: It would be great if we could prefix the resources with the name of the cluster to enable reuse of the same templates for different workload clusters.
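A minimal sketch of what this could look like, assuming the standard clusterctl `${CLUSTER_NAME}` template variable is substituted at generation time; the resource names and the CRS API version below are illustrative, not the actual names CAPV generates:

```yaml
# Illustrative sketch only: resource names are hypothetical. The idea is to
# prefix the CRS and the Secrets/ConfigMaps it references with the cluster
# name, so two workload clusters rendered from the same template do not
# collide on the same objects in the management cluster.
apiVersion: addons.cluster.x-k8s.io/v1beta1
kind: ClusterResourceSet
metadata:
  name: ${CLUSTER_NAME}-crs-cpi   # was: a fixed name shared by all clusters
spec:
  clusterSelector:
    matchLabels:
      cluster.x-k8s.io/cluster-name: ${CLUSTER_NAME}
  resources:
    - kind: Secret
      name: ${CLUSTER_NAME}-cloud-provider-vsphere-credentials
    - kind: ConfigMap
      name: ${CLUSTER_NAME}-cpi-manifests
```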
Environment:
- Cluster-api-provider-vsphere version: n/a
- Kubernetes version (use kubectl version): n/a
- OS (e.g. from /etc/os-release): n/a
/help /good-first-issue
@srm09: This request has been marked as suitable for new contributors.
Guidelines
Please ensure that the issue body includes answers to the following questions:
- Why are we solving this issue?
- To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
- Does this issue have zero to low barrier of entry?
- How can the assignee reach out to you for help?
For more details on the requirements of such an issue, please see here and ensure that they are met.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-good-first-issue command.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/assign @scdubey
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
/assign
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten