InfrastructureProvider does not pass configSecret
What steps did you take and what happened:
I applied the following InfrastructureProvider and the Secret it references via spec.configSecret:
apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: InfrastructureProvider
metadata:
  name: vsphere
  namespace: capi-system
spec:
  version: v1.10.0
  configSecret:
    name: vsphere-secret
---
apiVersion: v1
data:
  VSPHERE_PASSWORD: XXXXXX
  VSPHERE_USERNAME: XXXX
kind: Secret
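Side note for anyone reproducing this: as far as I understand the v1alpha2 API, spec.configSecret falls back to the provider's own namespace when spec.configSecret.namespace is not set (treat that as an assumption), so the referenced secret is expected in capi-system here. A quick sanity check that it exists with the expected keys:

  kubectl get secret vsphere-secret -n capi-system -o jsonpath='{.data}'
  # should list the VSPHERE_USERNAME and VSPHERE_PASSWORD keys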
Once the provider is installed, the resulting bootstrap secret does not contain the provided credentials:
apiVersion: v1
data:
  credentials.yaml: dXNlcm5hbWU6ICcnCnBhc3N3b3JkOiAnJw==
kind: Secret
metadata:
  labels:
    cluster.x-k8s.io/provider: infrastructure-vsphere
    clusterctl.cluster.x-k8s.io: ""
  name: capv-manager-bootstrap-credentials
  namespace: capi-system
  ownerReferences:
  - apiVersion: operator.cluster.x-k8s.io/v1alpha2
    kind: InfrastructureProvider
    name: vsphere
    uid: f94a5b43-e7e6-4def-9540-c3d8c178083f
type: Opaque
The credentials.yaml value decodes to:
  username: ''
  password: ''
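The empty content can be confirmed directly against the cluster with standard kubectl and base64 tooling (namespace taken from the secret above):

  kubectl get secret capv-manager-bootstrap-credentials -n capi-system \
    -o jsonpath='{.data.credentials\.yaml}' | base64 -d
  # username: ''
  # password: ''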
What did you expect to happen:
The bootstrap credentials secret should contain the secret data referenced by spec.configSecret.
Environment:
- Cluster-api-operator version: v0.12.0
- Cluster-api version: v1.8.0
- Minikube/KIND version: --
- Kubernetes version (use kubectl version):
  - Client Version: v1.30.2
  - Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
  - Server Version: v1.29.4+rke2r1
- OS (e.g. from /etc/os-release):
/kind bug
This issue is currently awaiting triage.
If CAPI Operator contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
The triage/accepted label can be added by org members by writing /triage accepted in a comment.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I ran clusterctl generate and the secret is created properly:
apiVersion: v1
kind: Secret
metadata:
  labels:
    cluster.x-k8s.io/provider: infrastructure-vsphere
    clusterctl.cluster.x-k8s.io: ""
  name: capv-manager-bootstrap-credentials
  namespace: capv-system
stringData:
  credentials.yaml: |-
    username: 'foo'
    password: 'bar'
type: Opaque
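For reference, the secret above comes from clusterctl's provider templating; an invocation along these lines should reproduce it (the exact subcommand and flags are an assumption on my part, and the credential values are the placeholders shown above; the variable names match the configSecret keys from the report):

  export VSPHERE_USERNAME='foo'
  export VSPHERE_PASSWORD='bar'
  clusterctl generate provider --infrastructure vsphere:v1.10.0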
@k0da can you check the secret namespace? The empty one is in capi-system:
  name: capv-manager-bootstrap-credentials
  namespace: capi-system
while the one generated with clusterctl generate is in capv-system:
  name: capv-manager-bootstrap-credentials
  namespace: capv-system
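You can list both occurrences side by side with a field selector (metadata.name is a supported field selector for built-in resources):

  kubectl get secrets -A --field-selector metadata.name=capv-manager-bootstrap-credentials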
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.