cluster-api-provider-vsphere
Where can I find the credentials for the OVA images for CAPV?
/kind bug
What steps did you take and what happened: Note: I am a newbie to vSphere and CAPI.
I want to log into the machines being provisioned through CAPV to view files and environment variables, or to change something, and so on. But where do I find the exact username and password to log into the machines?
I have tried the Ubuntu XX.XX OVA files and have not been able to log into any of them.
The image runs fine, but I'm stuck on the login screen.
I've tried passwords like an empty string, `changeme`, `root`, `admin`, `password`, `builder`, etc. None worked so far.
Why is this so hard to find?
How is CAPV able to perform actions on the machines being provisioned without knowing the username or password?
What did you expect to happen: To be able to get the credentials or change them, either manually from the vSphere portal/browser, or through a CAPV setting applied while provisioning clusters.
Anything else you would like to add: I thought this would be trivial, but why is it so hard? K8s, CAPI, and CAPV are already hard enough to use. Why is something this trivial so complicated?
Environment:
- Cluster-api-provider-vsphere version: 1.5.3
- Kubernetes version: (use `kubectl version`): 1.28.4
- OS (e.g. from `/etc/os-release`): Ubuntu 22.04
AFAIK there is no password set in the image provided by the community.
Instead I think you have two options:
- provide a public SSH key inside KubeadmControlPlane or KubeadmConfigTemplate to let cloud-init allow you to SSH into the VM (the default user is `capv`); see the sketch after the doc link below
- build your own image and set a password for a user.
Configuring users for manual action/debugging is described under `KubeadmConfig.Users` here: https://main.cluster-api.sigs.k8s.io/tasks/bootstrap/kubeadm-bootstrap/index.html?highlight=ssh#additional-features
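For illustration, a minimal sketch of the first option via `KubeadmControlPlane` (the resource name and the key below are placeholders; the same `users` block also works in a `KubeadmConfigTemplate` under `spec.template.spec`):

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
metadata:
  name: my-cluster-control-plane        # hypothetical name
spec:
  kubeadmConfigSpec:
    users:
      - name: capv                      # default user in the community OVAs
        sudo: ALL=(ALL) NOPASSWD:ALL
        sshAuthorizedKeys:
          - "ssh-ed25519 AAAA... you@example.com"   # replace with your public key
```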
A different way to enter an existing node is `kubectl debug node`: https://kubernetes.io/docs/tasks/debug/debug-cluster/kubectl-node-debug/
It can even be used to get a root shell or to place SSH authorized keys after a VM is provisioned; a sketch follows.
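For example (the node name is a placeholder):

```sh
# Start an interactive debug pod on the node; the node's root
# filesystem is mounted at /host inside the pod.
kubectl debug node/my-node -it --image=ubuntu

# Inside the debug pod: chroot into the host to get a root shell
chroot /host
```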
Also: CAPI and CAPV currently work by treating VMs as immutable, so changes get rolled out by rolling out new machines instead of reconfiguring existing ones. There is a working group on in-place upgrades (https://github.com/kubernetes-sigs/cluster-api/blob/main/docs/community/20231016-in-place-upgrades.md), which is not yet completely defined or implemented in CAPI. Everything CAPI and CAPV do currently happens by passing cloud-init (or Ignition) information via user-data, which contains the configuration and commands to run on the first boot of the machine; a minimal sketch of such user-data follows.
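For intuition, a minimal cloud-init user-data sketch of that kind of payload (illustrative only, not the exact data CAPV generates):

```yaml
#cloud-config
users:
  - name: capv
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - "ssh-ed25519 AAAA... you@example.com"   # placeholder key
runcmd:
  - kubeadm init --config /run/kubeadm/kubeadm.yaml   # illustrative first-boot step
```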
Thank you @chrischdi. I will check these out and see if they work for me.
There's a script provided by the image-builder project that is used to build the CAPV OVAs; it may help you leverage cloud-init to log into the VM with an SSH key.
Please find details here: https://image-builder.sigs.k8s.io/capi/providers/vsphere#accessing-remote-vms
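Once a key is injected, logging in is the usual SSH flow; a sketch with a placeholder IP and key path:

```sh
# capv is the default user in the community-built OVAs
ssh -i ~/.ssh/id_ed25519 capv@192.0.2.10
```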
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/close
see information provided above and no further feedback
@sbueringer: Closing this issue.
In response to this:
/close
see information provided above and no further feedback
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.