cluster-api-provider-vsphere
Can we have a more fine-grained power state (i.e. a reason such as "powered on, waiting for IP")?
/kind bug
- Feel free to close this issue; it might not be actionable, but I wanted to write it up.
- We're seeing VMs on some hardware coming online as IPv6 VMs instead of IPv4 ones.
- In the same cluster, we're seeing these VMs stuck in the PoweringOn state:
  - lastTransitionTime: "2022-03-23T14:25:34Z"
    message: 1 of 2 completed
    reason: PoweringOn
    severity: Info
    status: "False"
    type: Ready
Is it possible that, if a VM can't get an IPv4 address, it is misinterpreted as being in the PoweringOn state forever? If so, can we have an "intermediate" state such as "PoweredOnWaitingIp" or something similar?
What steps did you take and what happened:
In the code for reconcilePowerState, we log "powering on" when the machine is in the Off state:
case infrav1.VirtualMachinePowerStatePoweredOff:
    ctx.Logger.Info("powering on")
    task, err := ctx.Obj.PowerOn(ctx)
We then mark the machine's Ready condition as "False" because it is "PoweringOn".
However, I feel this might be misleading, because we are currently seeing "Powered On" machines that are still flagged by CAPI as "ready=False", "reason=PoweringOn".
Can we add more granularity to that log message, something like:
ctx.Logger.Info("Machine was powered on but has no IP... waiting for it to be fully networked...")
What did you expect to happen:
- More fine-grained information about the state of powered-off machines.
- Is it easy to add a more meaningful value to condition_consts.go? (A rough sketch follows this list.)
- Have the reason behind the reason, i.e., what do we really mean when we say a machine is in the "PoweringOn" state?
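If it helps, something along these lines in condition_consts.go is roughly what I had in mind; the constant name and wording are placeholders only, not a concrete proposal for the exact API:

package v1beta1

const (
    // WaitingForIPReason (hypothetical) would document that the VM has been
    // powered on successfully but has not yet been assigned a usable IPv4
    // address, as opposed to PoweringOn, which covers the power-on task itself.
    WaitingForIPReason = "PoweredOnWaitingForIP"
)

That way the Ready condition could stay "False", but the reason would tell the operator that the power-on itself already succeeded and the machine is only waiting on networking.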
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closing this issue.
In response to this:
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue or PR with /reopen
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten /lifecycle active /reopen
@srm09: Reopened this issue.
In response to this:
/remove-lifecycle rotten /lifecycle active /reopen
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.