cluster-api-provider-vsphere
Log errors when verifying vCenter session validity
Describe the solution you'd like
Expose/log the error while creating/fetching sessions from vCenter.
Anything else you would like to add:
Currently, CAPV swallows the error returned by vCenter while verifying the validity of existing sessions in the session cache. This masked a permissions issue in a specific use case. Rather than swallowing the error, logging it would help in debugging such issues.
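For illustration, here is a minimal Go sketch of the kind of logging being asked for, using govmomi's session.Manager to verify a cached session. The function name, logger, and package layout are assumptions, not the actual CAPV code:

```go
package vspheresession

import (
	"context"

	"github.com/go-logr/logr"
	govmomisession "github.com/vmware/govmomi/session"
	"github.com/vmware/govmomi/vim25"
)

// verifyCachedSession reports whether a cached vCenter session is still valid.
// Instead of discarding the error from UserSession (the behaviour described in
// this issue), it logs the error so that permission or connectivity problems
// show up in the controller logs.
func verifyCachedSession(ctx context.Context, logger logr.Logger, client *vim25.Client) bool {
	mgr := govmomisession.NewManager(client)

	userSession, err := mgr.UserSession(ctx)
	if err != nil {
		logger.Error(err, "failed to verify validity of cached vCenter session")
		return false
	}

	// A nil UserSession means the session has expired or was never established.
	return userSession != nil
}
```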
Environment:
- Cluster-api-provider-vsphere version: v0.3.16
- Kubernetes version (use kubectl version): n/a
- OS (e.g. from /etc/os-release): n/a
/kind feature
/good-first-issue /help-wanted
@srm09: This request has been marked as suitable for new contributors.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed by commenting with the /remove-good-first-issue command.
In response to this:
/good-first-issue /help-wanted
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale /assign
@AartiJivrajani Were you able to make any progress on this one? Lemme know if you need help with this.
I haven't gotten around to working on this unfortunately! I will take a look at this later this week.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle active
/lifecycle stale
@aartij17 If you have not been able to look into this one, maybe we could add this back to the list of help wanted issues.
/remove-lifecycle stale
/help-wanted /unassign
If this issue is only about the exception swallowing, then that piece of code has since been heavily refactored in a later commit and the problem is no longer relevant, except when someone uses a release as outdated as release-0.3.
@srm09 is this issue about backporting the error check into the release-0.3 branch, or is it just stale?
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
@zhanggbj When you have some time, can you please check if this issue still makes sense?
Currently, for a cached session, CAPV performs the following checks and reports any errors, so I assume this issue no longer exists: https://github.com/kubernetes-sigs/cluster-api-provider-vsphere/blob/main/pkg/session/session.go#L131-L142
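As a rough usage sketch of what such a cache check can look like (not the actual code at the link above): a cached client is validated before reuse, the verification error is logged rather than swallowed, and a fresh session is created when validation fails. The cache variable, key format, and newClient callback below are assumptions; verifyCachedSession refers to the earlier sketch in this thread.

```go
package vspheresession

import (
	"context"
	"sync"

	"github.com/go-logr/logr"
	"github.com/vmware/govmomi/vim25"
)

// sessionCache is an assumed cache of vCenter clients keyed by an opaque
// string (e.g. server + username); the real CAPV cache may look different.
var sessionCache sync.Map

// getOrCreateSession returns a cached client if its session is still valid,
// otherwise it establishes a new one via the supplied newClient callback.
func getOrCreateSession(ctx context.Context, logger logr.Logger, key string,
	newClient func(context.Context) (*vim25.Client, error)) (*vim25.Client, error) {

	if cached, ok := sessionCache.Load(key); ok {
		client := cached.(*vim25.Client)
		if verifyCachedSession(ctx, logger, client) {
			return client, nil
		}
		// The cached session is invalid; the underlying error has already been
		// logged, so drop the stale entry and fall through to re-create it.
		sessionCache.Delete(key)
	}

	client, err := newClient(ctx)
	if err != nil {
		return nil, err
	}
	sessionCache.Store(key, client)
	return client, nil
}
```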
@srm09 WDYT?
Thx @zhanggbj for checking. I also checked the entire session.go file. All errors are handled now.
Let's please link to code if we create issues like this in the future; it would save us a lot of time.
/close
@sbueringer: Closing this issue.
In response to this:
Thx @zhanggbj for checking. I also checked the entire session.go file. All errors are handled now.
Let's please link to code if we create issues like this in the future; this saves us a lot of time.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.