Fake client: update of the bind subresource results in a panic
This appears to be the same problem as #911. For a given Pod resource, once the fake client creates the bind subresource, a subsequent Get() call results in a panic. I'm wondering whether this is the expected behavior. If not, I would like to know how to modify the logic here, considering that the code in fake_pod.go is generated by client-gen.

The client-go version is v0.20.5. The detailed error message is as follows; any help would be much appreciated!
E0328 14:33:47.968795 5240 reflector.go:477] k8s.io/client-go/informers/factory.go:134: expected type *v1.Pod, but watch event object had type *v1.Binding
panic: interface conversion: runtime.Object is *v1.Binding, not *v1.Pod
goroutine 1 [running]:
k8s.io/client-go/kubernetes/typed/core/v1/fake.(*FakePods).Get(0x1400099a000, {0x105c91088, 0x140008968c0}, {0x14000b06240, 0xa}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x0, ...}})
/Users/mental/src/github.com/mental2008/open-simulator/vendor/k8s.io/client-go/kubernetes/typed/core/v1/fake/fake_pod.go:51 +0x1bc
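For reference, here is a minimal reproduction sketch of what the report describes. It is untested against v0.20.5, the pod and node names are made up for illustration, and it assumes the typed fake client's Bind/Get signatures from client-go v0.20.x:

```go
package main

import (
	"context"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes/fake"
)

func main() {
	// Seed the fake clientset with a pod (names are hypothetical).
	client := fake.NewSimpleClientset(&v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod", Namespace: "default"},
	})

	// Create the binding subresource, as a scheduler would.
	if err := client.CoreV1().Pods("default").Bind(context.TODO(), &v1.Binding{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod", Namespace: "default"},
		Target:     v1.ObjectReference{Kind: "Node", Name: "node-1"},
	}, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	// The fake tracker now appears to hold the *v1.Binding under the pods
	// resource, so this Get panics with the interface-conversion error above.
	_, _ = client.CoreV1().Pods("default").Get(context.TODO(), "test-pod", metav1.GetOptions{})
}
```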
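A possible workaround that leaves the generated fake_pod.go untouched is to prepend a reactor that intercepts the binding-subresource create and applies it to the tracked pod itself. This is only a sketch: it assumes the fake records Bind as a "create" action on the pods resource with subresource "binding", the helper name is mine, and I have not verified it against v0.20.5:

```go
import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/fake"
	clienttesting "k8s.io/client-go/testing"
)

// newClientWithBindingReactor (hypothetical helper) returns a fake clientset
// that handles pod bindings by mutating the tracked pod.
func newClientWithBindingReactor() *fake.Clientset {
	client := fake.NewSimpleClientset()
	podsGVR := v1.SchemeGroupVersion.WithResource("pods")

	client.PrependReactor("create", "pods",
		func(action clienttesting.Action) (bool, runtime.Object, error) {
			if action.GetSubresource() != "binding" {
				return false, nil, nil // let the default reactor handle it
			}
			binding := action.(clienttesting.CreateAction).GetObject().(*v1.Binding)

			// Apply the binding to the tracked pod instead of letting the
			// default reactor overwrite the pod with the Binding object.
			obj, err := client.Tracker().Get(podsGVR, binding.Namespace, binding.Name)
			if err != nil {
				return true, nil, err
			}
			pod := obj.(*v1.Pod)
			pod.Spec.NodeName = binding.Target.Name
			return true, pod, client.Tracker().Update(podsGVR, pod, binding.Namespace)
		})
	return client
}
```

With such a reactor in place, the default reactor never stores the *v1.Binding under the pods resource, so the subsequent Get should return a *v1.Pod as expected.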
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:
- Reopen this issue or PR with `/reopen`
- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close
@k8s-triage-robot: Closing this issue.
In response to this: `/close`
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.