--v=5 doesn't print the desired vn-agent logs
What steps did you take and what happened:
Ran vn-agent with --v=5 and checked its logs:
$ kubectl describe pod vn-agent-45wmt -n vc-manager | grep Command -A 5
Command:
vn-agent
--cert-dir=/etc/vn-agent/
--kubelet-client-certificate=/etc/vn-agent/pki/client.crt
--kubelet-client-key=/etc/vn-agent/pki/client.key
--v=5
$ kubectl logs vn-agent-45wmt -n vc-manager
I0719 02:01:45.819355 1 cert.go:61] Using self-signed cert (/etc/vn-agent/vn.crt, /etc/vn-agent/vn.key)
I0719 02:01:45.819508 1 server.go:127] server listen on :10550
I0719 02:02:03.109097 1 route.go:135] will forward request to super apiserver
Looks like --v=5 has no effect: only the default-level lines are printed, with none of the V(5) detail.
What did you expect to happen:
--v=5 should enable verbose (level 5) logging in vn-agent.
Anything else you would like to add:
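For context, klog-style leveled logging gates output on the -v flag: klog.V(5) lines only print when verbosity is 5 or higher. A minimal sketch of the expected behavior, assuming the standard k8s.io/klog/v2 API (illustrative, not vn-agent's actual code):

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	// klog.InitFlags registers klog's flags (-v, -logtostderr, ...);
	// passing nil attaches them to the global flag.CommandLine.
	klog.InitFlags(nil)
	flag.Parse() // run with --v=5

	klog.Infof("printed at any verbosity")       // like the cert.go/server.go lines above
	klog.V(5).Infof("printed only when -v >= 5") // the detail --v=5 should unlock
}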
Environment:
- cluster-api-provider-nested version:
- Minikube/KIND version:
- Kubernetes version: (use kubectl version):
- OS (e.g. from /etc/os-release):
/kind bug
/assign
Have you found more information about what is going on here?
Not sure what happened, still checking.
Just noticed vn-agent-xxx does not exist in a CAPN-created env :( So @christopherhein, do you think it's still worth fixing this issue given CAPN is the future?
Yes, because in the CAPN future there are two paths: CAPN w/ VC and CAPN without. CAPN w/ VC is likely going to be most folks' way of deploying the architecture.
ok, thanks for the info~
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen
I could actually reproduce this issue. Neither -v nor --v would work. 😢 Have you found anything? @jichenjc
cc @charleszheng44, did you work on the vn-agent? It looks like klog isn't configured properly with verbosity levels…?
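If so, a common cause is that the binary defines its flags on its own pflag set and never forwards --v to klog, so klog's internal verbosity stays at 0 no matter what is passed. A hedged sketch of the usual wiring fix (names are illustrative, not necessarily vn-agent's actual code):

package main

import (
	goflag "flag"

	"github.com/spf13/pflag"
	"k8s.io/klog/v2"
)

func main() {
	// Register klog's flags on a standard Go FlagSet, then merge that set
	// into the pflag set the binary actually parses. Without this step,
	// --v=5 is accepted on the command line but never reaches klog.
	klogFlags := goflag.NewFlagSet("klog", goflag.ExitOnError)
	klog.InitFlags(klogFlags)
	pflag.CommandLine.AddGoFlagSet(klogFlags)
	pflag.Parse()

	klog.V(5).Infof("visible only once --v is wired through to klog")
}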
I will take a look.
/assign