cluster-api-provider-nested
log format inconsistent between components
cloudusr@jitest19:~$ kubectl logs vn-agent-p6gqv -n vc-manager
I0711 10:26:39.325504 1 cert.go:61] Using self-signed cert (/etc/vn-agent/vn.crt, /etc/vn-agent/vn.key)
I0711 10:26:39.325587 1 server.go:127] server listen on :10550
cloudusr@jitest19:~$
cloudusr@jitest19:~$ kubectl logs vc-manager-76c5878465-bwq2t -n vc-manager
{"level":"info","ts":1625999191.3924062,"logger":"entrypoint","msg":"setting up client for manager"}
{"level":"info","ts":1625999191.393097,"logger":"entrypoint","msg":"setting up manager"}
{"level":"info","ts":1625999192.0039604,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":":0"}
{"level":"info","ts":1625999192.0044432,"logger":"entrypoint","msg":"Registering Components."}
{"level":"info","ts":1625999192.0044632,"logger":"entrypoint","msg":"setting up scheme"}
{"level":"info","ts":1625999192.0047922,"logger":"entrypoint","msg":"Setting up controller"}
{"level":"info","ts":1625999192.0049727,"logger":"entrypoint","msg":"setting up webhooks"}
{"level":"info","ts":1625999192.045926,"logger":"virtualcluster-webhook","msg":"successfully created service/virtualcluster-webhook-service"}
{"level":"info","ts":1625999198.7617276,"logger":"virtualcluster-webhook","msg":"successfully generate certificate and key file"}
{"level":"info","ts":1625999198.7618213,"logger":"virtualcluster-webhook","msg":"will create validatingwebhookconfiguration/virtua
User Story
As an operator of cluster-api-provider-nested, I would like all components to emit logs in the same format so that the logs are easier to read and aggregate.
Detailed Description
As the output above shows, vn-agent writes klog-style plain-text lines while vc-manager writes structured JSON via the zap logger. It would be better if every component used a single logging format.
Anything else you would like to add:
One possible direction is sketched below.
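A minimal sketch of one possible approach, assuming the components standardize on the controller-runtime zap logger: enabling its development (console) encoder produces human-readable text closer to klog's output. The logger name and message below are placeholders mirroring the vc-manager logs above, not actual code from this repo.

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/log/zap"
)

func main() {
	// UseDevMode(true) selects zap's console encoder, so the output is
	// plain text instead of the JSON lines vc-manager currently emits.
	ctrl.SetLogger(zap.New(zap.UseDevMode(true)))

	// Example log call; "entrypoint" mirrors the logger name seen in the
	// vc-manager output above.
	entryLog := ctrl.Log.WithName("entrypoint")
	entryLog.Info("setting up manager")
}
```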
/kind feature
/assign
Not a big deal, but being consistent seems better.
Definitely agree, consistency would be great!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
/lifecycle frozen