kubeadm
Define policy around klog.Warning usage in kubeadm
In the kubeadm output there are some log entries that should be fixed:
[init] Using Kubernetes version: v1.16.2
W1113 10:20:56.260581 589 validation.go:28] Cannot validate kubelet config - no validator is available
W1113 10:20:56.260638 589 validation.go:28] Cannot validate kube-proxy config - no validator is available
[preflight] Running pre-flight checks
[control-plane] Creating static Pod manifest for "kube-apiserver"
W1113 10:29:17.627822 1065 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1113 10:29:17.633914 1065 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1113 10:29:17.635821 1065 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
W1113 10:28:15.286513 1065 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
Logs should be linked to a log level or converted to fmt.Printf calls, similar to other outputs.
/cc @neolit123 @rosti
/assign
Logs should be linked to a log level or converted to fmt.Printf calls, similar to other outputs.
In klog, log levels exist and are applicable only to the "info" severity. Hence I presume that you want to narrow down klog usage to the info severity and remove errors and warnings completely. Is that the case?
I do agree that we need a more clearly defined policy on the use of klog and printfs. We have to take into account that kubeadm is used by automated tools and end users alike. Swinging too far in one direction is going to hamper one of those groups.
klog.Error and klog.Warning are part of the klog logger and are used widely in k8s.
if kubeadm decides to not use anything but klog.V(x).Info that is fine, and it has the freedom to do so. but my suggestion is to do that in one PR that sweeps them all.
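to make the distinction concrete, here is a minimal sketch (not kubeadm code; the klog v1 import path and flag wiring are assumptions on my part) of how klog severities and verbosity levels behave differently from plain printf output:

```go
// Minimal sketch of the distinction discussed above: klog verbosity (-v)
// only gates Info-severity output, while Warning and Error severities are
// always emitted on klog's stream, and fmt.Printf goes to stdout.
package main

import (
	"flag"
	"fmt"

	"k8s.io/klog"
)

func main() {
	fs := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(fs)
	_ = fs.Set("v", "1") // verbosity only affects klog.V(x).Info calls

	fmt.Println("[init] user-facing output on stdout") // what kubeadm prints today via fmt

	klog.V(1).Info("detail, shown because -v >= 1") // gated by verbosity
	klog.V(5).Info("detail, hidden at -v=1")        // gated by verbosity
	klog.Warning("printed regardless of -v")        // a severity, not a level
	klog.Error("printed regardless of -v")          // a severity, not a level

	klog.Flush()
}
```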
changed the title to reflect that we are having a discussion.
also noting that users that are annoyed by klog output can always redirect stderr to /dev/null.
but to expose the wider problem and to be completely fair, our mixture of stdout (printf) and stderr (klog) is messy.
- ideally kubeadm should stop mixing printf and klog.
- all output should be printed using the same logger
- the logger backend should be abstracted and klog should not be imported per file (see the sketch below).
- all output should go to the same stream.
- klog should start supporting omitting the "line info" prefix, e.g. W1113 10:29:17.635821 1065 manifests.go:214] (forgot what this is called in the klog source); we can disable the "line info" by default and have a flag to enable it
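as a rough illustration of the "abstracted backend" bullet above, something like the following could work. this is a hypothetical sketch only; the names (Printer, InfoPrinter, textPrinter) are made up and this is not the existing cmd/kubeadm/app/util/output API:

```go
// Hypothetical sketch of a single output abstraction for kubeadm, so that
// individual files print through one interface instead of importing klog
// and fmt directly. All names here are made up for illustration.
package output

import (
	"fmt"
	"io"

	"k8s.io/klog"
)

// Printer is the single surface the rest of the code base would use.
type Printer interface {
	Printf(format string, args ...interface{})   // user-facing progress output
	Warningf(format string, args ...interface{}) // non-fatal problems
	V(level int) InfoPrinter                     // leveled debug detail
}

// InfoPrinter carries verbosity-gated informational output.
type InfoPrinter interface {
	Infof(format string, args ...interface{})
}

// textPrinter sends user-facing and warning output to one stream and
// delegates leveled output to klog, so the backend choice lives in one place.
type textPrinter struct {
	out io.Writer
}

func NewTextPrinter(out io.Writer) Printer { return &textPrinter{out: out} }

func (p *textPrinter) Printf(format string, args ...interface{}) {
	fmt.Fprintf(p.out, format+"\n", args...)
}

func (p *textPrinter) Warningf(format string, args ...interface{}) {
	fmt.Fprintf(p.out, "[warning] "+format+"\n", args...)
}

func (p *textPrinter) V(level int) InfoPrinter {
	return verbose{level: klog.Level(level)}
}

type verbose struct{ level klog.Level }

func (v verbose) Infof(format string, args ...interface{}) {
	klog.V(v.level).Infof(format, args...)
}
```

with something along these lines, swapping klog out (or routing everything to a single stream) would be a change in one file instead of a sweep across the tree.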
@SataQiu looks like you sent https://github.com/kubernetes/kubernetes/pull/85382 but we haven't decided how to proceed yet. :)
Yes @neolit123, just have a try!
Just wondering, would it make sense to use the cmd/kubeadm/app/util/output API to solve this? It would also help to unify output and implement structured output.
@bart0sh i'm +1 to use any unified backend. but there are some decisions to make regarding stdout vs stderr and whether we want to continue using klog.
i'm going to investigate:
klog should start supporting omitting the "line info" W1113 10:29:17.635821 1065 manifests.go:214]
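on the "line info" point: if the vendored klog already exposes the skip_headers flag (it exists in recent upstream klog releases, but this is worth confirming against the version kubeadm vendors), the prefix could be dropped without patching klog. a rough sketch:

```go
// Rough sketch: drop the "W1113 10:29:17.635821 1065 manifests.go:214]"
// style prefix by setting klog's skip_headers flag, assuming the vendored
// klog version exposes it (the flag name is an assumption based on
// upstream klog and should be verified).
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	fs := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(fs)

	// If the flag exists in this klog version, enable it.
	if f := fs.Lookup("skip_headers"); f != nil {
		_ = f.Value.Set("true")
	}

	klog.Warning("a warning without the date/pid/file:line header")
	klog.Flush()
}
```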
Considering that this is a warning and not an error, it should show up in the log at the correct log level, not on stderr.
Here's a quick fix for folks being thrown off by this behaviour in their automation scripts: redirect stderr to /dev/null (or elsewhere).
For example, if you wanted the join command, you'd do this:
kubeadm token create --print-join-command 2>/dev/null
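for automation written in Go rather than shell, the same workaround might look roughly like this (a sketch only; it assumes kubeadm is on PATH and that only stdout carries the join command):

```go
// Sketch of the same stderr-discarding workaround for Go-based automation.
// Assumes kubeadm is on PATH and that the join command arrives on stdout.
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("kubeadm", "token", "create", "--print-join-command")

	var stdout bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = ioutil.Discard // drop klog warnings, same effect as 2>/dev/null

	if err := cmd.Run(); err != nil {
		fmt.Println("kubeadm failed:", err)
		return
	}
	fmt.Println("join command:", strings.TrimSpace(stdout.String()))
}
```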
Try kubeadm reset, and then try again. The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale
/remove-lifecycle stale
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale