
Define policy around klog.Warning usage in kubeadm

Open fabriziopandini opened this issue 5 years ago • 22 comments

In the kubeadm output there are some log entries that should be fixed:

[init] Using Kubernetes version: v1.16.2
W1113 10:20:56.260581     589 validation.go:28] Cannot validate kubelet config - no validator is available
W1113 10:20:56.260638     589 validation.go:28] Cannot validate kube-proxy config - no validator is available
[preflight] Running pre-flight checks

[control-plane] Creating static Pod manifest for "kube-apiserver"
W1113 10:29:17.627822    1065 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W1113 10:29:17.633914    1065 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W1113 10:29:17.635821    1065 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"

W1113 10:28:15.286513    1065 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks

Logs should be linked to a log level or converted into fmt.Printf, similar to other output

/cc @neolit123 @rosti

fabriziopandini avatar Nov 13 '19 12:11 fabriziopandini

/assign

SataQiu avatar Nov 14 '19 08:11 SataQiu

Logs should be linked to a log level or converted into fmt.Printf, similar to other output

In klog, log levels exist and are applicable only to the "info" severity. Hence I presume that you want to narrow down klog usage to the info severity and remove errors and warnings completely. Is that the case?

I do agree that we need a more clearly defined policy on the use of klog and printfs. We have to take into account that kubeadm is used by automated tools and end users alike. Swinging in one direction is going to hamper one of the user groups.

rosti avatar Nov 14 '19 09:11 rosti

klog.Error and klog.Warning are part of the klog logger and are used widely in k8s.

if kubeadm decides to not use anything but klog.V(x).Info that is fine, and it has the freedom to do so. but my suggestion is to do that in one PR that sweeps them all.

changed the title to reflect that we are having a discussion.

also noting that users that are annoyed by klog output can always pipe stderr to /dev/null.

but to expose the wider problem and to be completely fair, our mixture of stdout (printf) and stderr (klog) is messy.

  • ideally kubeadm should stop mixing printf and klog.
  • all output should be printed using the same logger
  • the logger backend should be abstracted and klog should not be imported per file.
  • all output should go to the same stream.
  • klog should start supporting omitting the "line info" W1113 10:29:17.635821 1065 manifests.go:214] (forgot what this is called in the klog source)
  • we can disable the "line info" by default and have a flag to enable it
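
The stdout/stderr mix described above can be illustrated with a small shell sketch. Here fake_kubeadm is a hypothetical stand-in, not a real command; it emulates how kubeadm currently prints fmt.Printf output on stdout and klog warnings (with the "line info" header) on stderr, using messages quoted earlier in this issue:

```shell
#!/bin/sh
# Hypothetical stand-in for kubeadm: fmt.Printf-style output goes to
# stdout, a klog warning with the "line info" header goes to stderr.
fake_kubeadm() {
    echo "[preflight] Running pre-flight checks"
    echo "W1113 10:20:56.260581     589 validation.go:28] Cannot validate kubelet config - no validator is available" >&2
}

# Until output is unified, callers have to split the streams themselves:
fake_kubeadm >stdout.log 2>stderr.log
grep -c '^\[preflight\]' stdout.log
grep -c '^W' stderr.log
```

Each grep counts one matching line, which is exactly the kind of per-stream handling automated tools are forced into today.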

neolit123 avatar Nov 14 '19 14:11 neolit123

@SataQiu looks like you sent https://github.com/kubernetes/kubernetes/pull/85382 but we haven't decided how to proceed yet. :)

neolit123 avatar Nov 16 '19 18:11 neolit123

Yes @neolit123, just giving it a try!

SataQiu avatar Nov 17 '19 04:11 SataQiu

Just wondering, would it make sense to use the cmd/kubeadm/app/util/output API to solve this? It would also help to unify output and implement structured output.

bart0sh avatar Nov 18 '19 12:11 bart0sh

@bart0sh i'm +1 to use any unified backend. but there are some decisions to make regarding stdout vs stderr and whether we want to continue using klog.

neolit123 avatar Nov 18 '19 15:11 neolit123

i'm going to investigate:

klog should start supporting omitting the "line info" W1113 10:29:17.635821 1065 manifests.go:214]

neolit123 avatar Nov 18 '19 15:11 neolit123
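
Until klog grows such an option, the header can be stripped after the fact. A sketch of an interim workaround; the sed pattern is an assumption about the header layout (severity letter + mmdd, time, thread id, file:line]) inferred from the logs quoted above:

```shell
#!/bin/sh
# Strip the klog header (e.g. "W1113 10:29:17.635821    1065 manifests.go:214] ")
# from a log line, keeping only the message text.
printf '%s\n' 'W1113 10:29:17.635821    1065 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"' \
  | sed -E 's/^[IWEF][0-9]{4} [0-9:.]+ +[0-9]+ [^ ]+\] //'
```

This is only a filter for consumers of the output; it does not change what kubeadm itself emits.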

Considering that this is a warning and not an error, it should show up in a log at the appropriate log level, not on stderr.

ejmarten avatar Dec 31 '19 18:12 ejmarten

Here's a quick fix for folks being thrown off by this behaviour in their automation scripts: redirect stderr to /dev/null (or elsewhere).

For example, if you wanted the join command, you'd do this:

kubeadm token create --print-join-command 2>/dev/null

polarapfel avatar Jan 10 '20 00:01 polarapfel

Try kubeadm reset, and then try again. The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar) to reset your system's IPVS tables.
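
The manual cleanup steps above can be collected into a small sketch. This is not part of kubeadm; the commands follow the reset output quoted here, and the DRY_RUN guard (default on) only prints them, since rm/iptables/ipvsadm are destructive:

```shell
#!/bin/sh
# Sketch of the manual cleanup that kubeadm reset does not perform.
# DRY_RUN=1 (the default here) only prints each command; set DRY_RUN=0
# to actually run them (destructive!).
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run rm -rf /etc/cni/net.d       # CNI configuration left behind by reset
run iptables -F                 # flush iptables rules
run ipvsadm --clear             # reset IPVS tables (only if IPVS was used)
```
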

GissellaSantacruz avatar Feb 18 '20 17:02 GissellaSantacruz

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot avatar Jun 06 '20 17:06 fejta-bot

/remove-lifecycle stale

neolit123 avatar Jun 07 '20 18:06 neolit123

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot avatar Oct 24 '21 18:10 k8s-triage-robot

/remove-lifecycle stale

neolit123 avatar Oct 24 '21 20:10 neolit123
