klog
Allow klog configuration from somewhere other than flag
/kind feature
Describe the solution you'd like
Currently, the main way to configure klog is by registering its flags with the standard flag package. This covers most cases, but it would be useful to provide another configuration interface. My idea would be to provide something like an InitWithValues(init *klog.Configuration) method, where the config struct exposes the same values as the flags, but set from Go code instead.
Or, at the very least, provide an external method SetVerbosity(in int) as a means of configuring the verbosity in some way other than flags. This would be useful for applications that use other flag libraries, allowing them to configure klog as well without being tied to the standard flag package.
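To make this concrete, here is a rough sketch of what such an API could look like; Configuration, its fields, and SetVerbosity are hypothetical names for illustration, not existing klog API:

// Hypothetical API sketch only; none of this exists in klog today.
type Configuration struct {
	Verbosity      int    // same meaning as the -v flag
	LogFile        string // same meaning as the -log_file flag
	SkipLogHeaders bool   // same meaning as the -skip_log_headers flag
}

// InitWithValues would apply the given values to klog's internal settings.
func InitWithValues(init *Configuration) { /* ... */ }

// Or, more minimally, a single setter for the verbosity threshold.
func SetVerbosity(v int) { /* ... */ }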
Let me know what you think.
Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]
The problem with klog is that it is close to impossible to experiment with new APIs: as soon as they are in a release, the API has to adhere to the strict compatibility guarantee of a v2 Go module. Therefore I am not very enthusiastic about anything that will need such significant changes.
I am thinking it would just involve adding a public method that updates the value of settings.verbosity, probably just by calling Level.set().
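For illustration, a minimal sketch of what I have in mind (SetVerbosity is a hypothetical name, and the internal field names are how I remember the code, so they may differ):

// Hypothetical exported wrapper inside the klog package.
// SetVerbosity would update the internal -v threshold at runtime by going
// through the verbosity Level's flag.Value Set method.
func SetVerbosity(level int) error {
	return settings.verbosity.Set(strconv.Itoa(level))
}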
Would a simple addition of a public API cause problems, even if no other methods are touched?
We would be bound to support that API as-is forever (well, within klog/v2, but we don't want to switch to a /v3). The API therefore becomes technical debt.
Ah understood.
I understand the reluctance to make this change, but the lack of configurability creates situations that are very difficult to deal with.
For example, I have an application that depends on code in k8s.io/client-go/tools, which uses klog/v2 to output logs. Something has gone wrong and I would like to bump the v-level up to trace what's going on in the logs. However, doing so would require calling InitFlags in the application. The application uses zap as its normal logging platform (which we tell klog to use via klog.SetLogger). As such, we already expose flags for configuring log levels and related settings for that logger. In order to configure klog, I would need two sets of logging flags exposed to control different logging systems, which would be confusing even among developers, let alone end-users. So in order to add v-level configuration, I will probably just use a monkey patch and avoid a production change. Or I might attempt something bizarre like manipulating the command line so that klog can parse pretend args that I've injected. (I'm not even sure that's possible, but it's the sort of bizarre approach I'm forced to consider because of how this works.)
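For reference, the klog.SetLogger wiring I mentioned looks roughly like this (a simplified sketch using the go-logr/zapr adapter; our actual setup differs in the details):

package main

import (
	"github.com/go-logr/zapr"
	"go.uber.org/zap"

	"k8s.io/klog/v2"
)

func main() {
	// zap is the application's primary logger; klog output from
	// dependencies such as client-go gets routed into it.
	zapLog, err := zap.NewProduction()
	if err != nil {
		panic(err)
	}
	defer zapLog.Sync()

	klog.SetLogger(zapr.NewLogger(zapLog))

	klog.Info("this ends up in zap, not in klog's own output")
}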
The lack of configurability here, even if addressing it adds some tech debt for the future, is fairly problematic on its own. Any application using code that logs via klog, but does not itself use klog as its primary logger, is going to have problems. Any application that wants to configure logging by any means other than the command line at startup is also going to have problems.
I would need two sets of logging flags exposed to control different logging systems, which would be confusing even among developers, let alone end-users.
No, you don't :grin: You can create a flag set that is not the one from your program and use that to change the values:
var fs flag.FlagSet          // a private flag set, separate from the program's own flags
klog.InitFlags(&fs)          // register klog's flags on it
if err := fs.Set("v", "5"); err != nil {
	// handle the error
}
I understand that this is perhaps not the API that you want, but at least it exists and exposes all options in a consistent way.
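To spell it out, here is a self-contained sketch of how an application could expose a single flag of its own and forward it to klog through such a private flag set (the klog-v flag name is just an example, not a convention):

package main

import (
	"flag"
	"fmt"
	"os"

	"k8s.io/klog/v2"
)

func main() {
	// Private flag set that only klog knows about; it is never parsed
	// from os.Args, so klog's flags stay invisible to end users.
	var klogFlags flag.FlagSet
	klog.InitFlags(&klogFlags)

	// The application's own flag, forwarded to klog's -v.
	verbosity := flag.Int("klog-v", 0, "verbosity for klog-based dependencies")
	flag.Parse()

	if err := klogFlags.Set("v", fmt.Sprint(*verbosity)); err != nil {
		fmt.Fprintln(os.Stderr, "setting klog verbosity:", err)
		os.Exit(1)
	}

	klog.V(3).Info("only shown when -klog-v is 3 or higher")
	klog.Flush()
}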
Here is one potential way forward:
- export string constants with the names of all command line flags: those are part of the API already, we would just document them in a way that apidiff would also pick up any future changes (which has some value by itself, although it is unlikely that we'll break this)
- create a "config" submodule (i.e. a directory with its own go.mod file): this would need to be tagged separately, but that's doable
- add new API calls inside that submodule which call flag.Set for a local flag set on which InitFlags was called (a rough sketch follows below): this new API does not become part of "proper" klog and therefore can still be changed before committing to it in a 1.0.0 release
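A rough sketch of what that submodule could provide (the package layout, constant, and function names below are made up for illustration, not an agreed-upon API):

// Hypothetical config submodule for klog; sketch only.
package config

import (
	"flag"
	"strconv"

	"k8s.io/klog/v2"
)

// FlagV would be one of the exported constants naming klog's command line flags.
const FlagV = "v"

// fs is a flag set owned by this package; klog's flags are registered on it
// once and then changed programmatically via Set.
var fs flag.FlagSet

func init() {
	klog.InitFlags(&fs)
}

// Verbosity changes klog's -v threshold without any command line parsing.
func Verbosity(level int) error {
	return fs.Set(FlagV, strconv.Itoa(level))
}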
@dims: would that be okay?
@pohly works for me!
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this: /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-lifecycle rotten
/reopen
/triage accepted
@pohly: Reopened this issue.
In response to this:
/remove-lifecycle rotten
/reopen
/triage accepted
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this: /close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.