klog
Repeated trace in log file with error
/kind bug
What steps did you take and what happened: https://play.golang.com/p/6l8KNG_iOsM
Configuring klog to write to a file with logtostderr=false, the log file is opened and written to three times with the same content.
Removing the flush writes the headers three times, but no content.
What did you expect to happen: Only one open and one write.
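A minimal sketch of the reproduction (mirroring the playground link above; the log path is an arbitrary choice for illustration):

```go
package main

import (
	"flag"

	"k8s.io/klog"
)

func main() {
	klog.InitFlags(nil)
	flag.Set("logtostderr", "false")
	flag.Set("log_file", "/tmp/klog-repro.log") // illustrative path
	flag.Parse()

	// This single error line ends up in /tmp/klog-repro.log three times:
	// klog opens the same path once each for the error, warning, and info
	// severities, and an error is written to all three.
	klog.Error("this line is repeated in the log file")
	klog.Flush()
}
```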
/priority important-soon
https://github.com/kubernetes/klog/blob/master/klog.go#L926
```go
// Files are created in decreasing severity order, so as soon as we find one
// has already been created, we can stop.
for s := sev; s >= infoLog && l.file[s] == nil; s-- {
```
This is going to create a new file for each severity, opening the same file three times (error, warning, and info) if we have defined a particular log_file.
Not sure why the comment says that it is going to stop.
It will stop if the file for a particular severity already exists (i.e. l.file[s] != nil) -- it just doesn't have a good definition of "already created". We should probably fix this somewhere, though.
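As a hypothetical illustration only (not klog's actual code, and not the reverted fix), "already created" could be keyed by destination path instead of severity, so re-opening the same log_file becomes a cache hit:

```go
package main

import (
	"fmt"
	"os"
)

// Hypothetical sketch: a path-keyed cache gives "already created" a
// meaningful definition when every severity maps to one log_file.
type fileCache struct {
	byPath map[string]*os.File
}

func (c *fileCache) open(path string) (*os.File, error) {
	if f, ok := c.byPath[path]; ok {
		return f, nil // same destination already open: reuse it
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		return nil, err
	}
	c.byPath[path] = f
	return f, nil
}

func main() {
	c := &fileCache{byPath: map[string]*os.File{}}
	a, _ := c.open("/tmp/klog-demo.log")
	b, _ := c.open("/tmp/klog-demo.log")
	fmt.Println(a == b) // true: the second "create" reuses the handle
}
```

Note that reusing the handle would only remove the duplicate opens (and duplicate headers); the duplicated content comes from the write path in the next comment.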
It's also configured to write to multiple files: https://github.com/kubernetes/klog/blob/e88f7305ac70285fef29b8acebb00ba53136eab4/klog.go#L765-L777
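Paraphrasing those linked lines (simplified, not verbatim): a line logged at error severity cascades through the lower severities' files as well, and when log_file is set those are all the same file.

```go
// Simplified paraphrase of klog's severity cascade in output():
// an error line falls through to the warning and info files too,
// so a shared log_file receives the same data three times.
switch s {
case errorLog:
	l.file[errorLog].Write(data)
	fallthrough
case warningLog:
	l.file[warningLog].Write(data)
	fallthrough
case infoLog:
	l.file[infoLog].Write(data)
}
```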
/reopen
the attempted fix for this caused severe performance regressions
@liggitt: Reopened this issue.
The fix was reverted in https://github.com/kubernetes/klog/pull/69
When a new fix is developed for this issue, it must be accompanied by (or preceded by) benchmark tests that exercise the cases that regressed in https://github.com/kubernetes/kubernetes/pull/78465
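A hypothetical shape for such a benchmark (the package name, path, and workload are illustrative; real tests should mirror the workloads that regressed in the linked PR):

```go
package klogbench

import (
	"flag"
	"sync"
	"testing"

	"k8s.io/klog"
)

var setupOnce sync.Once

// setup configures klog once to log to a file, as in the regressed case.
func setup() {
	fs := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(fs)
	fs.Set("logtostderr", "false")
	fs.Set("log_file", "/tmp/klog-bench.log") // illustrative path
}

// BenchmarkErrorToFile exercises the error-to-file hot path.
func BenchmarkErrorToFile(b *testing.B) {
	setupOnce.Do(setup)
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		klog.Error("benchmark line")
	}
	klog.Flush()
}
```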
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Any update on this?
/remove-lifecycle stale
/lifecycle frozen