"No date field" error when a log entry has too many rows
Some of my logs contain stack traces as long as 65 rows. When this happens in a log file, log-viewer cannot correctly identify the next log entry and returns a "No date field, log cannot be merged" error. When I remove those entries from the log, it functions correctly again. I assume this is due to a maximum number of rows an entry is allowed to take. Maybe this could be exposed as a config variable?
Example:
2022-01-19 14:41:44,459 DEBUG { } http-nio-XXX-7070-exec-9 [XXX.DefaultTokenService]: Fout bij verifieren token.
java.security.SignatureException: token expired at X, now is 202X44.459Z
	at net.oauth.jsontoken.JsonTokenParser.verifyAndDeserialize(JsonTokenParser.java:132) ~[jsontoken-1.0.jar!/:?]
..... 63 rows
2022-01-19 14:41:44,461 ERROR { } X [XTokenRestController]: Foutcode: X
Thanks, and I like your tool!
There is no limit on the number of rows in an entry; it should work fine. Could you attach a log file? I cannot reproduce.
The reason is probably that the second line has no date.
2022-01-17 09:44:43,985 DEBUG { } httX-7070-exec-3 [XDefaultTokenService]: Fout bij verifieren token.
java.security.SignatureException: token expired at 2022-01-15T03:18:49.000Z, now is 2022-01-17T08:44:43.985Z
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-8.5.43.jar!/:8.5.43]
at java.lang.Thread.run(Thread.java:745) [?:1.7.0_291]
2022-01-17 09:44:43,988 ERROR { } htX70-exec-3 [XTokenRestController]: Foutcode: FOUT_BIJ_VERIFIEREN_TOKEN
Weird, because the following works:
2022-01-13 10:10:26,744 DEBUG { } http-nio-X-exec-5 [XDefaultTokenService]: Fout bij verifieren token.
java.security.SignatureException: Signature verification failed for issuer: X
at net.oauth.jsontoken.JsonTokenParser.verifyAndDeserialize(JsonTokenParser.java:126) ~[jsontoken-1.0.jar!/:?]
The format detector ignores lines starting with " at", so there are two lines with a date and one line without a date (java.security.SignatureException: token ...). The format detector decides the format is correct if the number of recognized lines is greater than the number of unrecognized lines multiplied by 2/3.
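To make the heuristic concrete, here is a minimal Python sketch of the detection rule as described above. This is an illustration only, not the actual log-viewer code; the date regex and the "at " handling are simplifications.

```python
import re

# Matches the "yyyy-MM-dd HH:mm:ss,SSS " prefix used in the logs above.
DATE_RE = re.compile(r"^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3} ")

def looks_like_known_format(lines):
    """Sketch of the detector: ignore stack-trace frames, then accept the
    format when recognized (dated) lines outnumber unrecognized lines * 2/3."""
    recognized = unrecognized = 0
    for line in lines:
        if line.lstrip().startswith("at "):  # stack-trace frame, ignored
            continue
        if DATE_RE.match(line):
            recognized += 1
        else:
            unrecognized += 1
    return recognized > unrecognized * 2 / 3
```

For the five-line example above this yields two recognized lines against one unrecognized (the SignatureException line), so detection succeeds; a long stack trace with many non-"at" continuation lines would tip the ratio the other way.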
You have to specify the log format manually in "config.conf". Add entries like the following to the logs=[ .. ] section:
{
  path: ${HOME}"/my-app/logs/*.log"
  format: {
    type: LogbackLogFormat
    pattern: "%d{yyyy-MM-dd HH:mm:ss,SSS} %level %m%n"
  }
}
That explains it, thank you. I don't use the provided UI to browse the file system; I only use the log-paths = {} section to point to logs on machines. Is it possible to specify log formats with this approach?
Yes, the logs=[ .. ] section is used for format detection and accessibility checks even when you open a log using a shortcut in log-paths = {}. When you open a shortcut, the system resolves it to the real file path, then searches for that path in logs=[ .. ].
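Put together, a config.conf along these lines should work. Note that the entry structure inside log-paths = {} below is an illustrative guess (check the log-viewer documentation for the exact shortcut syntax); the logs=[ .. ] entry follows the example earlier in the thread:

```hocon
# Shortcut used to open the log.
# NOTE: the inner structure of this entry is illustrative, not verified.
log-paths = {
  my-app: { path: ${HOME}"/my-app/logs/*.log" }
}

# Format detection and access checks are driven by this section,
# even when the log is opened via a shortcut above.
logs = [
  {
    path: ${HOME}"/my-app/logs/*.log"
    format: {
      type: LogbackLogFormat
      pattern: "%d{yyyy-MM-dd HH:mm:ss,SSS} %level %m%n"
    }
  }
]
```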
I got it working up to a point, but not yet how I want.
For example take the following log row:
2022-01-19 17:20:53,917 INFO { } main [org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer]: Tomcat started on port(s): 7070 (http)
It correctly detects the log with the following config:
pattern: "%d{yyyy-MM-dd HH:mm:ss,SSS}%m%n"
But not this:
pattern: "%d{yyyy-MM-dd HH:mm:ss,SSS} %level %m%n"
The problem seems to be with the subseconds (SSS): the auto format detector captures the subseconds, but a manual pattern does not. My working pattern results in detection of the time and merging of the logs, but not the subseconds or the log level (DEBUG, INFO, etc.). For your info, the pattern specified in log4j.xml for these logs, which does not work in log-viewer, is:
pattern: "%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p {%X{username} %X{client} %X{location}} %t [%c]: %m%n"
I tried to open the provided line:
2022-01-19 17:20:53,917 INFO { } main [org.springframework.boot.context.embedded.tomcat.TomcatEmbeddedServletContainer]: Tomcat started on port(s): 7070 (http)
It was detected as %d{yyyy-MM-dd HH:mm:ss,SSS} %level %m%n and shown correctly.

pattern: "%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p {%X{username} %X{client} %X{location}} %t [%c]: %m%n"
This pattern really doesn't work. There is a bug in parsing {%X{username} %X{client} %X{location}}. As a workaround, you can add commas between the fields: {%X{username},%X{client},%X{location}}
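Applied to the config.conf format entry from earlier in the thread, the comma workaround would look like the following. LogbackLogFormat is taken from the example above; whether it is the right type for a pattern that originates in log4j.xml is an assumption:

```hocon
format: {
  type: LogbackLogFormat
  # Workaround: commas between the %X fields instead of spaces
  pattern: "%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p {%X{username},%X{client},%X{location}} %t [%c]: %m%n"
}
```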
The 1.0.3 release contains a fix for parsing the %d{yyyy-MM-dd HH:mm:ss,SSS} %-5p {%X{username} %X{client} %X{location}} %t [%c]: %m%n pattern.
I will update my version and configuration and test the new release.
My automated script fails due to a new versioning format in the download URL:
https://github.com/sevdokimov/log-viewer/releases/download/v1.0.1/log-viewer-1.0.1.tar.gz
https://github.com/sevdokimov/log-viewer/releases/download/v.1.0.3/log-viewer-1.0.3.tar.gz
I run your tool on a few servers configured with Ansible for deployment/configuration.
I have tested version 1.0.3 and it looks great! The auto format detection seems to work now :)