logbook-spring-webflux:Unable to compact body, is it a JSON?
After adding logbook-spring-webflux to a Spring Cloud Gateway application, accessing backend application A through the gateway produces the error "Unable to compact body, is it a JSON?"
Description
1. Backend application A enables compression via "server.compression.enabled=true" in bootstrap.yml.
2. Backend application A is accessed through the gateway.
3. Spring Cloud Gateway records the response log, but the response body is still compressed, so parsing it fails with "Unable to compact body, is it a JSON?"
Expected Behavior
No error is reported; the response body is logged in readable form.
Actual Behavior
"Unable to compact body, is it a JSON?"
Possible Fix
Detect whether the logged response body is compressed (e.g. via the "Content-Encoding" header) and decompress it before formatting.
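A minimal sketch of that check, using only the JDK (the class and method names here are hypothetical, not part of Logbook): a gzip body can be recognized by its two magic bytes 0x1f 0x8b (or by the "Content-Encoding: gzip" header) and decompressed before the formatter tries to compact it.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

// Hypothetical helper sketching the proposed fix: detect a gzip-compressed
// body and decompress it before the log formatter attempts to compact it.
public final class GzipBodies {

    // Every gzip stream starts with the magic bytes 0x1f 0x8b.
    public static boolean isGzipped(byte[] body) {
        return body.length >= 2
                && (body[0] & 0xff) == 0x1f
                && (body[1] & 0xff) == 0x8b;
    }

    public static byte[] gunzip(byte[] body) throws IOException {
        try (GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(body));
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            in.transferTo(out);
            return out.toByteArray();
        }
    }
}
```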
Steps to Reproduce
1. Backend application A enables compression via "server.compression.enabled=true" in bootstrap.yml.
2. Backend application A is accessed through the gateway.
3. Spring Cloud Gateway records the response log, but the response body is still compressed, so parsing it fails with "Unable to compact body, is it a JSON?"
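For reference, the compression setting from step 1 would look roughly like this in YAML form (Spring Boot's default compressed mime-types already include application/json):

```yaml
# bootstrap.yml of backend application A (step 1 above)
server:
  compression:
    enabled: true
```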
Context
Your Environment
- Version used: Spring Cloud 2020.0.3, Spring Boot 2.5.4, Logbook 2.14.0
Hello, there is a simple solution to this issue:
```java
// Note: PathFilterSink, RequestResponseJsonFilter and the isJson(...) helper
// are the commenter's own classes and are not shown here.
@Configuration
public class LogbookConfiguration {

    @Bean
    public Sink defaultSink(RequestResponseJsonFilter filter,
                            HttpLogFormatter formatter,
                            Set<MessageBodyDecoder> messageBodyDecoders) {
        // Index the gateway's body decoders (e.g. gzip, deflate) by encoding type.
        Map<String, MessageBodyDecoder> messageBodyDecodersByEncodingType = messageBodyDecoders.stream()
                .collect(Collectors.toMap(MessageBodyDecoder::encodingType, identity()));
        return new PathFilterSink(new LogstashLogbackSink(formatter), filter) {
            @Override
            public void write(Correlation correlation, HttpRequest request, HttpResponse response) throws IOException {
                // Decode only gzip-compressed JSON responses; pass everything else through.
                if (isContentEncodingGzip(response) && isJson(response.getContentType())) {
                    super.write(correlation, request,
                            new DecodingHttpResponse(response, messageBodyDecodersByEncodingType));
                } else {
                    super.write(correlation, request, response);
                }
            }

            private boolean isContentEncodingGzip(HttpResponse response) {
                return Objects.equals(
                        HttpHeaderValues.GZIP.toString(),
                        response.getHeaders().getFirst(HttpHeaders.CONTENT_ENCODING));
            }
        };
    }

    @AllArgsConstructor
    private static class DecodingHttpResponse implements ForwardingHttpResponse {

        private final HttpResponse response;
        private final Map<String, MessageBodyDecoder> messageBodyDecodersByEncodingType;

        @Override
        public HttpResponse delegate() {
            return response;
        }

        @Override
        public byte[] getBody() throws IOException {
            return decodeBody(unwrapResponse(response).getBody());
        }

        @Override
        public String getBodyAsString() throws IOException {
            // TODO maybe/better use charset from response.getContentType()?
            return new String(getBody(), response.getCharset());
        }

        private byte[] decodeBody(byte[] body) {
            MessageBodyDecoder decoder = messageBodyDecodersByEncodingType.get(HttpHeaderValues.GZIP.toString());
            return decoder.decode(body);
        }

        private HttpResponse unwrapResponse(HttpResponse response) {
            // Walk down the chain of forwarding wrappers to reach the raw response.
            HttpResponse result = response;
            while (result instanceof ForwardingHttpResponse) {
                result = ((ForwardingHttpResponse) result).delegate();
            }
            return result;
        }
    }
}
```
@kkrzeminski OK, thanks for your reply, I will try it later.
@kkrzeminski I wasn’t able to use your approach due to some dependency issues. But shouldn’t there be a simpler solution? According to the documentation (https://github.com/zalando/logbook#known-issues) the Logbook interceptor should be registered last, but with the Spring WebFlux autoconfiguration there are no interceptors to configure, only filters:
“The Logbook HTTP Client integration is handling gzip-compressed response entities incorrectly if the interceptor runs before a decompressing interceptor. Since logging compressed contents is not really helpful it's advised to register the logbook interceptor as the last interceptor in the chain.”
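For comparison, with the non-reactive Apache HttpClient integration that advice translates to wiring the Logbook response interceptor last, after the client's decompression. A sketch based on the logbook-httpclient README (not applicable to WebFlux, which is exactly the problem raised above):

```java
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClientBuilder;
import org.zalando.logbook.Logbook;
import org.zalando.logbook.httpclient.LogbookHttpRequestInterceptor;
import org.zalando.logbook.httpclient.LogbookHttpResponseInterceptor;

class LoggingClientFactory {

    // The response interceptor is added LAST so it sees the body only
    // after the client's decompressing interceptor has run.
    static CloseableHttpClient create(Logbook logbook) {
        return HttpClientBuilder.create()
                .addInterceptorFirst(new LogbookHttpRequestInterceptor(logbook))
                .addInterceptorLast(new LogbookHttpResponseInterceptor())
                .build();
    }
}
```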
Hi there, has this issue been resolved?
In order to prioritize the support for Logbook, we would like to check whether the old issues are still relevant. This issue has not been updated for over six months.
- Please check if it is still relevant in the latest version of Logbook.
- If so, please add a descriptive comment to keep the issue open.
- Otherwise, the issue will automatically be closed after a week.
This issue has automatically been closed due to no activities. If the issue still exists in the latest version of the Logbook, please feel free to re-open it.
Hello, is there any update on this issue?