cryostat-legacy
feat(reports): implement json-output automated report feature
Fixes #880. Depends on #1016.
Thoughts? I made a new class called EvalMapService whose only purpose is to serve the json-output through the ReportService, and it uses a lot of copy-pasted methods from the archived and active caches.
As a general high-level review note, I'd prefer to avoid adding a class like the `EvalReportService`. It'd be much better to refactor, generalize, abstract, etc. the existing classes as needed to extend the capabilities for HTML vs JSON formatted report responses, rather than copy-paste duplicating classes that only differ on a `-core` method call line or something, or differ only by passing in a different `HtmlMimeType` argument.
Ignoring tests and v2 handlers, how does it look now? It's pretty much just adding another parameter to all the methods involved, and it all works fine when tested from the command line.
ReportGetHandler
`$ curl localhost:8181/api/v1/reports/org-codehaus-plexus-classworlds-launcher-Launcher_foo_20220627T211824Z.jfr -H "Authorization: Basic dXNlcjpwYXNzCg==" -H "Accept: application/json" \
--output test.json && firefox test.json`
TargetReportGetHandler (with filter=heap)
`$ curl localhost:8181/api/v1/targets/localhost/reports/foo?filter=heap --header "Authorization: Basic dXNlcjpwYXNzCg==" -H "Accept: application/json" \
--output w.json && firefox w.json`
One thing I'm worried about is that some Result descriptions include HTML-formatted data in them, like:
"Allocations.class":{
"score":0.2794996037213864,
"name":"Allocated Classes",
"topic":"heap",
"description":"The most allocated class is likely 'byte[]'. This is the most common allocation path for that class: <ul><li>Arrays.copyOf(byte[], int) (40 %)</li></ul>"
}
Not sure what to do in these cases, since the results are generated by the SimpleResultProvider, which is forked from JMC's JfrHtmlRulesReport and RulesToolkit classes.
There isn't much to do about that right now. Maybe later we can implement something using JSoup to strip out that formatting, or else on the frontend when we consume this data we actually use that formatting in the rendered view.
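If we do end up using JSoup for that, stripping the markup from a description could be as simple as the sketch below (nothing is wired up yet; this just shows the idea, and jsoup is already on the classpath):

```java
import org.jsoup.Jsoup;

class DescriptionStripper {
    // Turn markup like "<ul><li>Arrays.copyOf(byte[], int) (40 %)</li></ul>"
    // into plain text by parsing it and keeping only the text nodes.
    static String stripHtml(String description) {
        return Jsoup.parse(description).text();
    }
}
```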
Looking good
Alright then, I will fix the issues, the v2 handlers, and the tests.
Should I be using ApiException instead of HttpException for HTTP 406 in the V2 handlers? I noticed that both exist, but I'm not sure what the difference is.
`ApiException`s get turned into `HttpException`s down the line anyway: https://github.com/cryostatio/cryostat/blob/d248233983e6e50f705de46326e462ab44009b43/src/main/java/io/cryostat/net/web/WebServer.java#L129
So it isn't really a big deal either way. The convention up to this point has been that V2+ uses `ApiException`, simply because that's our own class and we're a little more free to change how it works and how we handle it. `HttpException` is provided by Vert.x and so we don't have as much flexibility with it. Originally in v1 we used Vert.x's `HttpStatusException` and the lack of flexibility led to `ApiException` being created, and then a Vert.x version upgrade changed `HttpStatusException` into `HttpException` and put it into a different package :-)
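For illustration, the difference at the handler level is roughly the sketch below; the `ApiException` usage shown in the comment is an assumed shape, not copied from the real class:

```java
import io.vertx.ext.web.RoutingContext;
import io.vertx.ext.web.handler.HttpException;

class AcceptRejectionSketch {
    // v1 handler style: throw Vert.x's HttpException directly
    // (the int-status constructor exists in vertx-web 4.x).
    void rejectUnacceptableV1(RoutingContext ctx) {
        throw new HttpException(406);
    }

    // v2+ handler style would instead throw Cryostat's own ApiException,
    // e.g. `throw new ApiException(406, "Unsupported Accept header");`,
    // which WebServer maps to an HttpException later in the pipeline.
}
```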
Should I be supporting multiple Accept headers like `Accept: text/html;application/json;*/*`? I was originally going to, but I realized that since this is going to be an internally used endpoint, there wouldn't be much reason to if we just document that you must have at most 1 Accept header per request.
How it is right now, if there are multiple, a 406 is thrown since we are looking at the raw acceptHeader.
And I notice that when I press "View Report" on an ActiveRecording, it will add an Accept header like
`text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8`
because of, I'm guessing, the default behaviour of Accept headers in the downloadReport function in the cryostat-web Api.service.
What should happen here?
- Support multiple headers using `ctx.parsedHeaders().accept()` and do a bunch of manual parsing by checking q values and having an implicit order (e.g. text/html always takes priority over application/json, or something) (I don't think Vert.x has a way of doing this easily)
- Change the way the downloadReport function works with headers only.
- Something else...?
Also seems that integration tests fail because of HTTP 406 as well, like here
> Also seems that integration tests fail because of HTTP 406 as well, like here
Makes sense. Integration tests could need updating with a change like this.
> Should I be supporting multiple Accept headers like `Accept: text/html;application/json;*/*`? I was originally going to, but I realized that since this is going to be an internally used endpoint, there wouldn't be much reason to if we just document that you must have at most 1 Accept header per request. How it is right now, if there are multiple, a 406 is thrown since we are looking at the raw acceptHeader.
>
> And I notice that when I press "View Report" on an ActiveRecording, it will add an Accept header like `text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8` because of, I'm guessing, the default behaviour of Accept headers in the downloadReport function in the cryostat-web Api.service. What should happen here?
>
> 1. Support multiple headers using `ctx.parsedHeaders().accept()` and do a bunch of manual parsing by checking q values and having an implicit order (e.g. text/html always takes priority over application/json, or something) (I don't think Vert.x has a way of doing this easily)
> 2. Change the way the downloadReport function works with headers only.
> 3. Something else...?
I think Vert.x does support this: https://vertx.io/docs/vertx-web/java/#_routing_based_on_mime_types_acceptable_by_the_client
but we would need to set up the router in WebServer to use `.produces()` on route definitions, so that's out of scope for this particular PR. Ideally we do fully support this, but that's a larger change to make later on.
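For reference, the `.produces()` content-routing would look roughly like the sketch below once we get to it; the route path and handler names are illustrative, not the real WebServer wiring:

```java
import io.vertx.ext.web.Router;
import io.vertx.ext.web.RoutingContext;

class ContentRoutingSketch {
    // Register one route that can answer in either format; Vert.x negotiates
    // against the Accept header and picks one of the declared types.
    void addReportRoute(Router router) {
        router.get("/api/v1/reports/:recordingName")
                .produces("text/html")
                .produces("application/json")
                .handler(this::handleReport);
    }

    void handleReport(RoutingContext ctx) {
        // The negotiated type is exposed here when .produces() is used.
        String contentType = ctx.getAcceptableContentType();
        ctx.response().putHeader("Content-Type", contentType).end();
    }
}
```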
I'd rather not have requests to this endpoint outright fail if multiple `Accept`s are given, or if `Accept` has multiple listed content types. What do you think of the following behaviour?

- Check the `Accept` header(s)
- If there is exactly one, and its value is a known supported type, respond with that
- In any other scenario, respond with HTML

This way if the client didn't provide an `Accept`, or provided an invalid value, or provided multiple, or provided the weighted list, we just respond with a default HTML formatted doc (for now). This suits our own frontend's needs if we can set the `Accept` header there manually/explicitly to only HTML or only JSON, and also allows some reasonable degree of flexibility as a public API with particular constraints. If we have time to come back around and properly support content type routing then we can fix this endpoint up to fully support the content type negotiation semantics and `Accept` header values.
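Something like the sketch below would capture that rule; the method name and the supported-type list are illustrative, not the final handler code:

```java
import java.util.List;

import io.vertx.ext.web.MIMEHeader;
import io.vertx.ext.web.RoutingContext;

class AcceptNegotiationSketch {
    // Exactly one parsed Accept value that we support -> honour it;
    // anything else (missing, multiple, weighted list, unknown) -> default to HTML.
    static String negotiate(RoutingContext ctx) {
        List<MIMEHeader> accepted = ctx.parsedHeaders().accept();
        if (accepted.size() == 1) {
            MIMEHeader only = accepted.get(0);
            String mime = only.component() + "/" + only.subComponent();
            if ("application/json".equals(mime) || "text/html".equals(mime)) {
                return mime;
            }
        }
        return "text/html";
    }
}
```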
Sounds good, so then if they don't provide any valid accept headers, should we still respond with HTML, or still 406?
I'll leave that up to you. I would prefer the 406 for now, and then the proper `.produces()` content-routing later will properly implement the negotiation and possibly change that behaviour again. Hopefully there's time to get that done before release so that this endpoint's behaviour isn't changing between released versions.
I'm not sure why I am getting null pointer exceptions in my tests, e.g.:
@Test
void shouldRespond406IfAcceptInvalid() throws Exception {
when(authManager.validateHttpHeader(Mockito.any(), Mockito.any()))
.thenReturn(CompletableFuture.completedFuture(true));
when(ctx.request()).thenReturn(req);
when(ahp.parse(Mockito.any())).thenReturn(List.of("unacceptable"));
when(ctx.response()).thenReturn(resp);
when(resp.putHeader(Mockito.any(CharSequence.class), Mockito.any(CharSequence.class)))
.thenReturn(resp);
HttpException ex =
Assertions.assertThrows(HttpException.class, () -> handler.handle(ctx));
MatcherAssert.assertThat(ex.getStatusCode(), Matchers.equalTo(406));
}
errors like this will happen:
shouldHandleRecordingDownloadRequest Time elapsed: 0.025 s <<< ERROR!
io.vertx.ext.web.handler.HttpException: Internal Server Error
Caused by: java.lang.NullPointerException: Cannot invoke "io.vertx.ext.web.ParsedHeaderValues.accept()" because the return value of "io.vertx.ext.web.RoutingContext.parsedHeaders()" is null
I created a new class to make it easier to test the header parsing, but I'm not sure why it doesn't work. I understand that the actual AcceptHeaderParser object is being created in the real handler and not here in the test, but I'm wondering why mocking the ctx is okay when the same thing is happening with that as well.
I've also tried with `when(ahp.parse(Mockito.any(RoutingContext.class))).thenReturn(List.of("unacceptable"));` instead of `when(ahp.parse(Mockito.any())).thenReturn(List.of("unacceptable"));` but neither works.
> Caused by: java.lang.NullPointerException: Cannot invoke "io.vertx.ext.web.ParsedHeaderValues.accept()" because the return value of "io.vertx.ext.web.RoutingContext.parsedHeaders()" is null

I think this is saying that `ctx.parsedHeaders()` returning null is causing the chained `.accept()` call to NPE.
Looking at your tests, I don't see anything like a `Mockito.when(ctx.parsedHeaders()).thenReturn()`, which I think you'll need.
> I've also tried with `when(ahp.parse(Mockito.any(RoutingContext.class))).thenReturn(List.of("unacceptable"));` instead of `when(ahp.parse(Mockito.any())).thenReturn(List.of("unacceptable"));` but neither works.
This doesn't work because the handler implementation is using a directly instantiated header parser, not an injected one, so your mock one isn't actually being used.
> I understand that the actual AcceptHeaderParser object is being created in the real handler and not here in the test, but I'm wondering why mocking the ctx is okay when the same thing is happening with that as well.
Yes, the `ctx` is a mock, but it gets passed directly into the method implementation under test, which is how it ends up interacting with the real implementation's code. The `ahp` isn't passed into the handler constructor or a method call, so it doesn't become part of the system under test.
> > Caused by: java.lang.NullPointerException: Cannot invoke "io.vertx.ext.web.ParsedHeaderValues.accept()" because the return value of "io.vertx.ext.web.RoutingContext.parsedHeaders()" is null
>
> I think this is saying that `ctx.parsedHeaders()` returning null is causing the chained `.accept()` call to NPE. Looking at your tests, I don't see anything like a `Mockito.when(ctx.parsedHeaders()).thenReturn()`, which I think you'll need.
So I guess there is no use in creating a new class for this then. I was trying to avoid mocking ParsedHeaderValues because I don't know how to instantiate it, but I'll try digging more.
> > I've also tried with `when(ahp.parse(Mockito.any(RoutingContext.class))).thenReturn(List.of("unacceptable"));` instead of `when(ahp.parse(Mockito.any())).thenReturn(List.of("unacceptable"));` but neither works.
>
> This doesn't work because the handler implementation is using a directly instantiated header parser, not an injected one, so your mock one isn't actually being used.
Okay, I see now, gotcha, thanks!
ParsedHeaderValues phv = Mockito.mock(ParsedHeaderValues.class);
Mockito.when(ctx.parsedHeaders()).thenReturn(phv);
Mockito.when(phv.accept()).thenReturn(List.of(something, orOther));
> I was trying to avoid mocking ParsedHeaderValues because I don't know how to instantiate it, but I'll try digging more.
That's part of the beauty of mocks - you don't need to know how to instantiate the real thing :-)
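A fuller version of that mock setup might look like the sketch below; the MIMEHeader values are made up for illustration, and the handler call itself is elided:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import java.util.List;

import io.vertx.ext.web.MIMEHeader;
import io.vertx.ext.web.ParsedHeaderValues;
import io.vertx.ext.web.RoutingContext;
import org.junit.jupiter.api.Test;

class ParsedHeadersMockSketch {

    @Test
    void shouldSeeHtmlAcceptHeader() {
        RoutingContext ctx = mock(RoutingContext.class);
        ParsedHeaderValues phv = mock(ParsedHeaderValues.class);
        MIMEHeader html = mock(MIMEHeader.class);

        // Stub the whole chain the handler will call: ctx.parsedHeaders().accept()
        when(ctx.parsedHeaders()).thenReturn(phv);
        when(phv.accept()).thenReturn(List.of(html));
        when(html.component()).thenReturn("text");
        when(html.subComponent()).thenReturn("html");

        // ...pass ctx into the handler under test and assert on the response...
    }
}
```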
I just realized there's a function `getAll()` where I can call `ctx.request().headers().getAll(HttpHeaders.ACCEPT)` :(
Yep. The return type of `.headers()` is a MultiMap. We use that in various places already, ex.: https://github.com/cryostatio/cryostat/blob/main/src/test/java/io/cryostat/net/web/http/api/v2/AuthPostHandlerTest.java
I think you could even use `.headers().contains("Accept", "text/html", true)` as a replacement for the new "parser" class.
Wait, the `getAll()` doesn't actually parse the header into a List of strings, it just makes a List of one String which looks like `text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8`. I guess I will just use `String.split` to make it easier so I don't have to change tests too much.
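A minimal sketch of that splitting, in case it helps (the class and method names here are just illustrative):

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

class AcceptHeaderSplitter {
    // Split a raw Accept value like
    // "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8"
    // into bare MIME types, dropping any ";q=..." weights.
    static List<String> split(String rawAccept) {
        return Arrays.stream(rawAccept.split(","))
                .map(part -> part.split(";")[0].trim())
                .collect(Collectors.toList());
    }
}
```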
What about something like:
ParsedHeaderValues phv = ctx.parsedHeaders();
List<MIMEHeader> accept = phv.accept();
boolean returnHtml = accept.stream().anyMatch(header -> header.component().equals("text") && header.subComponent().equals("html"));
If the `.contains("Accept", "text/html", true)` works then I think that's better, but I'm not sure if the values there are already parsed and split up for you or if it'll still be the full `text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8` string.
> If the `.contains("Accept", "text/html", true)` works then I think that's better, but I'm not sure if the values there are already parsed and split up for you or if it'll still be the full `text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8` string.
They are not split for me unfortunately.
> What about something like:
> `ParsedHeaderValues phv = ctx.parsedHeaders(); List<MIMEHeader> accept = phv.accept(); boolean returnHtml = accept.stream().anyMatch(header -> header.component().equals("text") && header.subComponent().equals("html"));`
Looks nice. What if the header is something like `text/*`, which is valid? Would it make sense to do something like
`boolean returnHtml = accept.stream().anyMatch(header -> header.component().equals("text"));`
or
`header -> header.component().equals("text") && (header.subComponent().equals("html") || header.subComponent().equals("*"))`
? In the first case, it might not be good if someone has a header like "text/someRandomInvalidFormat".
> `header -> header.component().equals("text") && (header.subComponent().equals("html") || header.subComponent().equals("*"))`
Seems like that's encoding the expected behaviour.
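Wrapped up as a helper, that check might look like the sketch below; whether `*/*` should also be treated as HTML is a separate call, so it's left out here:

```java
import java.util.List;

import io.vertx.ext.web.MIMEHeader;

class HtmlAcceptCheck {
    // Treat "text/html" and "text/*" as requests for the HTML-formatted report.
    static boolean wantsHtml(List<MIMEHeader> accept) {
        return accept.stream()
                .anyMatch(h -> "text".equals(h.component())
                        && ("html".equals(h.subComponent()) || "*".equals(h.subComponent())));
    }
}
```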
I've added the ReportGetAcceptHeaderParser as a temporary class until #1016 is resolved, used like `boolean returnHtml = ReportGetAcceptHeaderParser.isAcceptable(ctx, apiVersion());`.
Also, I'm unsure of how to fix the ITs. I've tried just putting Accept headers on the requests, like:
// Get a report for the above recording
CompletableFuture<Buffer> getResponse = new CompletableFuture<>();
webClient
        .get(String.format("%s/%s", REPORT_REQ_URL, savedRecordingName))
        .putHeader(HttpHeaders.ACCEPT.toString(), HttpMimeType.HTML.mime()) // added line
        .send(
                ar -> {
                    if (assertRequestStatus(ar, getResponse)) {
                        MatcherAssert.assertThat(
                                ar.result().statusCode(), Matchers.equalTo(200));
                        MatcherAssert.assertThat(
                                ar.result()
                                        .getHeader(HttpHeaders.CONTENT_TYPE.toString()),
                                Matchers.equalTo(HttpMimeType.HTML.mime()));
                        getResponse.complete(ar.result().bodyAsBuffer());
                    }
                });
But they just seem to change the 406 error to a mysterious 500 error instead.
ReportIT, TargetReportIT, ArchivedReportJwtDownloadIT, RecordingWorkflowIT, and ReportJwtDownloadIT are affected.
500 generally indicates that something is broken and throwing an uncaught exception within Cryostat itself, not in the test process. You can do `podman logs -f cryostat-itest` to watch the log output from the Cryostat integration test container live as the tests execute (try the `repeated-integration-tests.bash` script), or after a test run completes you can find the log files at ex. `cryostat-itests-*.log`. There will be two log files per test run in there, one for the Cryostat server container and one for the test runner client process. Hopefully, you'll be able to see a stack trace in the server logs indicating what went wrong.
You can also see both sides of the log output in a CI run, ex. https://github.com/cryostatio/cryostat/runs/7205526045?check_suite_focus=true
The output under `Run mvn -B -U clean verify` is the runner client process output. Under `Print itest logs` you have the server container output.
And, to keep things easier to understand, you might want to try running the tests locally using `repeated-integration-tests.bash` to run only one test suite. This way you can more clearly see the relation between client and server logs, since they're processing and responding to only a small handful of test methods.
For some reason when I try to run `./repeated-integration-tests.bash`, the server.log gives
`Error: container cryostat-itest is not in pod cryostat-itests: no such container`
Should I be starting my own container of that name?
EDIT: It's some problem on my side with my repo, because it works on my main branch.
EDIT2: Actually it still doesn't work; it just has a different error.
Running `bash ./repeated-integration-tests.bash`, the server.log shows:
+------------------------------------------+
| Thu Jul 7 14:03:50 UTC 2022 |
| |
| /opt/cryostat.d/truststore.d is empty; no certificates to import |
+------------------------------------------+
+------------------------------------------+
| Thu Jul 7 14:03:50 UTC 2022 |
| |
| JMX Auth Disabled |
+------------------------------------------+
+------------------------------------------+
| Thu Jul 7 14:03:50 UTC 2022 |
| |
| SSL Disabled |
+------------------------------------------+
+ exec java -XX:+CrashOnOutOfMemoryError -Dcom.sun.management.jmxremote.port=9091 -Dcom.sun.management.jmxremote.rmi.port=9091 -Djavax.net.ssl.trustStore=/opt/cryostat.d/truststore.p12 -Djavax.net.ssl.trustStorePassword=3V_jsIWmylP7deucYX6HELEEzhyFrH-X -Dcom.sun.management.jmxremote.autodiscovery=true -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.registry.ssl=false -cp '/app/resources:/app/classes:/app/libs/cryostat-core-2.12.0.jar:/app/libs/common-7.1.1.jar:/app/libs/encoder-1.2.2.jar:/app/libs/flightrecorder-7.1.1.jar:/app/libs/flightrecorder.rules-7.1.1.jar:/app/libs/flightrecorder.rules.jdk-7.1.1.jar:/app/libs/nashorn-core-15.4.jar:/app/libs/asm-7.3.1.jar:/app/libs/asm-commons-7.3.1.jar:/app/libs/asm-analysis-7.3.1.jar:/app/libs/asm-tree-7.3.1.jar:/app/libs/asm-util-7.3.1.jar:/app/libs/openshift-client-5.4.1.jar:/app/libs/openshift-model-5.4.1.jar:/app/libs/kubernetes-model-common-5.4.1.jar:/app/libs/jackson-annotations-2.11.2.jar:/app/libs/openshift-model-operator-5.4.1.jar:/app/libs/openshift-model-operatorhub-5.4.1.jar:/app/libs/openshift-model-monitoring-5.4.1.jar:/app/libs/openshift-model-console-5.4.1.jar:/app/libs/kubernetes-client-5.4.1.jar:/app/libs/kubernetes-model-core-5.4.1.jar:/app/libs/kubernetes-model-rbac-5.4.1.jar:/app/libs/kubernetes-model-admissionregistration-5.4.1.jar:/app/libs/kubernetes-model-apps-5.4.1.jar:/app/libs/kubernetes-model-autoscaling-5.4.1.jar:/app/libs/kubernetes-model-apiextensions-5.4.1.jar:/app/libs/kubernetes-model-batch-5.4.1.jar:/app/libs/kubernetes-model-certificates-5.4.1.jar:/app/libs/kubernetes-model-coordination-5.4.1.jar:/app/libs/kubernetes-model-discovery-5.4.1.jar:/app/libs/kubernetes-model-events-5.4.1.jar:/app/libs/kubernetes-model-extensions-5.4.1.jar:/app/libs/kubernetes-model-flowcontrol-5.4.1.jar:/app/libs/kubernetes-model-networking-5.4.1.jar:/app/libs/kubernetes-model-metrics-5.4.1.jar:/app/libs/kubernetes-model-policy-5.4.1.jar:/app/libs/kubernetes-model-scheduling-5.4.1.jar:/app/libs/kubernetes-model-storageclass-5.4.1.jar:/app/libs/kubernetes-model-node-5.4.1.jar:/app/libs/okhttp-3.12.12.jar:/app/libs/okio-1.15.0.jar:/app/libs/logging-interceptor-3.12.12.jar:/app/libs/slf4j-api-1.7.30.jar:/app/libs/jackson-dataformat-yaml-2.11.2.jar:/app/libs/snakeyaml-1.26.jar:/app/libs/jackson-datatype-jsr310-2.11.2.jar:/app/libs/jackson-databind-2.11.2.jar:/app/libs/jackson-core-2.11.2.jar:/app/libs/zjsonpatch-0.3.0.jar:/app/libs/generex-1.0.2.jar:/app/libs/automaton-1.11-8.jar:/app/libs/dagger-2.34.1.jar:/app/libs/javax.inject-1.jar:/app/libs/commons-lang3-3.12.0.jar:/app/libs/commons-codec-1.15.jar:/app/libs/commons-io-2.8.0.jar:/app/libs/commons-validator-1.7.jar:/app/libs/commons-beanutils-1.9.4.jar:/app/libs/commons-digester-2.1.jar:/app/libs/commons-logging-1.2.jar:/app/libs/commons-collections-3.2.2.jar:/app/libs/httpclient-4.5.13.jar:/app/libs/httpcore-4.4.13.jar:/app/libs/vertx-web-4.2.5.jar:/app/libs/vertx-web-common-4.2.5.jar:/app/libs/vertx-auth-common-4.2.5.jar:/app/libs/vertx-bridge-common-4.2.5.jar:/app/libs/vertx-core-4.2.5.jar:/app/libs/netty-common-4.1.74.Final.jar:/app/libs/netty-buffer-4.1.74.Final.jar:/app/libs/netty-transport-4.1.74.Final.jar:/app/libs/netty-handler-4.1.74.Final.jar:/app/libs/netty-codec-4.1.74.Final.jar:/app/libs/netty-tcnative-classes-2.0.48.Final.jar:/app/libs/netty-handler-proxy-4.1.74.Final.jar:/app/libs/netty-codec-socks-4.1.74.Final.jar:/app/libs/netty-codec-http-4.1.74.Final.jar:/app/libs/netty-codec-http2-4.1.74.Final
.jar:/app/libs/netty-resolver-4.1.74.Final.jar:/app/libs/netty-resolver-dns-4.1.74.Final.jar:/app/libs/netty-codec-dns-4.1.74.Final.jar:/app/libs/vertx-web-client-4.2.5.jar:/app/libs/vertx-web-graphql-4.2.5.jar:/app/libs/graphql-java-17.3.jar:/app/libs/java-dataloader-3.1.0.jar:/app/libs/antlr4-runtime-4.9.2.jar:/app/libs/reactive-streams-1.0.3.jar:/app/libs/graphql-java-extended-scalars-17.0.jar:/app/libs/nimbus-jose-jwt-9.16.1.jar:/app/libs/jcip-annotations-1.0-1.jar:/app/libs/bcprov-jdk15on-1.69.jar:/app/libs/slf4j-jdk14-1.7.30.jar:/app/libs/gson-2.8.9.jar:/app/libs/caffeine-3.0.1.jar:/app/libs/jsoup-1.14.2.jar:/*' @/app/jib-main-class-file
Jul 07, 2022 2:03:52 PM io.cryostat.core.log.Logger info
INFO: cryostat started.
Jul 07, 2022 2:03:52 PM io.cryostat.core.log.Logger info
INFO: Selected NoSSL strategy
Jul 07, 2022 2:03:52 PM io.cryostat.core.log.Logger warn
WARNING: No available SSL certificates. Fallback to plain HTTP.
Jul 07, 2022 2:03:52 PM io.cryostat.core.log.Logger info
INFO: Local config path set as /opt/cryostat.d/conf.d
Jul 07, 2022 2:03:52 PM io.fabric8.kubernetes.client.Config tryServiceAccount
WARNING: Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
Jul 07, 2022 2:03:53 PM io.fabric8.kubernetes.client.Config tryServiceAccount
WARNING: Error reading service account token from: [/var/run/secrets/kubernetes.io/serviceaccount/token]. Ignoring.
Jul 07, 2022 2:03:53 PM io.cryostat.core.log.Logger info
INFO: Selected Default Platform Strategy
Jul 07, 2022 2:03:53 PM io.cryostat.core.log.Logger info
INFO: HTTP service running on http://localhost:8181
Jul 07, 2022 2:03:53 PM io.cryostat.core.log.Logger info
INFO: HTTP Server Verticle Started
Jul 07, 2022 2:03:53 PM io.cryostat.core.log.Logger info
INFO: Selecting platform default AuthManager "io.cryostat.net.NoopAuthManager"
Jul 07, 2022 2:03:53 PM io.cryostat.core.log.Logger info
INFO: Local save path for flight recordings set as /opt/cryostat.d/recordings.d
Jul 07, 2022 2:03:54 PM io.cryostat.core.log.Logger info
INFO: Max concurrent WebSocket connections: 2147483647
Jul 07, 2022 2:03:54 PM io.cryostat.core.log.Logger info
INFO: MessagingServer Verticle Started
Jul 07, 2022 2:03:54 PM io.cryostat.core.log.Logger info
INFO: JDP Discovery started
Jul 07, 2022 2:03:54 PM io.cryostat.core.log.Logger info
INFO: RuleProcessor Verticle Started
Jul 07, 2022 2:03:54 PM io.cryostat.core.log.Logger info
INFO: WebServer Verticle Started
Jul 07, 2022 2:03:56 PM io.cryostat.core.log.Logger info
INFO: Outgoing WS message: {"meta":{"category":"TargetJvmDiscovery","type":{"type":"application","subType":"json"},"serverTime":1657202636},"message":{"event":{"serviceRef":{"connectUrl":"service:jmx:rmi:///jndi/rmi://cryostat-itests:9091/jmxrmi","alias":"io.cryostat.Cryostat","labels":{},"annotations":{"platform":{},"cryostat":{"HOST":"cryostat-itests","PORT":"9091","JAVA_MAIN":"io.cryostat.Cryostat"}}},"kind":"FOUND"}}}
Jul 07, 2022 2:04:01 PM io.cryostat.core.log.Logger info
INFO: (10.0.2.100:55862): GET /health 200 275ms
Interestingly, the client log says:
...
...
...
Copying config sha256:ddd560e6727d5859b7f3d1b5df4c4e54504dbf78a08fde2de62eba361e25a047
Writing manifest to image destination
Storing signatures
e9d3cbb426a00dee0203d967ea752a693a62fe22889c6ddf4b6085ea91222326
[INFO]
[INFO] --- exec-maven-plugin:3.0.0:exec (start-cryostat) @ cryostat ---
c8ca7e704f831e7c98993bddb0273e4287e446d0d883629ab3af8f0897b31a9d
[INFO]
[INFO] --- exec-maven-plugin:3.0.0:exec (wait-for-cryostat) @ cryostat ---
curl: (56) Recv failure: Connection reset by peer
[INFO]
[INFO] --- exec-maven-plugin:3.0.0:exec (wait-for-jfr-datasource) @ cryostat ---
[INFO]
[INFO] --- exec-maven-plugin:3.0.0:exec (wait-for-grafana) @ cryostat ---
[INFO]
[INFO] --- maven-failsafe-plugin:2.22.2:integration-test (default-cli) @ cryostat ---
[INFO] No tests to run.
...
...
...
I'm going to take a break from this task, since it will eventually be very much affected by the #1016 issue. There is also some really weird behaviour with how ParsedHeaderValues works, and I probably shouldn't keep going until the code is refactored.
Okay, that's fine. I've marked this PR as depending on that task so that we don't lose track.