cryostat-legacy
feat(discovery): use hibernate, h2database, postgresql
Part of #1028. Related to #937, #938, #939. Fixes #942. Fixes #943.
- feat(db): add in-memory h2 database w/ Hibernate
- extract StorageModule
- extract AbstractDao
- make database parameters configurable
- format sql in logs
- use jsonb for subtrees
- use file-based h2 by default
- minor cleanup
- rough setup for postgres db and switching from h2 in-memory, h2 file, and postgres remote
- minor postgres fixes
By default, an in-memory H2 database will be used. There are various new environment variables that can be set to control various aspects of JPA/Hibernate for connecting to a database. The easiest high-level way to test each configuration is to pass an argument to `smoketest.sh`:
```sh
$ sh smoketest.sh          # use h2 in-memory
$ sh smoketest.sh h2file   # use h2 with file-based persistence, writes into the Cryostat conf dir
$ sh smoketest.sh postgres # spin up a PostgreSQL container using podman and connect to it remotely
```
As this implementation matures and stabilizes, I think we should add a simple startup check that the expected h2 file-based location is accessible and writable, and default to h2:file. If not, then log the problem and fall back to h2:mem. The environment variables would still be available to override this setting, so that h2:mem can be forced or a connection to an external Postgres instance can be established.

The h2:mem database is obviously always private to the Cryostat instance, and the h2:file database can be assumed to be as well, so it is OK for Cryostat (via Hibernate) to initialize these databases and set up schemas etc. For an external Postgres instance it is likely unsafe for Cryostat/Hibernate to modify the database schema, even if the user Cryostat connects as has permission to do so, so it will be up to the deploying end user to ensure the Postgres instance is correctly configured. There is some more work we can do here to create schema scripts and other things that are useful in development as well as a reference for Postgres users.
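A minimal sketch of what that startup check could look like: prefer a file-backed H2 database when the conf dir is writable, otherwise fall back to in-memory. The helper name and the JDBC URL literals are illustrative assumptions, not Cryostat's actual configuration code:

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the proposed h2:file -> h2:mem fallback.
// Path layout and URL strings are illustrative, not Cryostat's real config.
public class DbUrlSelector {
    static String selectJdbcUrl(Path confDir) {
        if (Files.isDirectory(confDir) && Files.isWritable(confDir)) {
            // conf dir is usable: persist to a file-backed database there
            return "jdbc:h2:file:" + confDir.resolve("h2").toAbsolutePath();
        }
        // in the real implementation, log the problem before falling back
        return "jdbc:h2:mem:cryostat";
    }
}
```

Environment-variable overrides would then simply short-circuit this selection before it runs.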
I'm wondering whether these printed Hibernate logs should be telling us anything more? I'm not sure they contain anything useful...
```
Hibernate: /* select generatedAlias0 from PluginInfo as generatedAlias0 */ select plugininfo0_.id as id1_0_, plugininfo0_.callback as callback2_0_, plugininfo0_.realm as realm3_0_, plugininfo0_.subtree as subtree4_0_ from PluginInfo plugininfo0_
```
It may not be useful to look at right now, but it's just logging the database queries being made. Should something go wrong then the logs would probably be handy to have :-)
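For reference, this output is controlled by Hibernate's standard logging properties (the `/* ... */` comment prefix in the log above comes from `hibernate.use_sql_comments`); these are stock Hibernate settings, not anything specific to this PR:

```properties
# print generated SQL to the log
hibernate.show_sql=true
# pretty-print the SQL instead of one long line
hibernate.format_sql=true
# prepend a comment describing the originating query
hibernate.use_sql_comments=true
```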
I guess this is unrelated, but is it a feature that automated rules don't start when we haven't authenticated a JVM with JMX auth? For example, when I run smoketest with `CRYOSTAT_DISABLE_JMX_AUTH=false` (the default), automated rules won't start, even after I authenticate with `user=smoketest,password=smoketest`. But when I set the env var to `true`, they will.
Yea, that sounds unrelated. File a separate issue with reproduction steps please. Rules should run regardless of JMX auth, but it sounds like perhaps a failed JMX connection results in the rule no longer evaluating on its future scheduled times.
Also, it seems the postgres functionality requires `psql` to be installed. This should be documented somewhere later, I think.
`psql` wouldn't be needed by the Cryostat image at runtime; it's only needed if you're a developer running `smoketest.sh` and want to test out a Postgres setup that way. We could still document that, I guess.
When making and deleting custom targets, it seems custom targets are still stored in JSON on disk. Is this right? According to this PR, that should have changed: https://github.com/cryostatio/cryostat/issues/943.
It seems like the web-client side sends a POST to `v2/targets`, but that handler and its `CustomTargetPlatformClient` haven't been updated yet.
Ah yes, good catch! I got too focused on the JDP and KubeApi discovery mechanisms and brain farted on the custom targets. I can address that easily enough. Would you prefer I do that in this PR or as another follow-up?
This PR is fine, no worries :-)
That's just because the `quarkus-test` discovery plugin isn't publishing itself with any annotations, so your `.annotations.cryostat` in the match expression is an empty map.
https://github.com/andrewazores/quarkus-test/blob/f83dbc85c67e1f54330547f9dbe7e8761e9a0811/src/main/java/org/acme/AppLifecycle.java#L45
This structure corresponds to https://github.com/cryostatio/cryostat/blob/main/src/main/java/io/cryostat/platform/discovery/TargetNode.java . The `quarkus-test` plugin just doesn't populate anything for labels or annotations right now. You can see this in the match expression evaluator in the bottom right of your screenshot:
"labels": {},
"annotations": {
"platform": {},
"cryostat": {}
}
In the old-style discovery mechanisms, Cryostat uses various properties from the platform implementation to populate the annotations and labels, so you'll have information about the main class there when discovered via JDP, or Kubernetes labels when in OpenShift, etc. In this new-style, plugin push-based discovery, all of that information has to come from the plugin.
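As a rough illustration of what a push-based plugin would need to supply, here is a hypothetical payload builder; the key names and nesting mirror the empty structure shown in the screenshot above, but the exact schema and the `REALM` key are assumptions for illustration:

```java
import java.util.Map;

// Hypothetical sketch of a discovery-plugin node payload that actually
// populates labels and annotations, so match expressions like
// target.annotations.cryostat["..."] have data to evaluate against.
public class NodePayload {
    static Map<String, Object> targetNode(String connectUrl, String alias) {
        return Map.of(
                "target", Map.of(
                        "connectUrl", connectUrl,
                        "alias", alias,
                        "labels", Map.of("app", alias),
                        "annotations", Map.of(
                                "platform", Map.of(),
                                // key name is illustrative, not a fixed schema
                                "cryostat", Map.of("REALM", alias))));
    }
}
```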
I'm not sure if this is just a function of smoketest being a dev testing deployment of Cryostat or not, but custom targets do not persist after restarting the application as they did before. Other than that comment/question, it looks great!
Try `sh smoketest.sh h2file` or `sh smoketest.sh postgres`. The current script default is an in-memory h2 database instance, which doesn't actually persist anything, so it's always completely empty when you restart (i.e. all contents are lost on Cryostat shutdown). The file-backed h2 database and a proper Postgres instance both have real persistence, and your custom target definitions should appear on a restart.
I think I tried all 3 different instances and they did the same thing, but if it's working on your end it's more than likely my problem again :/ . I will retry.
Hmm. It does work for me with `h2file`. @tthvo @hareetd can you test that?
It isn't working for me either with `h2file`. To double-check, the expected workflow is that once a custom target definition is posted to `v2/targets` (regardless of whether such a custom target actually exists or not), it's added to the `CustomTargetPlatformClient`, resulting in a `TargetDiscoveryEvent` being issued, which should trigger an update of the platform's discovery tree in persistent storage, correct?
Yeah, all 3 for me still aren't persisting. I've just been (with the web-client running `npm run start:dev` on :9000):

- creating a custom target
- ctrl+C
- rerunning `CRYOSTAT_DISABLE_SSL=true CRYOSTAT_DISABLE_JMX_AUTH=true CRYOSTAT_CORS_ORIGIN=http://localhost:9000 sh smoketest.sh h2file`

And the target does not come back after restarting.
Weird. I tried deleting the `conf/h2.mv.db` and `conf/h2.trace.db` files and starting from scratch, and now I also don't see the definition getting persisted. Maybe something to do with a schema change I made somewhere in the middle? I'll dig in.
Oh, I think I see the problem, though I'm not sure why deleting the `h2.*.db` files had anything to do with it. When the `BuiltInDiscovery` (replacement for `MergingPlatformClient` and bridge between `DiscoveryStorage` and other `PlatformClient`s) shuts down, it deregisters all of the realms that were created for the old platform clients, which removes the whole subtree from the database. On startup it registers a new realm for each of the platform clients and immediately repopulates the database by querying each platform client for its discovery tree and adding that as a subtree for the realm. Since the `CustomTargetPlatformClient` no longer has any storage of its own other than the database, this means custom targets no longer have any effective persistent storage. Funnily enough, they would persist as expected if the Cryostat server crashed and shut down uncleanly.
Bringing back separate persistent storage for custom targets isn't right, so the actual solution will have to revamp how the `BuiltInDiscovery` does the startup/shutdown registration/deregistration/reregistration of platform clients.
Ah wait, deleting custom targets that are loaded from the database no longer works. You can only delete custom targets that are defined in the current run.
Solved in latest commit.
I just ran into a situation where killing quarkus-test (`service:jmx:rmi:///jndi/rmi://cryostat:9097/jmxrmi`) with `podman kill` does not remove the target. I guess the target didn't have enough time to notify Cryostat after receiving SIGKILL. I wonder how we should go ahead and handle this?
`SIGKILL` will always terminate the process/container without leaving it time to do any cleanup, like notifying Cryostat that it's going away. `podman stop` will give it up to 10 seconds (by default) to perform such cleanup.
The callback URL that plugins provide at registration time is meant to be the mechanism for handling this. Currently Cryostat only checks those URLs and prunes unreachable plugins when it starts up, but we can make this happen periodically too.
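A periodic prune could be sketched like this; the reachability check is injected as a predicate since the real HTTP ping and plugin storage live inside Cryostat, and the scheduling comment is only one way it might be wired up:

```java
import java.net.URI;
import java.util.Set;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Hypothetical sketch of pruning registered plugins by callback reachability.
public class CallbackPruner {
    // Keep only plugins whose callback URL still responds to the check.
    static Set<URI> prune(Set<URI> callbacks, Predicate<URI> reachable) {
        return callbacks.stream().filter(reachable).collect(Collectors.toSet());
    }
    // In the real server this could run on a schedule, e.g. with a
    // ScheduledExecutorService invoking prune(...) every few minutes and
    // deregistering any plugin whose callback dropped out of the result.
}
```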
Somehow I cannot delete any of the targets other than custom targets. Is that intentional?
Yes, you can only delete custom targets. It's unfortunate that the UI still allows you to click the button on other targets discovered via automatic means, but that predates this PR.
Ah makes sense!
That said, with the new-ish discovery tree API, the UI could use that to both populate the list (by just looking for leaves of the tree and listing them) as well as toggle the delete button (if the selected leaf was part of the Custom Targets realm subtree). That's a bit inefficient and annoying to re-implement on the frontend, though.
Maybe we can just add e.g. `target.annotations.cryostat["realm"]="Custom Targets"` to the `ServiceRef`s. This way the standard target listing API still contains that information about the realm, and it doesn't need to be determined by descending through the whole discovery tree.