Memory consumption is very high on large projects
Describe the bug
This is a bit vague -- sorry -- but our project at work has grown large enough (19 projects, 107 components) that when a low-level component (or the deps.edn file) is updated, it triggers a test run of almost all projects and all components, and our CI job gets killed for memory usage (BitBucket, 2x instance, already the largest we can have).
To Reproduce
Very hard to reproduce!
Expected behavior
poly test should free up heap and non-heap memory that it no longer uses (tricky: I suspect at least some of this is the large classloader space created by so many projects and so many components, but I haven't proved that yet).
Partial Workaround
We're running a fork of Polylith that calls (System/gc) in the test orchestrator after each test suite has run. This only hints to the JVM that we'd like a garbage collection, but it does seem to have been sufficient to reduce the memory footprint most of the time. See https://github.com/seancorfield/polylith/compare/4e946eab081cdef317377da2e9b2f61158dbc422..as-alias
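The shape of the change is roughly the following sketch (the function names here are placeholders for illustration, not the actual Polylith internals -- see the linked compare view for the real patch):

```clojure
;; Hypothetical sketch of the workaround: after each project's test
;; suite finishes, hint the JVM to run a garbage collection before
;; the orchestrator moves on to the next project.
(defn run-all-tests! [projects]
  (doseq [project projects]
    (run-project-tests! project) ; placeholder for the real test runner
    ;; System/gc is only a hint -- the JVM may ignore it -- but in
    ;; practice it has reduced peak memory between suites for us.
    (System/gc)))
```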
We may continue to tweak min/max heap sizes via JVM options to see if we can get a better outcome, but we've already had to set min/max to control heap usage in other environments, so we're threading a needle here already. I don't know whether there's an "ideal" solution to this (but Polylith's classloader isolation is problematic in a couple of areas, as noted in other discussions on Slack and other GH issues).
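For context, this is the kind of tuning we mean -- heap bounds set via an alias's :jvm-opts in deps.edn. The alias name and values below are illustrative only, not our actual settings:

```clojure
;; Illustrative deps.edn fragment -- :test-mem and the heap sizes
;; are example values, not our production configuration.
{:aliases
 {:test-mem
  {:jvm-opts ["-Xms512m"   ; initial/minimum heap
              "-Xmx3g"]}}} ; maximum heap
```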