Slow execution of features.sh during docker build
In doing a `docker build` (from Win10) with

```dockerfile
FROM openliberty/open-liberty:20.0.0.12-kernel-slim-java11-openj9-ubi
...
RUN features.sh
```

and the following feature set in `server.xml`:

```xml
<featureManager>
    <feature>cdi-2.0</feature>
    <feature>jaxrs-2.1</feature>
    <feature>mpMetrics-2.3</feature>
    <feature>mpHealth-2.2</feature>
    <feature>mpConfig-1.4</feature>
</featureManager>
```
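For context, a complete two-stage Dockerfile following this pattern might look roughly like the sketch below. The build-stage image, the `app.war` name, and the copy paths are illustrative assumptions, not taken from the original project:

```dockerfile
# Stage 1: build the application (tooling and paths are assumptions)
FROM maven:3.8-openjdk-11 AS build
COPY . /project
RUN mvn -f /project/pom.xml package

# Stage 2: kernel-slim Liberty; features.sh downloads only the features
# declared in the server.xml copied in just before it runs
FROM openliberty/open-liberty:20.0.0.12-kernel-slim-java11-openj9-ubi
COPY --chown=1001:0 src/main/liberty/config/server.xml /config/
RUN features.sh
COPY --chown=1001:0 --from=build /project/target/app.war /config/apps/
RUN configure.sh
```

The key ordering constraint is that `server.xml` must be in place before `RUN features.sh`, since that script reads the feature list to decide what to download.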
it takes about 5 minutes to execute the `features.sh` step on a fairly slow internet connection.
COMPARISONS
- If I take the same project outside of the docker build and do a native/host execution of the command run in `features.sh`:

  ```shell
  time featureUtility installServerFeatures --acceptLicense defaultServer --noCache
  ```

  it takes about the same time, so there isn't a container penalty per se.
- However, if I enable caching by dropping `--noCache`:

  ```shell
  time featureUtility installServerFeatures --acceptLicense defaultServer
  ```

  and then run it again with the cache populated, it drops to about 45 seconds.
QUESTION
~~Could we consider building the cached features into the mvn repo of the kernel-slim image? Yes it would make this image bigger but would still achieve the goal of keeping the application images we build smaller.~~ UPDATE: That idea doesn't make sense, deleting it.
If we were to put a cache of the features into kernel-slim, then we haven't saved any space in the base image, which was the purpose of doing a kernel-slim image in the first place. We might as well just go back to having every feature pre-installed.
@ericglau
I can confirm I see this too. My internet connection speed must be worse than the OP's, because it's not uncommon for this step to take 10 minutes or more. At this point, I think the "full" image should be used during iterative dev, and the kernel-slim image only for production, since kernel-slim makes it take way too long to test a one-line fix. It also makes the subsequent `docker push` take longer, because the layer containing all of the downloaded features has to be pushed (especially painful for those of us whose upload speed is even worse than our download speed). I'm resorting to sneaking updated war files into my pod (via `oc cp myApp.war <pod>:/opt/ol/wlp/usr/servers/defaultServer/apps/`) for unit testing, because the full docker build/push cycle is too slow to be productive.
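For anyone wanting to script that workaround, a rough sketch (the `app=myapp` label selector and war path are assumptions; adjust for your own app and deployment):

```shell
# Hypothetical sketch of the "copy the war into the pod" workaround.
# Find the first matching pod, then copy the rebuilt war into the
# Liberty apps directory so it can be picked up without an image push.
POD=$(oc get pods -l app=myapp -o name | head -n 1)
oc cp target/myApp.war "${POD#pod/}":/opt/ol/wlp/usr/servers/defaultServer/apps/
```

Note `oc cp` wants a bare pod name, hence stripping the `pod/` prefix that `-o name` emits.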
This is being addressed here: https://github.com/OpenLiberty/open-liberty/issues/16700
@scottkurz , changes went into 21.0.0.8 to make this faster. Can you try this again?
Hi @jdmcclur ... that seems to have helped. On my Win10 laptop, I did three runs each with 21.0.0.7 and 21.0.0.8 using a two-stage build, timed manually with a stopwatch. I captured the before and after steps along with `features.sh`, then averaged them out:
| (time sec) | 08 run 1 | 08 run 2 | 08 run 3 | 07 run 1 | 07 run 2 | 07 run 3 |
|---|---|---|---|---|---|---|
| Before (stage 1, mvn downloads) | 33 | 40 | 44 | 59 | 38 | 55 |
| features.sh | 37 | 51 | 49 | 73 | 71 | 79 |
| After (includes configure.sh) | 28 | 39 | 50 | 38 | 42 | 40 |
| (avg sec) | 21.0.0.8 | 21.0.0.7 |
|---|---|---|
| Before (stage 1, mvn downloads) | 39 | 51 |
| features.sh | 46 | 74 |
| After (includes configure.sh) | 39 | 40 |
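As a sanity check on the averages, a tiny shell helper (run timings copied from the table above; rounded to the nearest second):

```shell
# Average three timings (seconds), rounded to the nearest second.
avg() { awk -v a="$1" -v b="$2" -v c="$3" 'BEGIN { printf "%.0f\n", (a+b+c)/3 }'; }

avg 33 40 44   # 21.0.0.8 Before       -> 39
avg 37 51 49   # 21.0.0.8 features.sh  -> 46
avg 28 39 50   # 21.0.0.8 After        -> 39
avg 59 38 55   # 21.0.0.7 Before       -> 51
avg 73 71 79   # 21.0.0.7 features.sh  -> 74
avg 38 42 40   # 21.0.0.7 After        -> 40
```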
I'm encouraged here. I think about 45 seconds is quick enough to keep me from getting tempted to get distracted and start doing something else :) Though it still might be valid to make the tradeoff and prefer the full image in certain cases.