
MicroProfile project cache contents differ from template contents, causing a slow, uncached initial image build for every new MicroProfile project

jgwest opened this issue 4 years ago • 3 comments

Codewind version: Latest built from master. OS: Confirmed on Windows and Linux

Description:

The MicroProfile cache Dockerfile (I believe it is https://github.com/eclipse/codewind/blob/master/src/pfe/file-watcher/dockerfiles/liberty/libertyDockerfile) that is used to build the cache currently differs from the actual MicroProfile template that is generated for the user.

This means that every time you create a new MicroProfile project, or disable/re-enable an existing project, you must wait 10-30 minutes for the container image to (re)build, with the vast majority of that time spent on the Java download (a separate issue).

From Jingfu:

Since the latest Dockerfile from the template has changed (the change is before the Java JDK download step in the Dockerfile, which is why it always re-downloads the Java JDK instead of using the cache), our cache image does not help speed up the Docker image build, because it still uses the old Dockerfile to build the cache image. This issue should exist on all platforms, for both project creation and project enablement.
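For context: Docker's layer cache only reuses layers up to the first instruction that differs, so any edit above the JDK download step invalidates every layer below it. A minimal illustration of the failure mode (these instructions are hypothetical, not the actual libertyDockerfile):

```dockerfile
# Hypothetical sketch, not the real libertyDockerfile.
FROM ubuntu:18.04

# If this layer (or anything above it) differs between the cache image's
# Dockerfile and the template's Dockerfile, every layer below is rebuilt...
COPY server-config/ /config/

# ...including this expensive step, which is why a mismatch above the JDK
# download forces the long Java download on every new project build.
RUN curl -fsSL https://example.com/jdk.tar.gz -o /tmp/jdk.tar.gz \
 && tar -xzf /tmp/jdk.tar.gz -C /opt
```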

I also highly recommend creating an automated test case that detects when the template and cache Dockerfiles are out of sync. It seems we didn't catch this mismatch for a while, and it is bad product behaviour for normal users, with no obvious workaround.
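One cheap form such a test could take (a sketch; the actual template Dockerfile path in the codewind repo is an assumption here) is a CI check that fails whenever the two files diverge:

```shell
# Sketch of an automated drift check between the template Dockerfile and
# the cache-image Dockerfile; the real repo paths may differ.
check_dockerfiles_in_sync() {
  template="$1"
  cache="$2"
  # cmp -s: byte-for-byte comparison, silent; non-zero exit means drift.
  if cmp -s "$template" "$cache"; then
    echo "in sync"
  else
    echo "DRIFT: $cache differs from $template" >&2
    return 1
  fi
}
```

In CI this could run as, e.g., `check_dockerfiles_in_sync <templateDockerfile> src/pfe/file-watcher/dockerfiles/liberty/libertyDockerfile` (first path hypothetical), so a non-zero exit fails the build the moment the files diverge.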

Steps to reproduce:

  1. Stop all Codewind containers.
  2. Clone, build, and run Codewind:
     docker system prune --all
     git clone [email protected]:eclipse/codewind
     cd codewind
     ./run.sh
  3. In the IDE (Eclipse, in my case): create a new project and wait for it to build and start. (The issue should be reproducible here; you can also proceed to the next step to confirm it also affects disable/re-enable.)
  4. Disable or delete the project, then enable/recreate the project.

jgwest · Aug 12 '19 14:08

This bug is only about fixing the current libertyDockerfile to match the current template (i.e. it should be a quick fix).

The other two bugs are about improving the general problems of cache invalidation without regeneration and of the slow Java download.

jgwest · Aug 12 '19 16:08

I am looking into this issue. I find that updating the cache image Dockerfile to match the template Dockerfile is more complicated than I thought, because each time we have to ensure and update: 1. the cache image Dockerfile itself; 2. the resources that COPY or ADD commands use in the Dockerfile (e.g. COPY /target/liberty/wlp/usr/servers/defaultServer /config/, ADD /artifacts/artifacts.tar.gz $HOME/artifacts).

So for the cache image, instead of using the hardcoded Dockerfile and resources to build it, we can pull a pre-built image from Docker Hub (under the eclipse namespace) to serve as the cache image. Then, when we build projects, we add --cache-from <cache image> to the existing docker build command so it uses the cache image to speed up the build.
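The proposed flow, sketched as shell (commands are printed rather than executed so the sketch doesn't require a Docker daemon; the image name eclipse/codewind-liberty-cache is a placeholder, not a confirmed published tag):

```shell
# Placeholder cache image name; the real tag under the eclipse namespace
# would be whatever gets pushed to Docker Hub.
CACHE_IMAGE="eclipse/codewind-liberty-cache"
PROJECT_DIR="."

# Pull the pre-built cache image first; '|| true' means a missing image or
# offline host falls back to an uncached build instead of failing.
echo "docker pull $CACHE_IMAGE || true"

# Then point the normal project build at the pulled image.
echo "docker build --cache-from $CACHE_IMAGE -t my-microprofile-app $PROJECT_DIR"
```

The `|| true` on the pull matches the verified behaviour described later in this thread: when the cache image is absent, docker build simply proceeds without a cache.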

By taking the approach above for the cache image, the advantages are: 1. It only takes ~15 seconds to pull the cache image instead of 10+ minutes to build it. 2. We no longer need to update the cache image Dockerfile and resources to match the template; the only thing we need to do is push the cache image to Docker Hub. 3. We can easily apply this approach to other project types (Swift, Lagom, etc.) that take a long time to build, by pushing those project types' cache images to Docker Hub.

@rajivnathan @elsony I also verified two cases: 1. With --cache-from <cache image>, if we don't have the cache image, the Docker engine will build the image from the beginning without using a cache. 2. With --cache-from <cache image>, if the user updates the Dockerfile, the Docker engine will still use the cached layers from before the user's update.

GeekArthur · Aug 13 '19 21:08

Currently buildah doesn't fully support --cache-from <cache image>, but they do have stories open in their repository: parent story https://github.com/containers/buildah/issues/599 and child story https://github.com/containers/buildah/issues/620. We will keep using the current cache structure (building the cache from scratch) until buildah fully supports --cache-from <cache image>.

GeekArthur · Aug 27 '19 18:08