jetty-runtime
Regression testing against typical web applications
Regular builds should be tested against deployments of several common webapps, such as Spring Pet Clinic, to both demonstrate how to deploy those web applications to gcloud flex and to test for regressions.
A few simple apps to start out with could be pulled from GoogleCloudPlatform/getting-started-java.
@gregw This is a high priority for us. Can someone on your side help us with this? Running these tests locally would be sufficient for now. We can later integrate them into the CI server, when that's ready.
What do you mean by "running these tests locally"?
Due to the nature of how gcloud operates, we cannot run any of these images locally; they have to be deployed to gcloud to test against.
The issue Greg filed was about regression testing, in an automated fashion, several known webapps.
Webapps present in getting-started-java (once you fix the currently broken build on that repo):
- ./bookshelf/2-structured-data/target/bookshelf-2-1.0-SNAPSHOT.war
- ./bookshelf/3-binary-data/target/bookshelf-3-1.0-SNAPSHOT.war
- ./bookshelf/4-auth/target/bookshelf-4-1.0-SNAPSHOT.war
- ./bookshelf/5-logging/target/bookshelf-5-1.0-SNAPSHOT.war
- ./bookshelf/6-gce/target/bookshelf-6-1.0-SNAPSHOT.war
- ./bookshelf/optional-container-engine/target/bookshelf-gke-1.0-SNAPSHOT.war
- ./helloworld-compat/target/helloworld-compat-1.0-SNAPSHOT.war (appengine-web.xml present)
- ./helloworld-jsp/target/helloworld-jsp-1.0-SNAPSHOT.war
- ./helloworld-servlet/target/helloworld-servlet-1.0-SNAPSHOT.war
Do you want all of these presented as dockerized + gcloud deployables for flex only? (leaving out the helloworld-compat webapp)
Which repo would you like to see these in?
What I meant by "running locally" is that instead of having a CI server run the test script, we will invoke it from our personal machines. The tests themselves would involve running the images on GCP. We can then utilize the same tests for regression testing by adding them to a CI server.
See https://github.com/aslo/java-runtimes-test for the kind of thing I have in mind.
Let's start out with hello-world-servlet (exclude compat). We can add the more complex apps later when the testing pattern is established. The tests should go into this repo.
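A minimal sketch of what such a test script could look like, assuming gcloud is installed and authenticated. The project id, the deployed URL, and the assumption that the sample ships a usable app.yaml are all placeholders for illustration, not an agreed implementation; the deploy step is guarded behind `RUN_DEPLOY=1` so the helper functions can be exercised on their own:

```shell
#!/usr/bin/env bash
# Sketch: build helloworld-servlet, deploy it to the flex environment,
# then smoke-test the deployed URL until it answers HTTP 200.
set -eu

PROJECT_ID="${PROJECT_ID:-my-test-project}"   # hypothetical project id

# True only for an HTTP 200 status code.
is_ok_status() {
  [ "$1" = "200" ]
}

# Poll a URL with curl until it answers 200, up to a retry limit.
wait_for_200() {
  url="$1"; retries="${2:-12}"
  for _ in $(seq 1 "$retries"); do
    status=$(curl -s -o /dev/null -w '%{http_code}' "$url" || echo 000)
    if is_ok_status "$status"; then
      return 0
    fi
    sleep 5
  done
  echo "timed out waiting for $url" >&2
  return 1
}

# Guarded so the functions above can be sourced without deploying anything.
if [ "${RUN_DEPLOY:-0}" = "1" ]; then
  git clone https://github.com/GoogleCloudPlatform/getting-started-java.git
  (cd getting-started-java/helloworld-servlet && mvn -q clean package)
  (cd getting-started-java/helloworld-servlet \
    && gcloud app deploy --project "$PROJECT_ID" --quiet)
  wait_for_200 "https://${PROJECT_ID}.appspot.com/"
fi
```

The "up and running" check here is just a 200 poll; it can later grow into content assertions once the pattern is established.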
Looks like the various bookshelf webapps will require some extra work.
- needs Google Cloud Logging API operational first.
- seems to require java.util.logging infrastructure at server side
- needs access to mysql backend database somewhere
This will affect the dockerized + gcloud images for:
- ./bookshelf/2-structured-data/target/bookshelf-2-1.0-SNAPSHOT.war
- ./bookshelf/3-binary-data/target/bookshelf-3-1.0-SNAPSHOT.war
- ./bookshelf/4-auth/target/bookshelf-4-1.0-SNAPSHOT.war
- ./bookshelf/5-logging/target/bookshelf-5-1.0-SNAPSHOT.war
- ./bookshelf/6-gce/target/bookshelf-6-1.0-SNAPSHOT.war
- ./bookshelf/optional-container-engine/target/bookshelf-gke-1.0-SNAPSHOT.war
Are the war files from getting-started-java published in a maven repo anywhere?
I don't think so. Can we just build from source as part of the test script?
Just working out whether a separate git repo is doable. If those wars were published in a maven repo somewhere, then having a separate git repo is easy.
Or this build / dockerize / deploy is best done within the getting-started-java repo itself.
If that's the decision, then we'd only focus on the maven side in the getting-started-java repo, ignoring the gradle builds.
If we do the CI, then separate repos are possible again, as you would just have downstream builds with a local (to the CI) maven repository managing the behavior between the git repositories.
I'm not sure I'm following what you mean by "separate repos". Can we just keep the test orchestration code in this jetty-runtime repo?
sure, but where does that test orchestration code get its webapps?
the webapps live/exist on the getting-started-java git repository.
we could copy the source over to the jetty-runtime repo, but that seems like a bad idea.
we could copy the webapps themselves to jetty-runtime, but that's against Google policy on including buildable artifacts in git repos, is generally a bad idea for maintenance, and is also highly discouraged by both maven and gradle.
The reason the example project https://github.com/aslo/java-runtimes-test works is that everything is in one place: the webapp, the Dockerfile, the deploy routines, the gcloud command line tests, the scripts, etc.
I was thinking of just doing git clone and mvn package as part of the test script, but if that doesn't work, we can just copy the test application code over to this repo.
With the current split repositories, that won't work (yet), as you cannot build jetty-runtime with maven: the logging and openjdk8 artifacts (which maven itself needs) are not on a maven repository anywhere (not even a snapshot repository).
The build, using what we have currently, with no CI, looks like this:

```shell
git clone git@github.com:GoogleCloudPlatform/openjdk-runtime.git
cd openjdk-runtime
mvn clean install
cd ..

git clone https://github.com/GoogleCloudPlatform/getting-started-java.git
cd getting-started-java
mvn clean install
cd ..

git clone git@github.com:GoogleCloudPlatform/jetty-runtime.git
cd jetty-runtime
mvn clean install
cd ..

# new stuff
./run.sh
```
With each update to openjdk-runtime, or jetty-runtime, or getting-started-java, you'd have to manually build those in the correct order to test properly.
To do this right, you need the coordination between the git repositories and the artifacts that they produce.
Currently, there's gcr.io for the docker images (which is a bad idea!!) and the maven local repository on the user's machine.
To correct this, we proposed that a durable CI handle this using Jenkins (something that Travis and Circle CI cannot do, btw).
First: the docker images produced by the build have to be isolated per build-chain (with the current setup/structure, if two people happen to build the same chain, there's a collision at gcr.io, with different images on the same names and tags).
This would be accomplished by using a build system local docker repository, with build-id based tagging (either the jenkins default of a combination of repo + branch + commit + job id, or the use of the maven expanded SNAPSHOT ids), which is handed off to the downstream builds to use.
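The build-id tagging described above can be sketched as a small helper. The field order and the registry host are illustrative placeholders, not a settled convention:

```shell
# Compose a collision-free docker tag from repo + branch + commit + job id
# (the Jenkins-default scheme mentioned above; exact field order is illustrative).
build_tag() {
  repo="$1"; branch="$2"; commit="$3"; job="$4"
  # Branch names may contain '/', which is not valid in a docker tag.
  safe_branch=$(printf '%s' "$branch" | tr '/' '-')
  printf '%s-%s-%s-%s\n' "$repo" "$safe_branch" "$commit" "$job"
}

TAG=$(build_tag jetty-runtime feature/ci-tests a1b2c3d 42)
echo "$TAG"   # jetty-runtime-feature-ci-tests-a1b2c3d-42

# The image would then go to a CI-local registry instead of gcr.io, e.g.:
# docker tag jetty-runtime "localhost:5000/jetty-runtime:${TAG}"
```

Because every build-chain gets a unique tag, two people building the same chain no longer collide on the same image name.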
Next, the CI would also use a build-chain contextualized maven local repository, utilizing the SNAPSHOTs from the upstream builds as artifact references in its own build.
Note: the use of the maven local repository is now mandatory.
You cannot use the simple forms `mvn package` or `mvn test` anymore with this split repository setup; you will have to use `mvn install` at a bare minimum as a replacement for building or testing.
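The distinction matters because `mvn package` only writes to `target/`, while `mvn install` also copies the artifact into the local repository, which is the only place a sibling checkout can resolve it from. A sketch of where an installed artifact lands (the coordinates below are illustrative, not the real GAVs of these projects):

```shell
# Compute the local-repository path `mvn install` uses for a given
# group:artifact:version (coordinates are illustrative placeholders).
m2_path() {
  group="$1"; artifact="$2"; version="$3"; packaging="$4"
  # Group id dots become directory separators in the repository layout.
  group_path=$(printf '%s' "$group" | tr '.' '/')
  printf '%s/.m2/repository/%s/%s/%s/%s-%s.%s\n' \
    "$HOME" "$group_path" "$artifact" "$version" \
    "$artifact" "$version" "$packaging"
}

m2_path com.example.runtimes openjdk8 1.0-SNAPSHOT pom
```

A downstream build resolves the SNAPSHOT from exactly this path, which is why `mvn install` on the upstream repos has to run first.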
Yes, I think using the local maven repository is fine, and building the openjdk-runtime first from source is ok too.
So the priority is to develop:
- a "script" to deploy a hello-world-servlet style war and test that it is up and running
- the wars can be taken from a maven repo (either built locally or remotely released)
- initially the "script" will be kicked off from a local build (perhaps a profile)
- ideally the "script" will be kicked off from a persistent CI instance
- once we have the script working, we can build out the number and type of wars deployed and improve the "up and running" tests to check more capabilities
Note that, as I think we need CI sooner rather than later for #7, this should be set up in parallel to the development of the test deploy "script".
Also note that I'm saying "script" as it may not necessarily be a shell script; whatever technology choice is most appropriate.