libreant
Dockerfile to build a portable testing environment
Let's try this again. It is only one commit, and two simple files :)
@leophys as mentioned in #317 I've already worked on the docker files. You can find my previous work about docker integration on my docker repo: https://github.com/ael-code/libreant/tree/docker.
I understand that starting from scratch is a good learning opportunity. In any case I would like to work together on this issue and investigate the best docker integration approach.
I've followed an approach that presents one big difference from yours. According to the Docker best practices [1], it is suggested to put each service in its own container. In our case we use just two services, elasticsearch and libreant, which results in two different containers. The advantages with respect to your "all-in-one" container are:
- The possibility to use the official elasticsearch images [2], thus avoiding any risk of messing up the elasticsearch installation procedure.
- The container build process in general should be faster, because we are only going to change the libreant side, so the elasticsearch container remains completely cached.
- We can test libreant with different versions of elasticsearch by simply swapping the elasticsearch container.
- We can use docker-compose to orchestrate the integration of the two services.
The downside is that the command sequence to launch the libreant tests would be a little more complicated.
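To make the two-container idea concrete, a minimal docker-compose file could look like the sketch below. Service names, the image tag, the build context, and the published port are all assumptions for illustration, not the actual files from the branch linked above:

```yaml
# docker-compose.yml -- minimal sketch, assuming a Dockerfile for libreant
# at the repository root and the official Elasticsearch image.
version: '2'
services:
  elasticsearch:
    image: elasticsearch:2.4   # swap the tag to test against other ES versions
  libreant:
    build: .                   # assumes a Dockerfile in the repo root
    command: libreant
    ports:
      - "5000:5000"            # assumed web port
    depends_on:
      - elasticsearch          # start ordering only, not readiness
```

Note that `depends_on` only controls start order; it does not wait for elasticsearch to actually be ready, which is exactly the dependency problem discussed later in this thread.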
Blallo:
+# docker run --privileged --rm -ti libreant-box
Because we have to launch the services via
service elasticsearch start
and
service libreantd start

So what? Is --privileged needed for this? I'm not 100% sure, but I don't think so.
Dear @ael-code 😊 At last we meet again (ok, cinematic but not very appropriate 😄). As I stated in issue #317, I like docker as a tool to allow automatic testing from scratch, not as a deployment tool. I indeed structured the Dockerfile as a bare installation on a single machine (hence also the virtualenv: it is the way recommended in the docs [1]). Your suggested approach eases the pain of developing and also allows a smooth containerized deployment. As I said, this is not my favourite approach. If you think, as I do, that this is the future and that we should embrace such a future, then let us proceed the way you suggest.
Blallo:
+# docker run --privileged --rm -ti libreant-box
Because we have to launch the services via
service elasticsearch start
and
service libreantd start

So what? Is --privileged needed for this? I'm not 100% sure, but I don't think so.
I thought so, but it seems it also works without that flag 😄 I'll edit the PR.
It seems that we have mostly 2 doubts:
compose or not
I'm not sure I fully understand the pros and cons of each solution. @ael-code has been clear on why he developed using the "compose approach". What is @leophys's reply on this? Why is a single container better for a development use case?
virtualenv or not?
On this I have a clearer idea. Yes, the doc tells us to use virtualenv, so we should. However, the doc assumes a generic underlying box, on which dependency hell could happen. On docker we don't have this, because we control the underlying machine. I don't see how a virtualenv could be harmful, though, so "sticking to the doc" might be fine for me, to keep it all very coherent.
To clear up my point of view: I don't like Docker as a means to let people forget how to install and deploy a server. The compose approach that @ael-code proposes is awesome in its simplicity. Indeed, I think it is the future and that we could think about supporting such a workflow, providing our containerized libreant installation and allowing the happy developer to work in such an easy environment.
But I am not comfortable with stepping away from the classic way: configuring and installing all the needed services in a proper environment. I think debian stable is a good distro to refer to as a deployment option, and I structured the Dockerfile to resemble a full installation on a single machine as much as possible, also to test the installation procedure.
I could propose a compromise: we could create a containers directory as follows
└── containers
├── compose
│ ├── docker-compose.yml
│ └── Dockerfile
└── stretch
├── Dockerfile
└── libreantd
IIRC, from the root of the project it is then possible to launch the build as
$ docker build -t libreant-oldway containers/stretch
As far as I understood (*) there is a way to launch docker-compose up also with such a directory structure.
Blallo, I think I am missing something. Specifically, I understand well that you want to provide a Dockerfile as a developer tool, not as a deployment tool. That's fine. Still, I don't know why your solution is a better fit for this case than the docker-compose one.
I honestly find it hard to even understand docker strengths as a developer tool, so please be slow :)
Blallo, I think I am missing something. Specifically, I understand well that you want to provide a Dockerfile as a developer tool, not as a deployment tool. That's fine. Still, I don't know why your solution is a better fit for this case than the docker-compose one.
It does not fit better for mere development purposes, but I would say it stands as a typical installation case for the average developer/sysadmin, and it can both serve as a test for us to verify that a typical installation completes successfully and be used as a development environment.
I honestly find it hard to even understand docker strengths as a developer tool, so please be slow :)
Docker on Linux is nothing more than a cool chroot with namespaces and lots of automatic management. The success of Docker is that on macOS/Windows it offers the very same interface to developers/users while being based on different virtualization techniques. Therefore a developer on macOS can use the same command line as a Linux developer.
Blallo:
Blallo, I think I am missing something. Specifically, I understand well that you want to provide a Dockerfile as a developer tool, not as a deployment tool. That's fine. Still, I don't know why your solution is a better fit for this case than the docker-compose one.
It does not fit better for mere development purposes, but I would say it stands as a typical installation case for the average developer/sysadmin, and it can both serve as a test for us to verify that a typical installation completes successfully and be used as a development environment.
Is the Dockerfile proposed by @ael-code less good at this? Please help me understand, as I think I am not getting your point. Can we verify the installation procedure with @ael-code's approach? Can we use it as a development environment? If both answers are YES, could you highlight any other pros/cons?
I honestly find it hard to even understand docker strengths as a developer tool, so please be slow :)
Docker on Linux is nothing more than a cool chroot with namespaces and lots of automatic management. The success of Docker is that on macOS/Windows it offers the very same interface to developers/users while being based on different virtualization techniques. Therefore a developer on macOS can use the same command line as a Linux developer.
Running libreant is typically done with the libreant command, so I doubt that this can be much harder on any other OS. Which we don't support, btw.
Elasticsearch installation can, instead, be harder, especially with regard to versions. But in that case, just running a docker container for elasticsearch that exposes itself on the appropriate port is enough.
Anyway, I think that having more development tools (and more deployment tools, too) can only be a good thing.
After an out-of-band chat, I think we reached this agreement:
- this Dockerfile (or docker-compose) is not meant as a way to test anything, but as a way to develop libreant easily
- using docker it should be possible to run unit tests
- using docker it should be possible to run libreant
Any other idea like
a test for us to verify that a typical installation completes successfully
are good ideas, but they are deferred to a separate issue (I already have some ideas here)
So?
So, coming back to the issue, it seems to me that the approach using docker-compose is simpler and requires less maintenance. @ael-code himself reported that
The downside is that the command sequence to launch the libreant tests would be a little more complicated.
on which I ask for clarifications. What would be the typical command sequence in both scenarios?
Claim: after some investigation I realize that docker-compose is too restrictive to be used as a development tool. On the other hand it has really nice features that can be useful for production deployment.
Some of the problems that I've encountered:
- Need for a specific docker-compose.test.yml file. The tests should be run with a standalone docker-compose configuration file, separate from the deployment one. A separate docker-compose file allows overriding the main libreant image command into something like
python2 setup.py test
- The service dependency problem. docker-compose doesn't provide a way to make one service wait for another to be ready. In our case, in order to run the libreant tests we need to wait for the ES database to be ready. The solution would be to use something like https://github.com/vishnubob/wait-for-it. The problem is that the script would have to live either inside the libreant service image or in the libreant repo. The former would imply having a separate Dockerfile for a libreant image used only for testing (there is no concept of "extend"). I won't make any further comment on the latter option :).
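For reference, the kind of waiting logic that wait-for-it provides can also be sketched in a few lines of plain POSIX shell. The `elasticsearch:9200` address in the usage comment is an assumption about the compose service name:

```shell
#!/bin/sh
# wait_for: retry a health-check command once per second until it succeeds,
# or give up after the timeout (in seconds) and return non-zero.
wait_for() {
    timeout=$1; shift
    elapsed=0
    until "$@" >/dev/null 2>&1; do
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "timed out after ${timeout}s waiting for: $*" >&2
            return 1
        fi
        sleep 1
        elapsed=$((elapsed + 1))
    done
}

# Example usage (hypothetical hostname): block until Elasticsearch answers.
# wait_for 30 curl -sf http://elasticsearch:9200/
```

This avoids shipping an external script inside the image, at the cost of duplicating a small helper wherever the tests are launched from.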
I think that the best approach to use containers to run tests is to still use two separate images for libreant and Elasticsearch, plus a simple bash script. It would be something like this:
- Pull the official elasticsearch image (probably cached)
- Build the libreant image from the main Dockerfile of the project (probably cached)
- Start the elasticsearch container
- Wait for the database to be up and running
- Start the libreant container and launch the tests inside it
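The steps above could be sketched as a short shell script. The image names, the ES tag, the `--link` wiring, and the availability of curl on the host are all assumptions; nothing runs unless RUN_LIBREANT_TESTS=1 is set, so the file can be sourced or inspected safely:

```shell
#!/bin/sh
ES_IMAGE="elasticsearch:2.4"        # assumed tag; change to test other versions
LIBREANT_IMAGE="libreant-test"      # assumed name for the locally built image

run_libreant_tests() {
    # 1. Pull the official Elasticsearch image (a no-op when cached)
    docker pull "$ES_IMAGE"
    # 2. Build the libreant image from the project's main Dockerfile
    docker build -t "$LIBREANT_IMAGE" .
    # 3. Start the Elasticsearch container, publishing its HTTP port
    docker run -d --name libreant-es -p 9200:9200 "$ES_IMAGE"
    # 4. Wait for the database to accept HTTP connections (curl on the host)
    until curl -sf http://localhost:9200/ >/dev/null 2>&1; do
        sleep 1
    done
    # 5. Run the test suite inside the libreant container, linked to ES
    docker run --rm --link libreant-es:elasticsearch "$LIBREANT_IMAGE" \
        python2 setup.py test
    status=$?
    # Clean up the database container regardless of the test outcome
    docker rm -f libreant-es
    return $status
}

# Only run when explicitly requested
if [ "${RUN_LIBREANT_TESTS:-0}" = "1" ]; then
    run_libreant_tests
fi
```

Because the elasticsearch image is pulled rather than rebuilt, only step 2 changes between iterations, which is the caching advantage mentioned earlier in the thread.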
Hi @ael-code
I think that the best approach to use containers to run tests is to still use two separate images for libreant and Elasticsearch, plus a simple bash script. It would be something like this:
- Pull the official elasticsearch image (probably cached)
- Build the libreant image from the main Dockerfile of the project (probably cached)
- Start the elasticsearch container
- Wait for the database to be up and running
- Start the libreant container and launch the tests inside it
Again, why decouple the elasticsearch instance from the container hosting libreant? I can imagine that building/starting the services is significantly faster with the decoupled strategy, but then the dependency problem appears. Do you think that these recommendations from Docker to separate the services really fit our needs?
Blallo:
Again, why decouple the elasticsearch instance from the container hosting libreant? I can imagine that building/starting the services is significantly faster with the decoupled strategy, but then the dependency problem appears.
Well, on my computer elasticsearch takes quite a while to start up: something like 20 seconds. That's OK as a one-time wait before starting to test, but if we're tearing it up and down every time, then developing with it doesn't feel good.
I think I really need to test these to understand how they will actually work, but could you provide an estimate of the duration of a development iteration? If changing one line and re-running the tests (which AFAIU requires redoing the whole installation) takes more than 5 seconds, I don't think this can be called a practical development environment.
-- boyska
My proposal is to "decompose" this PR into two different issues.
- The creation of a docker-based solution for developers; it should focus on simplicity and speed (because iterative development must not be hard!). Optimally, it should be easy for the developer to switch to a different version of elasticsearch; this is useful, for example, to debug an error that doesn't happen on the latest version but has been discovered by travis on older ones.
- The creation of docker-based Travis tests to test the installation procedure itself. These tests must be as similar as possible to the installation section of the doc, so we're basically checking whether the documentation still applies. In the optimal case we should have several tests for several (or at least some) distributions.
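On the first point, switching the elasticsearch version could plausibly be done with a small docker-compose override file, assuming a compose-based setup; the filename, service name, and tags below are assumptions for illustration:

```yaml
# docker-compose.override.yml -- sketch: override only the ES image while
# keeping the rest of the base docker-compose.yml unchanged.
version: '2'
services:
  elasticsearch:
    image: elasticsearch:1.7   # older version, e.g. to reproduce a travis failure
```

docker-compose merges this file over the base one by default, so the developer only edits one line to change versions.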
What do you think?
I agree.
At this point I'd investigate (also as a means to learn a little bit of travis) whether it is possible to restructure the Dockerfile in this PR to use it with Travis.
I'd like to hear @ael-code on the docker-compose approach. Do you still think it is not flexible enough to accommodate our development needs?
@leophys in any case, did you successfully build the container and run it?
I think you are missing the ps command needed to run elasticsearch.
@ael-code
@leophys in any case, did you successfully build the container and run it?
I think you are missing the ps command needed to run elasticsearch.
I did and successfully ran the tests inside such container.
@leophys wrote:
I did and successfully ran the tests inside such container.
OK, I found the trick....
service elasticsearch start
invokes the elasticsearch start script at /usr/share/elasticsearch/bin/elasticsearch, which requires the ps command (provided by the procps package).
The elasticsearch package does not mention procps as a dependency, but you actually installed it as an implicit dependency of systemd.
Now the question is: why do we need systemd ??
The Travis part is in progress, meaning that #333 has provided scripts to build docker containers based on the docs' instructions, run unit tests inside them, and do basic integration tests. Once it is merged, it will just need to be integrated into Travis.
The "docker for developers" part has never been started, I think.