Build error when running up.sh

Open salimfadhley opened this issue 2 years ago • 11 comments

For context:

I'm trying to get this running on an entirely clean docker-in-docker system.

Steps to reproduce:

  • git clone
  • up.sh

```
root@105f52fe9164:/hostroot/volume1/home/sal/software/PrometheusTube# ./up.sh
+ touch .secrets.env
+ DOCKER_DEFAULT_PLATFORM=linux/amd64
+ DOCKER_BUILDKIT=1
+ COMPOSE_DOCKER_CLI_BUILD=1
+ docker build -f Dockerfile.template -t gen .
[+] Building 0.6s (7/7) FINISHED                                  docker:default
 => [internal] load .dockerignore                                           0.1s
 => => transferring context: 2B                                             0.0s
 => [internal] load build definition from Dockerfile.template               0.0s
 => => transferring dockerfile: 181B                                        0.0s
 => [internal] load metadata for docker.io/library/python:latest            0.5s
 => [1/3] FROM docker.io/library/python:latest@sha256:31ceea009f42df76371a8fb94fa191f988a25847a228dbeac35b6f8d2518a6ef  0.0s
 => CACHED [2/3] WORKDIR /gen                                               0.0s
 => CACHED [3/3] RUN pip3 install jinja2 pycryptodome                       0.0s
 => exporting to image                                                      0.0s
 => => exporting layers                                                     0.0s
 => => writing image sha256:c9973f2fff2a06893995c83599d77d86ebbc1b332684dc299d3c66cbc3db9ee7  0.0s
 => => naming to docker.io/library/gen                                      0.0s
++ pwd
+ docker run -v /hostroot/volume1/home/sal/software/PrometheusTube:/gen -t gen localhost
python3: can't open file '/gen/templates/generate-compose.py': [Errno 2] No such file or directory
root@105f52fe9164:/hostroot/volume1/home/sal/software/PrometheusTube#
```

salimfadhley avatar Nov 25 '23 20:11 salimfadhley

mount looks fine to me, idk what we're missing here

an alternative would be to manually run generate-compose.py on the host machine, but that's a pain

still working on usability issues so maybe I'll try to repro later

horahoradev avatar Nov 25 '23 20:11 horahoradev

Anything I can do to test this hypothesis?

Just to clarify - the system I am running on is kinda odd. It's an Asustor NAS which provides a very bare-bones host OS. All I can really do is spin up docker and then docker into a more fully featured operating system. At the moment, all I'm running is a basic Ubuntu image with access to the docker daemon and the root filesystem. I didn't remount devfs or anything fancy.

One really common use-case for self-hosters is to just run stuff in Portainer. In that set-up, all we can really do is copy a docker-compose file into a UI and run it, so the current script-based installation really limits how this thing can be run. It's also going to appeal only to self-hosters with a lot of time.

Is it possible that you could ship a pre-compiled docker-compose file in the root of the project, so that people can copy it, change some variables, and quickly boot into the system?

salimfadhley avatar Nov 25 '23 20:11 salimfadhley

Is portainer the docker-in-docker mechanism you're referring to?

In this circumstance I probably could. Are you accessing the service from another location on your network, or is it on localhost?

Getting rid of the templated docker-compose will take some work. I want to make this easier to run, but it's tricky ofc.

maybe I can ship all of the services in a single container, and publish the image... hmm...

horahoradev avatar Nov 25 '23 21:11 horahoradev

Portainer is just a dockerized GUI for managing docker. I'm not using it in this circumstance, but it's what I'd like to use. It's a very common way of self-hosting apps: you just paste a docker-compose file into the GUI and it runs it.

I'm running docker-in-docker on the actual host. Here's what I did:

  • On the host, I built a docker image containing Ubuntu, git and docker
  • Booted that image, then mounted the host's root filesystem as /rootfs
  • Git clone, up.sh

The issue is that the host OS is really barebones. It includes the essential NAS stuff, some basic UNIX commands, and docker, but not a whole lot else, so no Python 3. I take advantage of the fact that it can run Docker.

> maybe I can ship all of the services in a single container, and publish the image... hmm...

Oh no! That would be a mess. Why not have a Dockerfile with multiple targets (supported for ages now), and then a docker-compose file that references each of those targets?

salimfadhley avatar Nov 25 '23 22:11 salimfadhley

Just to be clear, in your Dockerfile you can have:

```dockerfile
FROM --platform=linux/amd64 python:${PYTHON_VERSION}-slim-bullseye AS python_stuff
# ... python build instructions

FROM golang:latest AS go_stuff
# ... go build instructions
```

And then in your compose file you can reference go_stuff and python_stuff as the targets for locally built images:

```yaml
  python_service:
    platform: "linux/amd64"
    build:
      target: python_stuff
      context: .
      args:
        SOME_ENVIRONMENT_VARIABLE: 'Blah'
```

But it would be much better if people didn't need to build anything locally: if you have images already released on DockerHub, people who are not running in build-friendly environments (i.e. me) can just docker-compose up and fetch down the latest released versions.

salimfadhley avatar Nov 25 '23 22:11 salimfadhley

Right, I'm proposing we have a single, multi-stage docker image that anyone can run to get the whole service running. I will publish the image, and anyone can just run the finished product. No one needs to build from source, they just pull the single image. Obviously that complicates a few things, but it simplifies setup and e.g. log aggregation. Setup would be a single command with relatively fewer moving parts.

Env vars would be a little tricky; I'd need to move to .env files or something. I'll look into either that or simplifying the templating stuff tomorrow, since I have to timebox my work on this project. There's too much to be done on setup for one day.

horahoradev avatar Nov 25 '23 23:11 horahoradev

I haven't really articulated myself well here, but the problem really is:

  1. setup should be one command
  2. should work for all platforms
  3. should require minimal dependencies

and that's really hard, because this is a pretty heavy distributed system. Potential solutions:

  1. simplify the docker-compose templating stuff, ship a single compose file that accepts env var arguments for the origin
  2. ship some weird systemd-in-docker solution with a single published docker image, which has all the right defaults, and people can just pull down and run
  3. something else?

horahoradev avatar Nov 25 '23 23:11 horahoradev

Give me a few days to rip things out and simplify the process; there's a lot going on. In the end, I should have a published docker-compose file in source control that people can just run. Tomorrow might be enough, we'll see.

horahoradev avatar Nov 25 '23 23:11 horahoradev

> I will publish the image, and anyone can just run the finished product. No one needs to build from source, they just pull the single image.

I don't think there's any benefit in having a "single image" for all of the containers that have to run. You can have as many targets as you want, plus if you are dealing with compiled languages you probably want to compile in a compilation image, and then copy the executable output to an execution image. The alternative would be a very bloated image that ships all the compiler and dev tooling.
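The compile-then-copy pattern described above might be sketched like this (the stage and binary names such as `videoservice` are hypothetical, not taken from the actual project):

```dockerfile
# Build stage: carries the full Go toolchain, used only at build time
FROM golang:1.21 AS videoservice_builder
WORKDIR /src
COPY . .
# Statically linked binary so it can run on a minimal base image
RUN CGO_ENABLED=0 go build -o /out/videoservice ./cmd/videoservice

# Runtime stage: small image with no compiler or dev tooling
FROM alpine:3.18
COPY --from=videoservice_builder /out/videoservice /usr/local/bin/videoservice
ENTRYPOINT ["/usr/local/bin/videoservice"]
```

Only the final stage ends up in the shipped image, so the compiler and dev tooling never reach users.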

salimfadhley avatar Nov 26 '23 00:11 salimfadhley

> 2. ship some weird systemd-in-docker solution with a single published docker image, which has all the right defaults, and people can just pull down and run

I'm curious about what special issues PrometheusTube might have that cannot be dealt with by normal docker-compose stuff.

Most projects make things easy by shipping a docker-compose.yaml and Dockerfile in the root directory of the project. It's a given that you usually have to customize the project a bit, for example because ports and storage locations are always different. Some self-hosters might already have a database up and running and might not want to spin up an extra one.

I notice that you compile the docker-compose file from a template, so couldn't you just pre-compile a bunch of them as part of your GitHub Actions tooling? You'd have a developer docker-compose file and a typical-user docker-compose file. Anybody wishing for something more complex can hand-edit or recompile it themselves.

> simplify the docker-compose templating stuff, ship a single compose file that accepts env var arguments for the origin

This would be great. And that's a really "normal" way of using Docker Compose. If you don't want to customize the project all that much, a docker-compose file should be all you need.
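For what it's worth, Docker Compose supports `${VAR}` interpolation natively, reading values from the shell environment or from a `.env` file next to the compose file. A sketch of what that could look like (the service name, image name, and variables here are hypothetical, not the project's actual ones):

```yaml
services:
  frontend:
    image: prometheustube/frontend:latest   # hypothetical published image
    ports:
      - "${FRONTEND_PORT:-8080}:80"
    environment:
      ORIGIN: "${ORIGIN:-http://localhost:8080}"
```

A user would then only need a `.env` file alongside it, e.g. `ORIGIN=https://tube.example.com`, before running `docker-compose up`.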

salimfadhley avatar Nov 26 '23 00:11 salimfadhley

FYI, I've discovered a likely cause - user error:

When building docker-in-docker, bind mounts refer to paths on the host system, not the inner system.
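A minimal sketch of the path translation this implies, assuming (as in the shell prompt above) that the host's root filesystem is mounted at `/hostroot` inside the helper container:

```shell
# Inside the inner container, the checkout appears under /hostroot...
INNER_PATH=/hostroot/volume1/home/sal/software/PrometheusTube

# ...but the Docker daemon resolves bind mounts against the HOST filesystem,
# so the /hostroot prefix must be stripped before passing the path to -v:
HOST_PATH=${INNER_PATH#/hostroot}
echo "$HOST_PATH"   # prints /volume1/home/sal/software/PrometheusTube

# Hypothetical corrected invocation (not run here):
# docker run -v "$HOST_PATH":/gen -t gen localhost
```

This would explain the `No such file or directory` error: the daemon bind-mounted a `/hostroot/...` path that doesn't exist on the host, so `/gen` inside the `gen` container was empty.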

salimfadhley avatar Nov 26 '23 21:11 salimfadhley