Quick start demo doesn't work on Windows with WSL 2 Ubuntu
I tried to run the quick start demo, but the Ruby- and Python-based images error out on exec. I don't think this is something that can be worked around locally?
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
714efec629c8 timescale/promscale-demo-load "python3 load.py" 26 seconds ago Restarting (1) 1 second ago promscale-demo-load-1
a5888d0c126c timescale/promscale-demo-generator "flask run --host=0.…" 26 seconds ago Restarting (1) 1 second ago promscale-demo-generator-1
a8a86bdcdf76 timescale/promscale-demo-lower "bundle exec ruby lo…" 26 seconds ago Restarting (1) 2 seconds ago promscale-demo-lower-1
8abcb8759d74 timescale/promscale-demo-special "flask run --host=0.…" 26 seconds ago Restarting (1) 4 seconds ago promscale-demo-special-1
c984a108aa65 timescale/promscale-demo-digit "flask run --host=0.…" 26 seconds ago Restarting (1) 3 seconds ago promscale-demo-digit-1
d272298bde24 timescale/promscale-demo-upper "flask run --host=0.…" 26 seconds ago Restarting (1) 2 seconds ago promscale-demo-upper-1
1b28137ef3e2 vineeth97/promscale-demo-grafana "/run.sh" 26 seconds ago Up 22 seconds 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp promscale-demo-grafana-1
b79f042859c4 otel/opentelemetry-collector:0.55.0 "/otelcol --config=/…" 26 seconds ago Up 23 seconds 0.0.0.0:4317-4318->4317-4318/tcp, :::4317-4318->4317-4318/tcp, 0.0.0.0:14268->14268/tcp, :::14268->14268/tcp, 55678-55679/tcp promscale-demo-collector-1
5c16d64a22ba prom/prometheus:latest "/bin/prometheus --c…" 26 seconds ago Up 24 seconds 0.0.0.0:9090->9090/tcp, :::9090->9090/tcp promscale-demo-prometheus-1
b3b41640c361 jaegertracing/jaeger-query:1.36.0 "/go/bin/query-linux…" 26 seconds ago Up 6 seconds 0.0.0.0:16686->16686/tcp, :::16686->16686/tcp promscale-demo-jaeger-1
4711057370f8 timescale/timescaledb-ha:pg14-latest "/docker-entrypoint.…" 26 seconds ago Up 25 seconds 8008/tcp, 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp, 8081/tcp promscale-demo-timescaledb-1
0b3371656b3f prom/alertmanager:latest "/bin/alertmanager -…" 26 seconds ago Up 25 seconds 0.0.0.0:9093->9093/tcp, :::9093->9093/tcp promscale-demo-alertmanager-1
072e34c59955 quay.io/prometheus/node-exporter "/bin/node_exporter" 26 seconds ago Up 25 seconds 0.0.0.0:9100->9100/tcp, :::9100->9100/tcp promscale-demo-node_exporter-1
$ docker logs promscale-demo-load-1
exec /usr/local/bin/python3: exec format error
exec /usr/local/bin/python3: exec format error
... (the same line repeats on every restart)
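An `exec format error` on the entrypoint binary usually means the image was built for a different CPU architecture than the host (for example, arm64 images running on an amd64 WSL 2 host). A quick way to check is sketched below; `timescale/promscale-demo-load` is one of the failing images from the output above, and the `docker` calls are guarded so the script degrades gracefully if the image isn't pulled:

```shell
#!/bin/sh
# Compare the host architecture with the platform the failing image
# declares. A mismatch (e.g. arm64 image on an x86_64 host) produces
# the "exec format error" seen in the container logs.
echo "host architecture: $(uname -m)"

if command -v docker >/dev/null 2>&1; then
  docker image inspect timescale/promscale-demo-load \
    --format 'image platform: {{.Os}}/{{.Architecture}}' \
    || echo "image not available locally; pull it first"
else
  echo "docker not available in this environment"
fi
```

If the two architectures disagree, the fix has to happen on the image-publishing side (e.g. a multi-platform build), not locally.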
I'm able to reproduce the issue and am working on a fix...
@martinothamar I managed to fix the issue. Before I publish the images to the timescale repo, do you mind verifying it on your end by using the images published in my repo?
To verify, all you have to do is change the following image repo names in docker-compose.yaml:

- timescale/promscale-demo-load -> vineeth97/promscale-demo-load
- timescale/promscale-demo-generator -> vineeth97/promscale-demo-generator
- timescale/promscale-demo-lower -> vineeth97/promscale-demo-lower
- timescale/promscale-demo-special -> vineeth97/promscale-demo-special
- timescale/promscale-demo-digit -> vineeth97/promscale-demo-digit
- timescale/promscale-demo-upper -> vineeth97/promscale-demo-upper
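Since all six renames only swap the repo prefix, a single `sed` substitution covers them. The snippet below previews the rewrite on one sample line; to apply it for real, run the same expression in place (a `.bak` backup is kept), assuming your compose file is named docker-compose.yaml:

```shell
# Preview the prefix swap on a sample compose line:
echo '    image: timescale/promscale-demo-load' \
  | sed 's|timescale/promscale-demo-|vineeth97/promscale-demo-|g'
# prints:     image: vineeth97/promscale-demo-load

# To edit the real file in place:
#   sed -i.bak 's|timescale/promscale-demo-|vineeth97/promscale-demo-|g' docker-compose.yaml
```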
Sure! I'll give it a go this evening.
The updated images seem to run fine, but I'm having a different issue now.
Docker DNS resolution doesn't seem to work in my setup. Not sure what's wrong...

Jaeger is crash-looping on this error, presumably because of the DNS issues:
{"level":"fatal","ts":1659648363.3808448,"caller":"./main.go:106","msg":"Failed to init storage factory","error":"grpc-plugin builder failed to create a store: error connecting to remote storage: context deadline exceeded","stacktrace":"main.main.func1\n\t./main.go:106\ngithub.com/spf13/cobra.(*Command).execute\n\tgithub.com/spf13/[email protected]/command.go:872\ngithub.com/spf13/cobra.(*Command).ExecuteC\n\tgithub.com/spf13/[email protected]/command.go:990\ngithub.com/spf13/cobra.(*Command).Execute\n\tgithub.com/spf13/[email protected]/command.go:918\nmain.main\n\t./main.go:166\nruntime.main\n\truntime/proc.go:250"}
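One way to check whether Docker's embedded DNS is working at all is to resolve a compose service name from a throwaway container attached to the same network. This is only a sketch: `promscale-demo_default` and `timescaledb` are guesses based on the container names above, so list the real network with `docker network ls` and use a service key from your docker-compose.yaml:

```shell
#!/bin/sh
# Resolve a compose service name from inside the compose network.
# Network and service names below are assumptions; substitute yours.
if command -v docker >/dev/null 2>&1; then
  docker run --rm --network promscale-demo_default busybox \
    nslookup timescaledb \
    || echo "lookup failed: Docker's embedded DNS is not answering on this network"
else
  echo "docker not available in this environment"
fi
```

If the lookup fails for every service, the problem is in the Docker/WSL networking layer rather than in this compose file.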
I don't think this is related to the docker-compose config, but rather something fishy with my Docker install.
When I try to run it from PowerShell, however, it doesn't work:
PS > docker-compose up -d
time="2022-08-04T23:16:07+02:00" level=warning msg="The \"PWD\" variable is not set. Defaulting to a blank string."
...
Error response from daemon: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "/prometheus.yml" to rootfs at "/etc/prometheus/prometheus.yml": mount /prometheus.yml:/etc/prometheus/prometheus.yml (via /proc/self/fd/6), flags: 0x5000: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type
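The blank-`PWD` warnings above likely explain the mount error. Assuming the compose file mounts the config as `${PWD}/prometheus.yml` (check the `volumes:` section), PowerShell does not export a `PWD` variable, so Compose substitutes an empty string, the host path collapses to `/prometheus.yml`, and Docker creates it as a directory, hence "trying to mount a directory onto a file". The interpolation can be reproduced with any unset variable (`DEMO_PWD` below is a stand-in):

```shell
#!/bin/sh
# Simulate Compose interpolating an unset variable into a volume path.
# DEMO_PWD stands in for the PWD that PowerShell never exports.
unset DEMO_PWD
echo "host path: ${DEMO_PWD}/prometheus.yml"
# prints: host path: /prometheus.yml
```

One workaround (untested here) is to set the variable before starting, e.g. `$env:PWD = (Get-Location).Path` in PowerShell, or simply to run `docker-compose up` from the WSL shell, where `PWD` is always set.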
I'm running the setup on a DigitalOcean Ubuntu machine. I couldn't reproduce the DNS issues you shared above; the connection to Promscale and the other components works fine for me.
Can you share more details on what's happening on your end?
Yeah, I'm pretty sure it's just my Docker setup that is bugged atm. I'll confirm tonight by trying this on another computer.
Hi @martinothamar
Did you manage to re-run the quick start from your end? :)
I verified it on an Ubuntu machine and it works, so I'm closing the issue. Feel free to create a new issue if you are seeing a similar problem.