ql-pg-init does not work if database is already present
Describe the bug
The quantumleap-pg-init container with tag 0.8.0 exits with an error code and prevents QuantumLeap from starting (in a k8s deployment with the official QL Helm chart) if the desired database already exists.
0.7.6 worked fine in this setup.
To Reproduce
Steps to reproduce the behavior:
- run the quantumleap-pg-init:0.7.6 docker image -> database gets created
- run the quantumleap-pg-init:0.7.6 docker image again -> everything fine
- run the quantumleap-pg-init:0.8.0 docker image (same env) -> exits with error code 64 (see the repro sketch below)
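For illustration, a minimal repro sketch in Python that runs the image twice and compares exit codes. The image tag and env var names come from this report; the env values, networking flags, and any registry prefix are placeholders you would adapt to your setup:

```python
import subprocess

# Run the pg-init image twice against the same Postgres instance.
# Env var names match the transcript below; values are placeholders,
# and depending on your setup the image may need a registry prefix,
# different networking flags, or QL_DB_INIT_DIR set as well.
CMD = [
    "docker", "run", "--rm", "--network", "host",
    "-e", "PG_HOST=localhost",
    "-e", "PG_PASS=changeme",
    "-e", "QL_DB_PASS=changeme",
    "quantumleap-pg-init:0.8.0",
]

first = subprocess.run(CMD)   # creates the database (or skips if present)
second = subprocess.run(CMD)  # database now exists, bootstrap is skipped
# Expected: both exit 0. Observed with 0.8.0: the second run exits 64.
print(f"first run: {first.returncode}, second run: {second.returncode}")
```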
Expected behavior
The container should exit with status code 0.
Additional context
/ngsi-timeseries-api/timescale-container # python quantumleap-db-setup.py \
> --ql-db-pass "$QL_DB_PASS" \
> --ql-db-init-dir "$QL_DB_INIT_DIR" \
> --pg-host "$PG_HOST" \
> --pg-pass "$PG_PASS"
Bootstrapping QuantumLeap DB: quantumleap
Skipping bootstrap as DB already exists: quantumleap
/ngsi-timeseries-api/timescale-container # echo $?
64
Skipping is fine, but since skipping is not an error, the return code should be zero.
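For reference, here is a minimal sketch of what an idempotent bootstrap with a zero exit code on skip could look like. This is an illustrative standalone script using psycopg2, not the actual quantumleap-db-setup.py code; the superuser name and connection details are assumptions:

```python
import os
import sys

import psycopg2


def db_exists(conn, name: str) -> bool:
    """Return True if a database with the given name already exists."""
    with conn.cursor() as cur:
        cur.execute("SELECT 1 FROM pg_database WHERE datname = %s", (name,))
        return cur.fetchone() is not None


def main() -> int:
    db_name = "quantumleap"
    # Assumed connection details: superuser "postgres" on the default DB.
    conn = psycopg2.connect(
        host=os.environ["PG_HOST"],
        user="postgres",
        password=os.environ["PG_PASS"],
        dbname="postgres",
    )
    conn.autocommit = True  # CREATE DATABASE can't run inside a transaction
    try:
        print(f"Bootstrapping QuantumLeap DB: {db_name}")
        if db_exists(conn, db_name):
            print(f"Skipping bootstrap as DB already exists: {db_name}")
            return 0  # skipping is not an error, so exit with status 0
        # DDL identifiers can't be parameterized; fine for a fixed name.
        with conn.cursor() as cur:
            cur.execute(f"CREATE DATABASE {db_name}")
        return 0
    finally:
        conn.close()


if __name__ == "__main__":
    sys.exit(main())
```

As an aside, exit code 64 happens to be EX_USAGE in BSD's sysexits convention, which hints that the skip path is being routed through an error handler rather than a normal return.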
@modularTaco thanks so much for reporting this, we really appreciate the time you took to test and put together the info we need to fix it!
Hi, sorry for digging this up, but if I'm not mistaken this bug is still present in current releases. This commit broke the database initialization routine: https://github.com/orchestracities/ngsi-timeseries-api/commit/607f16d6109c9b0db5e5193de9345a7dcbb4d9f4#diff-139c92914e24029d758dad6fd602a084634cdb13318f1ef0d697af0bce0ea5d0
This makes versions newer than 0.7.6 unusable when combining QuantumLeap with TimescaleDB in a Kubernetes deployment. Are there any plans to fix this issue?
Hi @Panzki,
thanks so much for the analysis, much appreciated! Now I can see exactly what went wrong; thanks for fishing out that diff!
> Are there any plans to fix this issue?
Yep, we've got to fix this soon since we're using the latest QL version in a couple of K8s-based deployments ourselves, but I can't give you an ETA for the fix at the moment...