Cube Deployment extremely slow using Sample Production Docker Compose
Problem
I'm testing out a local deployment of the production-ready stack described at https://cube.dev/docs/product/deployment/core#configuration.
As soon as I run docker compose up, the router and worker containers all sit at 100% CPU. When I try a basic query of a single measure in the Playground, it takes about 10 seconds to return a result.
Previously, when using the basic dev-mode Docker Compose setup from https://cube.dev/docs/product/getting-started/core/create-a-project, everything ran very smoothly and query results came back quickly.
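For reference, that dev-mode setup was roughly the following (paraphrased from memory of the getting-started guide, with the database connection variables omitted, so treat it as approximate rather than exact):

services:
  cube:
    image: cubejs/cube:latest
    ports:
      - 4000:4000
    environment:
      # dev mode runs the API, refresh worker, and Cube Store in one container
      - CUBEJS_DEV_MODE=true
    volumes:
      - .:/cube/conf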
The production deployment's log also shows these errors repeated in large chunks from time to time; I'm not sure what the connection issue is or why the cache is being dropped.
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | Error: connect ECONNREFUSED 172.27.0.2:3030
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | at QueryQueue.parseResult (/cube/node_modules/@cubejs-backend/query-orchestrator/src/orchestrator/QueryQueue.js:384:13)
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | at QueryQueue.executeInQueue (/cube/node_modules/@cubejs-backend/query-orchestrator/src/orchestrator/QueryQueue.js:226:19)
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | at runMicrotasks (<anonymous>)
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | at processTicksAndRejections (node:internal/process/task_queues:96:5)
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | at async Promise.all (index 0)
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | Dropping Cache: scheduler-cbeb2091-5a21-4bfb-960f-00b75d1c8560
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | {
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | "cacheKey": [
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | "SELECT FLOOR((UNIX_TIMESTAMP()) / 10) as refresh_key",
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | []
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | ],
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | "spanId": "111cbd834d7196c3f528084dbb3a94e0"
2024-05-15 17:01:19 faro-cube-cube_refresh_worker-1 | }
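From what I can tell, 3030 is the default HTTP port Cube uses to talk to Cube Store, so my guess is that the refresh worker is intermittently failing to reach the router. One thing I'm considering is pinning that port explicitly on both sides; a minimal sketch, assuming CUBESTORE_HTTP_PORT and CUBEJS_CUBESTORE_PORT are the right variables for this (only the relevant fragments shown):

  cubestore_router:
    environment:
      # assumption: 3030 is Cube Store's default HTTP port; make it explicit
      - CUBESTORE_HTTP_PORT=3030
  cube_refresh_worker:
    environment:
      # assumption: point Cube at that same port on the router
      - CUBEJS_CUBESTORE_HOST=cubestore_router
      - CUBEJS_CUBESTORE_PORT=3030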
My docker-compose.yml
version: "2.2"
services:
  cube_api:
    restart: always
    image: cubejs/cube:latest
    ports:
      - 4000:4000
    environment:
      - CUBEJS_DB_TYPE=postgres
      - CUBEJS_DB_HOST=host.docker.internal
      - CUBEJS_DB_PORT=****
      - CUBEJS_DB_NAME=****
      - CUBEJS_DB_USER=****
      - CUBEJS_DB_PASS=****
      - CUBEJS_CUBESTORE_HOST=cubestore_router
      - CUBEJS_API_SECRET=****
    volumes:
      - .:/cube/conf
    depends_on:
      - cube_refresh_worker
      - cubestore_router
      - cubestore_worker_1
      - cubestore_worker_2

  cube_refresh_worker:
    restart: always
    image: cubejs/cube:latest
    environment:
      - CUBEJS_DB_TYPE=postgres
      - CUBEJS_DB_HOST=host.docker.internal
      - CUBEJS_DB_PORT=****
      - CUBEJS_DB_NAME=****
      - CUBEJS_DB_USER=****
      - CUBEJS_DB_PASS=****
      - CUBEJS_CUBESTORE_HOST=cubestore_router
      - CUBEJS_API_SECRET=****
      - CUBEJS_REFRESH_WORKER=true
    volumes:
      - .:/cube/conf

  cubestore_router:
    restart: always
    image: cubejs/cubestore:latest
    environment:
      - CUBESTORE_WORKERS=cubestore_worker_1:10001,cubestore_worker_2:10002
      - CUBESTORE_REMOTE_DIR=/cube/data
      - CUBESTORE_META_PORT=9999
      - CUBESTORE_SERVER_NAME=cubestore_router:9999
    volumes:
      - .cubestore:/cube/data

  cubestore_worker_1:
    restart: always
    image: cubejs/cubestore:latest
    environment:
      - CUBESTORE_WORKERS=cubestore_worker_1:10001,cubestore_worker_2:10002
      - CUBESTORE_SERVER_NAME=cubestore_worker_1:10001
      - CUBESTORE_WORKER_PORT=10001
      - CUBESTORE_REMOTE_DIR=/cube/data
      - CUBESTORE_META_ADDR=cubestore_router:9999
    volumes:
      - .cubestore:/cube/data
    depends_on:
      - cubestore_router

  cubestore_worker_2:
    restart: always
    image: cubejs/cubestore:latest
    environment:
      - CUBESTORE_WORKERS=cubestore_worker_1:10001,cubestore_worker_2:10002
      - CUBESTORE_SERVER_NAME=cubestore_worker_2:10002
      - CUBESTORE_WORKER_PORT=10002
      - CUBESTORE_REMOTE_DIR=/cube/data
      - CUBESTORE_META_ADDR=cubestore_router:9999
    volumes:
      - .cubestore:/cube/data
    depends_on:
      - cubestore_router
I'm trying to test the feasibility of running all the necessary Cube services on one machine, locally first and then on an Azure VM. But it seems to be really struggling for no apparent reason on my MacBook M2 Pro with 32 GB of memory.
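In case it's relevant, one thing I've also considered is capping CPU and memory per container so a single runaway service can't saturate the whole machine; a minimal sketch, assuming the compose 2.2 cpus/mem_limit keys are the right way to do this (the values are arbitrary placeholders, shown for one worker only):

  cubestore_worker_1:
    cpus: 2          # per-service CPU cap (arbitrary value)
    mem_limit: 4g    # per-service memory cap (arbitrary value)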