
[Bug]: Reverse proxy with Caddy and Docker compose doesn't work

Open AnzeKop opened this issue 1 year ago • 13 comments

Description

Caddy reverse proxy doesn't work with Docker compose. The server builds, boots, and runs on port 3000 correctly, but I'm receiving 502 errors.

Minimal Reproduction (if possible, example repository)

Just put any Dockerfile without labels into a Caddy setup on Coolify; the reverse proxy doesn't work.

Here is my file

version: "3.8"

services:
  app:
    build: .
    command: npm run start:server
    ports:
      - "3000:3000"
    environment:
      - PORT=3000
      - NODE_ENV=production
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
      - MABI_TOKEN=${MABI_TOKEN}
      - HUBSPOT_API_KEY=${HUBSPOT_API_KEY}

  worker:
    build: .
    command: npm run start:worker
    environment:
      - NODE_ENV=production
      - DATABASE_URL=${DATABASE_URL}
      - REDIS_URL=${REDIS_URL}
      - JOB_CONCURRENCY=${JOB_CONCURRENCY:-1}
      - HUBSPOT_API_KEY=${HUBSPOT_API_KEY}

Exception or Error

No response

Version

v4.0.0-beta.323

Cloud?

  • [ ] Yes
  • [X] No

AnzeKop avatar Aug 15 '24 05:08 AnzeKop

Did you set any domain name for the app service?

andrasbacsai avatar Aug 15 '24 09:08 andrasbacsai

Yeah, I tried both the autogenerated domain from the wildcard domain and setting a custom one.

The labels it generated appeared correct for both Traefik and Caddy.

AnzeKop avatar Aug 15 '24 10:08 AnzeKop

I'm on v4.0.0-beta.323 of Coolify and I constantly use this feature... Caddy + Docker compose.

I've faced 502 errors before but have been able to resolve them. @AnzeKop could you confirm you're adding the port to the domain, e.g. https://mydomain.com:3000? This applies if you are adding/configuring the domain at the service level. If you're doing it as a dynamic config, the format is "container-name:3000", where container-name refers to the auto-generated container name created by Coolify.
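
For reference, a dynamic config entry would look roughly like this (a minimal Caddyfile sketch; mydomain.com and container-name are placeholders for your own values):

mydomain.com {
    # proxy to the app container's internal port on the shared Docker network
    reverse_proxy container-name:3000
}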

Lastly, it helps to confirm the app is actually running and ready to handle requests (healthy); otherwise you can also run into 502 errors.

If you're still encountering issues, you could share the "deployable compose file" contents and any logs from the Caddy proxy or the app itself (feel free to mask/hide the domain in your screenshots).
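
As a quick sanity check, you can also run a request from inside the proxy's network (a sketch; <container-name> is a placeholder, find yours with docker ps):

# exec into Coolify's Caddy proxy container and hit the app container directly
docker exec coolify-proxy wget -qO- http://<container-name>:3000/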

kmbuthia avatar Aug 21 '24 14:08 kmbuthia

This is my full docker compose (plus Dockerfile) with my Postgres setup. I tried it with different ports and Caddy setups, but still nothing works. The app runs healthy and it works, but the reverse proxy doesn't make it publicly accessible.

FROM node:20

WORKDIR /app

COPY package*.json ./
COPY packages/ ./packages/
COPY . .

RUN npm ci
RUN npm run build

And the compose file:

version: "3.8"

services:
  server:
    build: .
    image: midab-kova-hubspot-sync:latest
    command: npm run start:server
    ports:
      - "${PORT}:${PORT}"
    environment:
      - PORT=${PORT}
      - NODE_ENV=production
      - DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      - REDIS_URL=redis://redis:6379
      - MABI_TOKEN=${MABI_TOKEN}
      - HUBSPOT_API_KEY=${HUBSPOT_API_KEY}
    depends_on:
      - postgres
      - redis

  worker:
    image: midab-kova-hubspot-sync:latest
    command: npm run start:worker
    environment:
      - NODE_ENV=production
      - DATABASE_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@postgres:5432/${POSTGRES_DB}
      - REDIS_URL=redis://redis:6379
      - JOB_CONCURRENCY=${JOB_CONCURRENCY:-1}
      - HUBSPOT_API_KEY=${HUBSPOT_API_KEY}
    depends_on:
      - postgres
      - redis

  postgres:
    image: postgres:15
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:6
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:

AnzeKop avatar Aug 26 '24 11:08 AnzeKop

Hey @AnzeKop any chance you can show the labels being set by Coolify for this? You should be able to see them by clicking the "Show deployable compose" button when editing the docker-compose.yml via Coolify UI (screenshot of button attached)

Screenshot 2024-08-26 at 15 07 39

An example of what the labels could look like:

      - 'caddy_0.encode=zstd gzip'
      - 'caddy_0.handle_path.0_reverse_proxy={{upstreams 7000}}'
      - 'caddy_0.handle_path=/*'
      - caddy_0.header=-Server
      - 'caddy_0.try_files={path} /index.html /index.php'
      - 'caddy_0=https://yourdomainhere.com'
      - caddy_ingress_network=mc4wg4k

The key is this line: - 'caddy_0.handle_path.0_reverse_proxy={{upstreams 7000}}'. Make sure the value of the upstreams port matches what you have as ${PORT} in your envs and docker-compose.yml file (the number 7000 refers to the port the container is listening on, so in your case it should be 3000).
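
So for the compose above, with PORT=3000, the generated label should end up looking like this (a sketch of the expected value):

      - 'caddy_0.handle_path.0_reverse_proxy={{upstreams 3000}}'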

kmbuthia avatar Aug 26 '24 12:08 kmbuthia

Screenshot 2024-09-10 at 00 45 21

I am having the same issue when using a docker compose. The upstreams port is not showing after clicking "Reload Compose File".

njoguamos avatar Sep 09 '24 21:09 njoguamos

I set a Coolify magic environment variable and dropped my docker-compose port mapping.

Changed this:

services:
  myservice:
    image: <image> 
    environment:
      # my envs
    ports:
      - 8080:8080

to:

services:
  myservice:
    image: <image> 
    environment:
      - SERVICE_FQDN_MYSERVICE_8080
      # my envs

mikeU-1F45F avatar Nov 07 '24 16:11 mikeU-1F45F

Hey @mikeU-1F45F, I experienced the same problem and tried out your fix, but still no luck. For reference, here are my Dockerfile and docker-compose:

FROM python:3.10-bullseye

RUN apt-get update && apt-get install -y ffmpeg

WORKDIR /app

COPY packages/database packages/database
RUN pip install prisma
RUN prisma generate --schema="/app/packages/database/prisma/schema" --generator py

WORKDIR /app/apps/fastapi
COPY apps/fastapi/requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY apps/fastapi .

CMD ["uvicorn", "app.main:api_router", "--reload", "--host", "0.0.0.0", "--port", "8042"]
services:
  server:
    build:
      context: ./../..
      dockerfile: apps/fastapi/Dockerfile
    environment:
      - SERVICE_FQDN_FASTAPI=/
      - _APP_URL=$SERVICE_FQDN_FASTAPI
      - SERVICE_FQDN_FASTAPI_8042
    restart: always

I also tried other variants where I included the expose: field in both the Dockerfile and docker-compose, but nothing seemed to work. I read the Coolify magic variables docs extensively but couldn't figure out what's wrong.

For context I'm trying to build a monorepo that has a FastAPI server consuming a Prisma python client.

Thank you very much for your help.

To the Coolify maintainers: I can ssh into the instance and ping my server locally, which works. Like @njoguamos, I'm using Caddy and seeing no number beside {{upstreams}}. I also have other apps - a Next.js app and a Postgres db - that I can access via https and psql.

Here are the containers and their ports running in the instance:

docker ps
CONTAINER ID   IMAGE                                                               COMMAND                  CREATED              STATUS                 PORTS                                                                                                                       NAMES
c09932827e29   l804cgg8wgwooc88wokw8so8-server                                     "uvicorn app.main:ap…"   About a minute ago   Up About a minute      8042/tcp                                                                                                                    server-l804cgg8wgwooc88wokw8so8-064000780215
a7af8accc5c5   nginx:stable-alpine                                                 "/docker-entrypoint.…"   2 hours ago          Up 2 hours (healthy)   80/tcp, 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp                                                                           eo0kwk8wkko0so00c8s0ooss-proxy
6f025e3fbc53   postgres:16-alpine                                                  "docker-entrypoint.s…"   2 hours ago          Up 2 hours (healthy)   5432/tcp                                                                                                                    eo0kwk8wkko0so00c8s0ooss
57be739879f8   rkgocw8ccgkow8ggoskgg08g:1933a153fa632ef4d31c629d15e3b3a04c84e265   "/bin/bash -l -c 'np…"   4 hours ago          Up 4 hours             3000/tcp                                                                                                                    rkgocw8ccgkow8ggoskgg08g-024451920048
03c6309ab7e5   lucaslorentz/caddy-docker-proxy:2.8-alpine                          "/bin/caddy docker-p…"   4 hours ago          Up 4 hours             0.0.0.0:80->80/tcp, :::80->80/tcp, 0.0.0.0:443->443/tcp, :::443->443/tcp, 0.0.0.0:443->443/udp, :::443->443/udp, 2019/tcp   coolify-proxy
d4be47a8dfa1   ghcr.io/coollabsio/coolify:4.0.0-beta.370                           "/init"                  5 hours ago          Up 5 hours (healthy)   443/tcp, 8000/tcp, 9000/tcp, 0.0.0.0:8000->80/tcp, :::8000->80/tcp                                                          coolify
09bd817a927f   redis:7-alpine                                                      "docker-entrypoint.s…"   5 hours ago          Up 5 hours (healthy)   6379/tcp                                                                                                                    coolify-redis
d5d50dfe7590   ghcr.io/coollabsio/coolify-realtime:1.0.4                           "/bin/sh /soketi-ent…"   5 hours ago          Up 5 hours (healthy)   0.0.0.0:6001-6002->6001-6002/tcp, :::6001-6002->6001-6002/tcp                                                               coolify-realtime
26844c34e040   postgres:15-alpine                                                  "docker-entrypoint.s…"   5 hours ago          Up 5 hours (healthy)   5432/tcp

I want to say the Coolify magic environment variables will help, but I couldn't get them to work yet. Any help is much appreciated. Thank you very much.

scottsus avatar Nov 24 '24 06:11 scottsus

I finally solved it.

Unlike Nixpacks, for docker compose you actually need to append the port to the domain, as per @kmbuthia's words. Here's an example:

Image

Notice Domains -> Domains for server. Exposing ports in the Dockerfile or docker compose doesn't do much here, and the Coolify magic variables did not work for me.
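
For anyone else hitting this, the Domains field value ends up in this shape (placeholder domain; 8042 is the port my container listens on, per the Dockerfile above):

https://fastapi.example.com:8042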

This took me so long to debug but glad to have found the solution 🚀

scottsus avatar Nov 24 '24 08:11 scottsus

@scottsus you're a lifesaver! 🙌

Was banging my head against the wall with this Caddy + Docker compose issue. The port in domain thing (domain.com:8042) totally fixed it for me too.

Spent too long messing with Dockerfile ports and those Coolify vars... Your screenshot showing exactly where to add the port in the Coolify UI domain settings made it very clear.

Thanks for sharing the fix! 💯

alicantorun avatar Dec 05 '24 11:12 alicantorun

Yep, the magic variables don't really work at all; I can't expose it for some reason. I tried the domain approach and it finally works, but when the service listens only on localhost, it won't work.

This is my yaml

version: '3.8'
services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      - SERVICE_FQDN_APP_8000
    expose:
      - "8000"
    restart: unless-stopped
    entrypoint: ["python", "-u", "/rp_handler.py", "--rp_serve_api"]

Generated yaml

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    environment:
      COOLIFY_BRANCH: '"main"'
      COOLIFY_RESOURCE_UUID: ro8gw80ss4cc0w4s8gcck480
      COOLIFY_CONTAINER_NAME: app-ro8gw80ss4cc0w4s8gcck480-091717823643
      COOLIFY_URL: 'https://wockc00g84o4kwoksgksww8w.xxxx.com:8000'
      COOLIFY_FQDN: 'wockc00g84o4kwoksgksww8w.xxxx.com:8000'
    expose:
      - '8000'
    restart: unless-stopped
    entrypoint:
      - python
      - '-u'
      - /rp_handler.py
      - '--rp_serve_api'
    container_name: app-ro8gw80ss4cc0w4s8gcck480-091717823643
    labels:
      - coolify.managed=true
      - coolify.version=4.0.0-beta.393
      - coolify.applicationId=3
      - coolify.type=application
      - coolify.name=app-ro8gw80ss4cc0w4s8gcck480-091717823643
      - coolify.resourceName=i0ock4s0480swssw44s0400g
      - coolify.projectName=my-first-project
      - coolify.serviceName=i0ock4s0480swssw44s0400g
      - coolify.environmentName=production
      - coolify.pullRequestId=0
      - traefik.enable=true
      - traefik.http.middlewares.gzip.compress=true
      - traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https
      - traefik.http.routers.http-0-ro8gw80ss4cc0w4s8gcck480-app.entryPoints=http
      - traefik.http.routers.http-0-ro8gw80ss4cc0w4s8gcck480-app.middlewares=redirect-to-https
      - 'traefik.http.routers.http-0-ro8gw80ss4cc0w4s8gcck480-app.rule=Host(`wockc00g84o4kwoksgksww8w.oc`) && PathPrefix(`/`)'
      - traefik.http.routers.http-0-ro8gw80ss4cc0w4s8gcck480-app.service=http-0-ro8gw80ss4cc0w4s8gcck480-app
      - traefik.http.routers.https-0-ro8gw80ss4cc0w4s8gcck480-app.entryPoints=https
      - traefik.http.routers.https-0-ro8gw80ss4cc0w4s8gcck480-app.middlewares=gzip
      - 'traefik.http.routers.https-0-ro8gw80ss4cc0w4s8gcck480-app.rule=Host(`wockc00g84o4kwoksgksww8w.oc`) && PathPrefix(`/`)'
      - traefik.http.routers.https-0-ro8gw80ss4cc0w4s8gcck480-app.service=https-0-ro8gw80ss4cc0w4s8gcck480-app
      - traefik.http.routers.https-0-ro8gw80ss4cc0w4s8gcck480-app.tls.certresolver=letsencrypt
      - traefik.http.routers.https-0-ro8gw80ss4cc0w4s8gcck480-app.tls=true
      - traefik.http.services.http-0-ro8gw80ss4cc0w4s8gcck480-app.loadbalancer.server.port=8000
      - traefik.http.services.https-0-ro8gw80ss4cc0w4s8gcck480-app.loadbalancer.server.port=8000
      - 'caddy_0.encode=zstd gzip'
      - 'caddy_0.handle_path.0_reverse_proxy={{upstreams 8000}}'
      - 'caddy_0.handle_path=/*'
      - caddy_0.header=-Server
      - 'caddy_0.try_files={path} /index.html /index.php'
      - 'caddy_0=https://wockc00g84o4kwoksgksww8w.xxx.com'
      - caddy_ingress_network=ro8gw80ss4cc0w4s8gcck480
    networks:
      ro8gw80ss4cc0w4s8gcck480: null
volumes: {  }
networks:
  ro8gw80ss4cc0w4s8gcck480:
    name: ro8gw80ss4cc0w4s8gcck480
    external: true
configs: {  }
secrets: {  }

mai1015 avatar Feb 18 '25 18:02 mai1015

I have a similar issue and I'm starting to tear my hair out. Here is my setup:

Dockerfile

FROM oven/bun:latest as base

WORKDIR /app

# Install dependencies
COPY package.json bun.lock ./
RUN bun install --frozen-lockfile

# Copy source code
COPY . .

# Expose the port
EXPOSE 3000

# Set environment variables to ensure the app listens on all interfaces
ENV HOST=0.0.0.0
ENV PORT=3000

# Start the application
ENTRYPOINT ["bun", "run", "src/index.ts"]

Compose

version: '3.8'
services:
# other services....
  resize-it:
    image: 'ghcr.io/karnak19/resize-it:latest'
    expose:
      - '3000'
    environment:
      - SERVICE_FQDN_RESIZE_IT_3000
      - MINIO_ENDPOINT=minio
      - MINIO_PORT=9000
      - MINIO_USE_SSL=false
      - 'MINIO_ACCESS_KEY=${MINIO_ACCESS_KEY:-minioadmin}'
      - 'MINIO_SECRET_KEY=${MINIO_SECRET_KEY:-minioadmin}'
      - 'MINIO_BUCKET=${MINIO_BUCKET:-images}'
    depends_on:
      minio:
        condition: service_healthy
      dragonfly:
        condition: service_healthy
    restart: unless-stopped
    healthcheck:
      test:
        - CMD
        - curl
        - '-f'
        - 'http://localhost:3000/health'
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 15s

volumes:
  minio_data:
    name: resize-it-minio-data
  dragonfly_data:
    name: resize-it-dragonfly-data

Deployable compose

services:
 # other services...
  resize-it:
    image: 'ghcr.io/karnak19/resize-it:latest'
    expose:
      - '3000'
    environment:
      MINIO_ENDPOINT: minio
      MINIO_PORT: '9000'
      MINIO_USE_SSL: 'false'
      MINIO_ACCESS_KEY: '${MINIO_ACCESS_KEY:-minioadmin}'
      MINIO_SECRET_KEY: '${MINIO_SECRET_KEY:-minioadmin}'
      MINIO_BUCKET: '${MINIO_BUCKET:-images}'
      ENABLE_API_KEY_AUTH: 'false'
      CORS_ALLOWED_ORIGINS: '*'
      DRAGONFLY_HOST: dragonfly
      DRAGONFLY_PORT: '6379'
      DRAGONFLY_ENABLED: 'true'
      COOLIFY_RESOURCE_UUID: qk08scw0o0sgc80cwc4o0so4
      COOLIFY_CONTAINER_NAME: resize-it-qk08scw0o0sgc80cwc4o0so4
      COOLIFY_URL: 'https://assets.airsoftmarket.fr:3000'
      COOLIFY_FQDN: 'assets.airsoftmarket.fr:3000'
    depends_on:
      minio:
        condition: service_healthy
      dragonfly:
        condition: service_healthy
    restart: unless-stopped
    healthcheck:
      test:
        - CMD
        - curl
        - '-f'
        - 'http://localhost:3000/health'
      interval: 10s
      timeout: 5s
      retries: 3
      start_period: 15s
    container_name: resize-it-qk08scw0o0sgc80cwc4o0so4
    labels:
      - coolify.managed=true
      - coolify.version=4.0.0-beta.397
      - coolify.serviceId=49
      - coolify.type=service
      - coolify.name=resize-it-qk08scw0o0sgc80cwc4o0so4
      - coolify.resourceName=service-qk08scw0o0sgc80cwc4o0so4
      - coolify.projectName=airsoft-market
      - coolify.serviceName=resize-it
      - coolify.environmentName=production
      - coolify.pullRequestId=0
      - coolify.service.subId=154
      - coolify.service.subType=application
      - coolify.service.subName=resize-it
      - traefik.enable=true
      - traefik.http.middlewares.gzip.compress=true
      - traefik.http.middlewares.redirect-to-https.redirectscheme.scheme=https
      - traefik.http.routers.http-0-qk08scw0o0sgc80cwc4o0so4-resize-it.entryPoints=http
      - traefik.http.routers.http-0-qk08scw0o0sgc80cwc4o0so4-resize-it.middlewares=redirect-to-https
      - 'traefik.http.routers.http-0-qk08scw0o0sgc80cwc4o0so4-resize-it.rule=Host(`assets.airsoftmarket.fr`) && PathPrefix(`/`)'
      - traefik.http.routers.http-0-qk08scw0o0sgc80cwc4o0so4-resize-it.service=http-0-qk08scw0o0sgc80cwc4o0so4-resize-it
      - traefik.http.routers.https-0-qk08scw0o0sgc80cwc4o0so4-resize-it.entryPoints=https
      - traefik.http.routers.https-0-qk08scw0o0sgc80cwc4o0so4-resize-it.middlewares=gzip
      - 'traefik.http.routers.https-0-qk08scw0o0sgc80cwc4o0so4-resize-it.rule=Host(`assets.airsoftmarket.fr`) && PathPrefix(`/`)'
      - traefik.http.routers.https-0-qk08scw0o0sgc80cwc4o0so4-resize-it.service=https-0-qk08scw0o0sgc80cwc4o0so4-resize-it
      - traefik.http.routers.https-0-qk08scw0o0sgc80cwc4o0so4-resize-it.tls.certresolver=letsencrypt
      - traefik.http.routers.https-0-qk08scw0o0sgc80cwc4o0so4-resize-it.tls=true
      - traefik.http.services.http-0-qk08scw0o0sgc80cwc4o0so4-resize-it.loadbalancer.server.port=3000
      - traefik.http.services.https-0-qk08scw0o0sgc80cwc4o0so4-resize-it.loadbalancer.server.port=3000
      - 'caddy_0.encode=zstd gzip'
      - 'caddy_0.handle_path.0_reverse_proxy={{upstreams 3000}}'
      - 'caddy_0.handle_path=/*'
      - caddy_0.header=-Server
      - 'caddy_0.try_files={path} /index.html /index.php'
      - 'caddy_0=https://assets.airsoftmarket.fr'
      - caddy_ingress_network=qk08scw0o0sgc80cwc4o0so4
    networks:
      qk08scw0o0sgc80cwc4o0so4: null
volumes:
  minio_data:
    name: resize-it-minio-data
  dragonfly_data:
    name: resize-it-dragonfly-data
  qk08scw0o0sgc80cwc4o0so4_minio-data:
    name: qk08scw0o0sgc80cwc4o0so4_minio-data
  qk08scw0o0sgc80cwc4o0so4_dragonfly-data:
    name: qk08scw0o0sgc80cwc4o0so4_dragonfly-data
networks:
  qk08scw0o0sgc80cwc4o0so4:
    name: qk08scw0o0sgc80cwc4o0so4
    external: true
configs: {  }
secrets: {  }

Domain is properly set to the correct port:

Image

And still the 404 not found page... 😭

Did I miss something?

Karnak19 avatar Mar 02 '25 23:03 Karnak19

Same here, adding the internal port worked for me, for example Domains: https://www.url.de:3000,https://URL.de:3000

tuke307 avatar Mar 20 '25 14:03 tuke307

Did anyone find a fix? Adding the port is not working for me.

gauravnepal3 avatar Apr 03 '25 14:04 gauravnepal3

Any fix?

sostenesapollo avatar Sep 24 '25 04:09 sostenesapollo

We added a dedicated Bad Gateway Troubleshooting Guide to our documentation. Please give it a read when you face this error; it fixes the problem in most cases.

TL;DR:

  1. Make sure your container labels include the appropriate port configurations:
# Traefik
- traefik.http.services.<router_name>.loadbalancer.server.port=3000
# Caddy
- 'caddy_0.handle_path.0_reverse_proxy={{upstreams 3000}}'

This is automatically added when the port is appended to the domain in the domain field, as mentioned above. The SERVICE_URL env variable (previously SERVICE_FQDN) auto-fills the domain field with that port at the end. That won't make a difference if you had already filled the domain field with your own domain, so make sure to always double-check manually that you have the correct port configured.

  2. Make sure your app is listening on the appropriate interfaces.

Some apps only listen on localhost or 127.0.0.1. This means they won't accept requests from outside the internal network (i.e. from the internet). Often you can pass the hostname in an env variable or in the start command, e.g. in Next.js: next start -H 0.0.0.0
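
A couple of sketches based on the stacks seen in this thread (ports and module paths match the examples above; adapt them to your app):

# FastAPI / uvicorn: bind to all interfaces explicitly
uvicorn app.main:api_router --host 0.0.0.0 --port 8042

# Node: many frameworks read HOST/PORT from the environment
HOST=0.0.0.0 PORT=3000 npm run start:server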

Cinzya avatar Sep 24 '25 11:09 Cinzya