[Bug]: Supabase containers keep restarting due to authentication-related error
Description
When attempting to deploy Supabase using Coolify v4.0.0-beta.306, the process fails, and the logs of the containers indicate an authentication-related error.
Do note that it works fine on v4.0.0-beta.297, except that:
- The Minio Createbucket container fails to run and exits.
- Supabase Rest and Realtime Dev show running (unhealthy).
Minimal Reproduction (if possible, example repository)
- Upgrade to Coolify v4.0.0-beta.306.
- Attempt to deploy Supabase.
- Observe the failure in the deployment process. Several containers would keep restarting.
- Check logs of the failed containers.
Exception or Error
No response
Version
v4.0.0-beta.306
I have the same error, and a user on Discord (Moritz) also seems to have it.
Same here. It's been one thing or another with Supabase on the last few beta releases.
There was only one small change (since 297) to the template which I wouldn't have expected to cause the issue:
https://github.com/coollabsio/coolify/compare/v4.0.0-beta.297...v4.0.0-beta.306 (search for "supabase")
So I can only presume there's some issue with parsing/injecting env variables?
Same for me, supabase doesn't work due to the supabase-db service. The supabase_admin user won't be created, I think.
For me the supabase-db boots, but the supabase-analytics doesn't, and most of the containers depend on supabase-analytics. The logs say the password for supabase_admin is incorrect, which causes supabase-analytics to crash because the migrations can't run. That was my experience yesterday evening, at least.
If I were a betting man, I'd say it was this commit: https://github.com/coollabsio/coolify/commit/1266810c4d8edfd2522ba8a7ab703f522c0e34cd
For me the supabase-db boots, but the supabase-analytics doesn't, and most of the containers depend on supabase-analytics. The logs say the password for supabase_admin is incorrect, which causes supabase-analytics to crash because the migrations can't run. That was my experience yesterday evening, at least.
Yes, because the supabase_admin user won't be created. You can see this inside of the supabase-db logs
If I were a betting man, I'd say it was this commit: 1266810
No, it already didn't work on Monday.
Fair, I read through it in more detail, and if I were a betting man, I'd have lost money! Haha. I double-checked the envs passed to the containers and they're correct, so my hypothesis was incorrect.
I am also experiencing this.
I've figured out the issue, can replicate and mitigate.
Coolify is overriding the POSTGRES_HOST parameter with the POSTGRES_HOST environment variable, even though the compose file sets a hard-coded value for it.
You can resolve the issue by renaming POSTGRES_HOST to some other name like POSTGRES_HOSTNAME: change all instances of the POSTGRES_HOST parameter inside the docker-compose, then delete POSTGRES_HOST after saving.
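A minimal sketch of that rename, run here against throwaway demo fragments rather than the real files (paths and contents are illustrative; in Coolify you can also make these edits in the UI panels). Note that the literal POSTGRES_HOST=/var/run/postgresql on supabase-db must stay untouched; only the ${POSTGRES_HOST...} interpolations get renamed:

```shell
# Demo in a throwaway directory; in reality you'd edit the service's
# real .env and docker-compose.yml (or use the Coolify panels).
workdir=$(mktemp -d)

# Tiny stand-ins for the real files.
cat > "$workdir/.env" <<'EOF'
POSTGRES_HOST=supabase-db
POSTGRES_PORT=5432
EOF
cat > "$workdir/docker-compose.yml" <<'EOF'
      - POSTGRES_HOST=/var/run/postgresql
      - 'DB_HOSTNAME=${POSTGRES_HOST:-supabase-db}'
EOF

# Rename only the ${POSTGRES_HOST...} interpolations; the hard-coded
# POSTGRES_HOST=/var/run/postgresql line is deliberately left alone.
sed -i 's/${POSTGRES_HOST\([:}]\)/${POSTGRES_HOSTNAME\1/g' "$workdir/docker-compose.yml"

# In the .env, rename the variable itself.
sed -i 's/^POSTGRES_HOST=/POSTGRES_HOSTNAME=/' "$workdir/.env"

cat "$workdir/.env" "$workdir/docker-compose.yml"
```

After checking the demo output looks right, apply the same two `sed` lines (or manual edits) to the real files, then save and redeploy.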
Issue:
Postgres runs the init scripts before the network connection is ready by connecting directly to the socket, which is why the POSTGRES_HOST=/var/run/postgresql env variable on supabase-db is set to a path.
When the env is incorrectly overridden, the value becomes supabase-db, which resolves via the Docker network; that network isn't initialised yet at that point, and local root access can't be used either.
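The distinction can be illustrated by how libpq-style tools interpret a host value: anything starting with "/" is treated as a Unix-socket directory (local, available before networking is up), anything else as a TCP hostname. A toy sketch, where `connect_target` is a hypothetical helper that just mirrors that rule:

```shell
# Hypothetical helper mirroring libpq's host handling: a leading "/"
# means a Unix-socket directory, anything else is a TCP hostname.
connect_target() {
  case "$1" in
    /*) echo "unix socket dir: $1" ;;  # local access, works before the network is up
    *)  echo "tcp host: $1" ;;         # needs the docker network (and DNS) ready
  esac
}

connect_target /var/run/postgresql   # the intended, hard-coded value
connect_target supabase-db           # the incorrectly injected value
```

With the override in place, the init/migration scripts go down the TCP path at a moment when only the socket path can work.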
I might still be on to win my bet.
Refs:
https://github.com/docker-library/postgres/issues/941 https://raw.githubusercontent.com/docker-library/postgres/master/15/bullseye/docker-entrypoint.sh https://github.com/supabase/postgres/blob/develop/migrations/db/migrate.sh
I've figured out the issue, can replicate and mitigate.
@Mortalife seems you are right. I was guessing maybe other services were loading sooner than the DB, but I believe it's an environment variable conflict as you mentioned. I hope Supabase makes its deployment process more robust in the future; it's a little tricky now.
How would they sell their cloud services if self-hosting were that easy? It's just marketing, and it has to be somehow possible. But they don't want the masses to self-host supabase..
I am having the same issues too, even after removing the analytics service.
Error: FATAL: 28P01: password authentication failed for user "supabase_admin"
@Mortalife solution works! Thank you!
@Mortalife This worked for me as well, thank you for the fix!
The fix works for me as well, but "Minio Createbucket" does not start. Did that work for you with this fix, @Torwent & @agalev?
It didn't start before this issue.
To clarify, it shouldn't keep running. It runs once to ensure the minio server has the default bucket created that's used by the storage server.
https://github.com/coollabsio/coolify/blob/main/templates/compose/supabase.yaml#L1067-L1071
It creates the stub bucket and then exits. It has restart set to "no".
The stub bucket is used by the storage server here: https://github.com/coollabsio/coolify/blob/main/templates/compose/supabase.yaml#L1104
That workaround seemed to work for me, however supabase-rest is still unhealthy and in the API Docs of the dashboard, says public isn't accessible ...
I don't experience that problem. I would double check you've replaced all of the POSTGRES_HOST variable instances and there aren't any extra spaces etc where there shouldn't be. If it still remains, it might be worth removing the supabase db volume and restarting.
@Mortalife do you mind making that a pull request? I mean, is there any other configuration that must be considered, or was just this hard-coded POSTGRES_HOST in postgresql the problem? If so, maybe we can make a PR and mark this issue as fixed?
I think I'd rather the env variables be correctly parsed than put up a PR for this workaround. PRs don't seem to be approved with much velocity, so it wouldn't change things immediately regardless.
I understand. Personally I had much difficulty deploying supabase instances as separate projects; coolify at least made it easy. And on the other hand, supabase is also under development, so maybe we have a lot of breaking changes coming forward.
The fix works for me as well, but "Minio Createbucket" does not start. Did that work for you with this fix, @Torwent & @agalev?
I'm pretty sure that's not meant to be running. It runs once, the very first time you start things up, to create the MinIO bucket, and never runs again AFAIK.
Hello @Mortalife and sorry to bother you.
I just ran into this issue and found about your solution.
Could you please clarify a bit what needs to be changed? I don't understand where and what values are causing the issue.
As I understood it, in the .env file I need to add a new parameter called POSTGRES_HOSTNAME with supabase-db as its value, and replace all iterations of POSTGRES_HOST in the docker-compose .yml file with POSTGRES_HOSTNAME? Am I right, or did I miss the point?
Correct, and then once you've done that, remove POSTGRES_HOST from the .env then restart.
Sorry again, this is surely an error between the chair and the keyboard, but my analytics service is still failing to start, due to that password authentication failed for user "supabase_admin" error from the supabase-analytics service.
Here is my docker-compose.yml file :
services:
supabase-kong:
image: 'kong:2.8.1'
entrypoint: 'bash -c ''eval "echo \"$$(cat ~/temp.yml)\"" > ~/kong.yml && /docker-entrypoint.sh kong docker-start'''
depends_on:
supabase-analytics:
condition: service_healthy
environment:
- SERVICE_FQDN_SUPABASEKONG
- 'JWT_SECRET=${SERVICE_PASSWORD_JWT}'
- KONG_DATABASE=off
- KONG_DECLARATIVE_CONFIG=/home/kong/kong.yml
- 'KONG_DNS_ORDER=LAST,A,CNAME'
- 'KONG_PLUGINS=request-transformer,cors,key-auth,acl,basic-auth'
- KONG_NGINX_PROXY_PROXY_BUFFER_SIZE=160k
- 'KONG_NGINX_PROXY_PROXY_BUFFERS=64 160k'
- 'SUPABASE_ANON_KEY=${SERVICE_SUPABASEANON_KEY}'
- 'SUPABASE_SERVICE_KEY=${SERVICE_SUPABASESERVICE_KEY}'
- 'DASHBOARD_USERNAME=${SERVICE_USER_ADMIN}'
- 'DASHBOARD_PASSWORD=${SERVICE_PASSWORD_ADMIN}'
volumes:
-
type: bind
source: ./volumes/api/kong.yml
target: /home/kong/temp.yml
supabase-studio:
image: 'supabase/studio:20240514-6f5cabd'
healthcheck:
test:
- CMD
- node
- '-e'
- "require('http').get('http://127.0.0.1:3000/api/profile', (r) => {if (r.statusCode !== 200) process.exit(1); else process.exit(0); }).on('error', () => process.exit(1))"
timeout: 5s
interval: 5s
retries: 3
depends_on:
supabase-analytics:
condition: service_healthy
environment:
- HOSTNAME=0.0.0.0
- 'STUDIO_PG_META_URL=http://supabase-meta:8080'
- 'POSTGRES_PASSWORD=${SERVICE_PASSWORD_POSTGRES}'
- 'DEFAULT_ORGANIZATION_NAME=${STUDIO_DEFAULT_ORGANIZATION:-Default Organization}'
- 'DEFAULT_PROJECT_NAME=${STUDIO_DEFAULT_PROJECT:-Default Project}'
- 'SUPABASE_URL=http://supabase-kong:8000'
- 'SUPABASE_PUBLIC_URL=${SERVICE_FQDN_SUPABASEKONG}'
- 'SUPABASE_ANON_KEY=${SERVICE_SUPABASEANON_KEY}'
- 'SUPABASE_SERVICE_KEY=${SERVICE_SUPABASESERVICE_KEY}'
- 'AUTH_JWT_SECRET=${SERVICE_PASSWORD_JWT}'
- 'LOGFLARE_API_KEY=${SERVICE_PASSWORD_LOGFLARE}'
- 'LOGFLARE_URL=http://supabase-analytics:4000'
- NEXT_PUBLIC_ENABLE_LOGS=true
- NEXT_ANALYTICS_BACKEND_PROVIDER=postgres
supabase-db:
image: 'supabase/postgres:15.1.1.41'
healthcheck:
test: 'pg_isready -U postgres -h 127.0.0.1'
interval: 5s
timeout: 5s
retries: 10
depends_on:
supabase-vector:
condition: service_healthy
command:
- postgres
- '-c'
- config_file=/etc/postgresql/postgresql.conf
- '-c'
- log_min_messages=fatal
restart: unless-stopped
environment:
- POSTGRES_HOST=/var/run/postgresql
- 'PGPORT=${POSTGRES_PORT:-5432}'
- 'POSTGRES_PORT=${POSTGRES_PORT:-5432}'
- 'PGPASSWORD=${SERVICE_PASSWORD_POSTGRES}'
- 'POSTGRES_PASSWORD=${SERVICE_PASSWORD_POSTGRES}'
- 'PGDATABASE=${POSTGRES_DB:-postgres}'
- 'POSTGRES_DB=${POSTGRES_DB:-postgres}'
- 'JWT_SECRET=${SERVICE_PASSWORD_JWT}'
- 'JWT_EXP=${JWT_EXPIRY:-3600}'
volumes:
- 'supabase-db-data:/var/lib/postgresql/data'
-
type: bind
source: ./volumes/db/realtime.sql
target: /docker-entrypoint-initdb.d/migrations/99-realtime.sql
-
type: bind
source: ./volumes/db/webhooks.sql
target: /docker-entrypoint-initdb.d/init-scripts/98-webhooks.sql
-
type: bind
source: ./volumes/db/roles.sql
target: /docker-entrypoint-initdb.d/init-scripts/99-roles.sql
-
type: bind
source: ./volumes/db/jwt.sql
target: /docker-entrypoint-initdb.d/init-scripts/99-jwt.sql
-
type: bind
source: ./volumes/db/logs.sql
target: /docker-entrypoint-initdb.d/migrations/99-logs.sql
- 'supabase-db-config:/etc/postgresql-custom'
supabase-analytics:
image: 'supabase/logflare:1.4.0'
healthcheck:
test:
- CMD
- curl
- 'http://127.0.0.1:4000/health'
timeout: 5s
interval: 5s
retries: 10
restart: unless-stopped
depends_on:
supabase-db:
condition: service_healthy
environment:
- LOGFLARE_NODE_HOST=127.0.0.1
- DB_USERNAME=supabase_admin
- 'DB_DATABASE=${POSTGRES_DB:-postgres}'
- 'DB_HOSTNAME=${POSTGRES_HOSTNAME:-supabase-db}'
- 'DB_PORT=${POSTGRES_PORT:-5432}'
- 'DB_PASSWORD=${SERVICE_PASSWORD_POSTGRES}'
- DB_SCHEMA=_analytics
- 'LOGFLARE_API_KEY=${SERVICE_PASSWORD_LOGFLARE}'
- LOGFLARE_SINGLE_TENANT=true
- LOGFLARE_SINGLE_TENANT_MODE=true
- LOGFLARE_SUPABASE_MODE=true
- LOGFLARE_MIN_CLUSTER_SIZE=1
- 'POSTGRES_BACKEND_URL=postgresql://supabase_admin:${SERVICE_PASSWORD_POSTGRES}@${POSTGRES_HOSTNAME:-supabase-db}:${POSTGRES_PORT:-5432}/${POSTGRES_DB:-postgres}'
- POSTGRES_BACKEND_SCHEMA=_analytics
- LOGFLARE_FEATURE_FLAG_OVERRIDE=multibackend=true
And here is my .env file :
ADDITIONAL_REDIRECT_URLS=
API_EXTERNAL_URL=http://supabase-kong:8000
DISABLE_SIGNUP=false
ENABLE_ANONYMOUS_USERS=false
ENABLE_EMAIL_AUTOCONFIRM=false
ENABLE_EMAIL_SIGNUP=true
ENABLE_PHONE_AUTOCONFIRM=true
ENABLE_PHONE_SIGNUP=true
FUNCTIONS_VERIFY_JWT=false
IMGPROXY_ENABLE_WEBP_DETECTION=true
JWT_EXPIRY=3600
MAILER_SUBJECTS_CONFIRMATION=
MAILER_SUBJECTS_EMAIL_CHANGE=
MAILER_SUBJECTS_INVITE=
MAILER_SUBJECTS_MAGIC_LINK=
MAILER_SUBJECTS_RECOVERY=
MAILER_TEMPLATES_CONFIRMATION=
MAILER_TEMPLATES_EMAIL_CHANGE=
MAILER_TEMPLATES_INVITE=
MAILER_TEMPLATES_MAGIC_LINK=
MAILER_TEMPLATES_RECOVERY=
MAILER_URLPATHS_CONFIRMATION=/auth/v1/verify
MAILER_URLPATHS_EMAIL_CHANGE=/auth/v1/verify
MAILER_URLPATHS_INVITE=/auth/v1/verify
MAILER_URLPATHS_RECOVERY=/auth/v1/verify
PGRST_DB_SCHEMAS=public
POSTGRES_DB=postgres
POSTGRES_HOSTNAME=supabase-db
POSTGRES_PORT=5432
SECRET_PASSWORD_REALTIME=
SERVICE_FQDN_SUPABASEKONG=http://supabasekong-d4kgsgk.xxx.xxx.xxx.xxx.sslip.io/
SMTP_ADMIN_EMAIL=
SMTP_HOST=
SMTP_PASS=
SMTP_PORT=587
SMTP_SENDER_NAME=
SMTP_USER=
STUDIO_DEFAULT_ORGANIZATION=Default Organization
STUDIO_DEFAULT_PROJECT=Default Project
As you recommended, I removed POSTGRES_HOST from the .env file and added POSTGRES_HOSTNAME, and I changed the uses of POSTGRES_HOST in docker-compose.yml to POSTGRES_HOSTNAME.
Also, here is what I got when I tried to manually log into postgres inside the supabase-db service :
$ psql -U supabase_admin -W
Password:
psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: password authentication failed for user "supabase_admin"
@deozza Try stopping the stack, removing the associated _supabase-db-data volume and restart the stack.
You can find the volume by running docker volume ls, looking for the one named <the_random_stack_string>_supabase-db-data, and then running docker volume rm <name>.
For example, my random stack string (the one that appears before my URL etc.) is rwkg84s, so my volume is rwkg84s_supabase-db-data and I would run docker volume rm rwkg84s_supabase-db-data
Once you've done that you should be able to start the service again and hopefully the migrations will run correctly.
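The lookup step can be sketched as follows; the real docker commands are shown in comments, and the name filter itself is demonstrated on sample volume names so the snippet runs anywhere. Removing the volume is destructive: the database contents are deleted, which is what lets the init scripts and migrations run fresh.

```shell
# With docker available, the real commands would be:
#   docker volume ls --format '{{.Name}}' | grep '_supabase-db-data$'
#   docker volume rm <the match>
# Here the filter is demonstrated on sample names (prefix is the
# example stack string from above):
volumes='rwkg84s_supabase-db-data
rwkg84s_supabase-db-config
unrelated_data'

# Match only the db-data volume, not db-config or anything else.
target=$(printf '%s\n' "$volumes" | grep '_supabase-db-data$')
echo "would remove volume: $target"
```

The `$` anchor in the grep pattern matters: it keeps `_supabase-db-config` (which you want to keep) out of the match.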
This worked perfectly for me. For future reference, here are the steps I did to resolve it:
- first deploy the stack via coolify
- wait for the deployment to fail
- stop all containers
- in the environment variable panel, or directly in the .env file on the server, replace the POSTGRES_HOST variable with POSTGRES_HOSTNAME
- in the service stack panel, click on edit compose file, or directly in the docker-compose.yml file on the server, replace all uses of the POSTGRES_HOST variable with POSTGRES_HOSTNAME
- on the server, use docker compose down --volumes to remove the old db config
- deploy the stack again
- it should work
That workaround seemed to work for me, however supabase-rest is still unhealthy and in the API Docs of the dashboard, says public isn't accessible ...
I don't experience that problem. I would double check you've replaced all of the POSTGRES_HOST variable instances and there aren't any extra spaces etc where there shouldn't be. If it still remains, it might be worth removing the supabase db volume and restarting.
Tried removing the volumes after double-checking the host values... Rest is still listed as unhealthy, and it still says the public schema is not available for me.