[Bug]: duplicate key value violates unique constraint "unique_constraints_pkey" on configuring step
Preflight Checklist
- [x] I could not find a solution in the documentation, the existing issues or discussions
- [x] I have joined the ZITADEL chat
Environment
Self-hosted
Version
2.64.1
Database
PostgreSQL
Database Version
16-alpine
Describe the problem caused by this bug
Self-hosted ZITADEL deployed with Docker randomly throws a migration exception during the configuring step:
msg="migration failed" caller="/home/runner/work/zitadel/zitadel/cmd/setup/setup.go:255" error="ID=V3-DKcYh Message=Errors.Instance.Domain.AlreadyExists Parent=(ERROR: duplicate key value violates unique constraint \"unique_constraints_pkey\" (SQLSTATE 23505))" name=03_default_instance
Then the ZITADEL container keeps restarting.
After several attempts the container configures successfully.
To reproduce
- Run a Docker stack with ZITADEL and PostgreSQL services (docker compose up -d; configuration is below);
- Check the ZITADEL container logs.
Screenshots
No response
Expected behavior
ZITADEL is configured and successfully running.
Operating System
Device: VPS x86_64; OS: Ubuntu 22.04.5 LTS; Docker version 27.3.1, build ce12230
Relevant Configuration
docker-compose.yaml:
zitadel:
  user: "${UID:-1000}"
  restart: 'always'
  networks:
    - 'zitadel'
  image: 'ghcr.io/zitadel/zitadel:v2.64.1'
  command: 'start-from-init --masterkeyFromEnv'
  env_file:
    - ./zitadel.env
  depends_on:
    db:
      condition: 'service_healthy'
  ports:
    - '8080:8080'
  volumes:
    - ./machinekey:/machinekey

db:
  restart: 'always'
  image: postgres:16-alpine
  env_file:
    - ./db.env
  networks:
    - 'zitadel'
  healthcheck:
    test: ["CMD-SHELL", "pg_isready", "-d", "zitadel", "-U", "postgres", "-c", "log_statement=all"]
    interval: '10s'
    timeout: '30s'
    retries: 5
    start_period: '20s'
  volumes:
    - db-data:/var/lib/postgresql/data

volumes:
  db-data:

networks:
  zitadel:
zitadel.env:
ZITADEL_LOG_LEVEL=debug
ZITADEL_MASTERKEY=hidden_key
ZITADEL_DATABASE_POSTGRES_HOST: db
ZITADEL_DATABASE_POSTGRES_PORT: 5432
ZITADEL_DATABASE_POSTGRES_DATABASE: zitadel
ZITADEL_DATABASE_POSTGRES_USER_USERNAME: user
ZITADEL_DATABASE_POSTGRES_USER_PASSWORD: password
ZITADEL_DATABASE_POSTGRES_USER_SSL_MODE: disable
ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME: postgres
ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD: postgres
ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_MODE: disable
ZITADEL_FIRSTINSTANCE_MACHINEKEYPATH: /machinekey/zitadel-admin-sa.json
ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_USERNAME: zitadel-admin-sa
ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_NAME: Admin
ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINEKEY_TYPE: 1
ZITADEL_EXTERNALSECURE=true
ZITADEL_TLS_ENABLED="false"
ZITADEL_EXTERNALPORT=443
ZITADEL_EXTERNALDOMAIN=auth.domain.com
db.env contains user and password environment variables.
Additional Context
No response
FYI - Seeing the same behaviour here locally with the sample docker-compose file as well, on v2.65.1.
Note - if you comment out the ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE* env properties it will allow the instance to start normally, with the caveat that you don't get a machine account to work with to administer the instance.
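For reference, a minimal sketch of that workaround, assuming the variables live in a zitadel.env file like the one above (the sed pattern is illustrative; adjust it to your file):
# Comment out the first-instance machine-account variables, then recreate the container.
sed -i 's/^ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE/#&/' zitadel.env
docker compose up -d --force-recreate zitadel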
Discovered that the issue is that zitadel cannot write the json file to the volume. Look for "permission denied" type errors in the logs and ensure the "machinekey" directory is writable by the container.
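A quick way to check both conditions (a sketch, assuming the compose file above with the ./machinekey bind mount and the container running as UID 1000):
# The numeric owner shown by ls -ln must match the UID the container runs as.
ls -ln ./machinekey
# Scan the container logs for the underlying write failure.
docker compose logs zitadel | grep -i 'permission denied'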
Same error but no "permission denied" errors here...
full logs
time="2024-12-23T19:00:22Z" level=info msg="initialization started" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/init.go:75"
2024-12-23T19:00:22.151331928Z time="2024-12-23T19:00:22Z" level=info msg="verify user" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_user.go:40" username=zitadel
2024-12-23T19:00:22.156973943Z time="2024-12-23T19:00:22Z" level=info msg="verify database" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_database.go:40" database=zitadel
2024-12-23T19:00:22.163850982Z time="2024-12-23T19:00:22Z" level=info msg="verify grant" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_grant.go:35" database=zitadel user=zitadel
2024-12-23T19:00:22.177598845Z time="2024-12-23T19:00:22Z" level=info msg="verify zitadel" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:86" database=zitadel
2024-12-23T19:00:22.194478863Z time="2024-12-23T19:00:22Z" level=info msg="verify system" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:47"
2024-12-23T19:00:22.196201108Z time="2024-12-23T19:00:22Z" level=info msg="verify encryption keys" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:52"
2024-12-23T19:00:22.196687142Z time="2024-12-23T19:00:22Z" level=info msg="verify projections" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:57"
2024-12-23T19:00:22.198172901Z time="2024-12-23T19:00:22Z" level=info msg="verify eventstore" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:62"
2024-12-23T19:00:22.199421195Z time="2024-12-23T19:00:22Z" level=info msg="verify events tables" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:67"
2024-12-23T19:00:22.204445523Z time="2024-12-23T19:00:22Z" level=info msg="verify system sequence" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:72"
2024-12-23T19:00:22.204609234Z time="2024-12-23T19:00:22Z" level=info msg="verify unique constraints" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:77"
2024-12-23T19:00:22.329395385Z time="2024-12-23T19:00:22Z" level=info msg="setup started" caller="/home/runner/work/zitadel/zitadel/cmd/setup/setup.go:101"
2024-12-23T19:00:22.370473934Z time="2024-12-23T19:00:22Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:43" name=14_events_push
2024-12-23T19:00:22.387093090Z time="2024-12-23T19:00:22Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:43" name=40_init_push_func
2024-12-23T19:00:22.396695662Z time="2024-12-23T19:00:22Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:43" name=01_tables
2024-12-23T19:00:22.407545900Z time="2024-12-23T19:00:22Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:43" name=02_assets
2024-12-23T19:00:22.423560849Z time="2024-12-23T19:00:22Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:43" name=28_add_search_table
2024-12-23T19:00:22.438734615Z time="2024-12-23T19:00:22Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:43" name=31_add_aggregate_index_to_fields
2024-12-23T19:00:22.449343408Z time="2024-12-23T19:00:22Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:43" name=03_default_instance
2024-12-23T19:00:22.479301778Z time="2024-12-23T19:00:22Z" level=info msg="starting migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:66" name=03_default_instance
2024-12-23T19:00:24.593466443Z time="2024-12-23T19:00:24Z" level=warning msg="add unique constraint failed" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/v3/unique_constraints.go:78" error="ERROR: duplicate key value violates unique constraint \"unique_constraints_pkey\" (SQLSTATE 23505)"
2024-12-23T19:00:24.593647544Z time="2024-12-23T19:00:24Z" level=error msg="migration failed" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:68" error="ID=V3-DKcYh Message=Errors.Instance.Domain.AlreadyExists Parent=(ERROR: duplicate key value violates unique constraint \"unique_constraints_pkey\" (SQLSTATE 23505))" name=03_default_instance
2024-12-23T19:00:24.615924779Z time="2024-12-23T19:00:24Z" level=fatal msg="migration failed" caller="/home/runner/work/zitadel/zitadel/cmd/setup/setup.go:263" error="ID=V3-DKcYh Message=Errors.Instance.Domain.AlreadyExists Parent=(ERROR: duplicate key value violates unique constraint \"unique_constraints_pkey\" (SQLSTATE 23505))" name=03_default_instance
Ok, stupid me... I had to remove the db volume to ensure the migrations were reset. After that (and after setting the permissions correctly), it works.
I did remove the volumes and recreate the entire stack several times; it did not help. There are no permission issues: no corresponding records in the logs. Also, I was able to launch the stack successfully without changing anything.
Same here
ERROR: duplicate key value violates unique constraint "events2_pkey" (SQLSTATE 23505)
This happens with version 2.66.1 (Cloud Version)
The same version (2.66.1) in local Docker (fresh setup) works as expected. It must be due to some data constellation on the cloud version (zitadel.cloud).
@bullbulk I am unable to reproduce your issue. I tried using the following config:
docker-compose.yml
services:
  zitadel:
    user: "${UID:-1000}"
    restart: 'always'
    networks:
      - 'zitadel'
    image: 'ghcr.io/zitadel/zitadel:v2.64.1'
    command: 'start-from-init --masterkey "MasterkeyNeedsToHave32Characters" --tlsMode disabled'
    env_file:
      - ./zitadel.env
    depends_on:
      db:
        condition: 'service_healthy'
    ports:
      - '8080:8080'
    volumes:
      - ./machinekey:/machinekey

  db:
    restart: 'always'
    image: postgres:16-alpine
    env_file:
      - ./db.env
    networks:
      - 'zitadel'
    healthcheck:
      test: ["CMD-SHELL", "pg_isready", "-d", "zitadel", "-U", "postgres", "-c", "log_statement=all"]
      interval: '10s'
      timeout: '30s'
      retries: 5
      start_period: '20s'
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:

networks:
  zitadel:
zitadel.env
ZITADEL_LOG_LEVEL=debug
ZITADEL_MASTERKEY=hidden_key
ZITADEL_DATABASE_POSTGRES_HOST: db
ZITADEL_DATABASE_POSTGRES_PORT: 5432
ZITADEL_DATABASE_POSTGRES_DATABASE: zitadel
ZITADEL_DATABASE_POSTGRES_USER_USERNAME: user
ZITADEL_DATABASE_POSTGRES_USER_PASSWORD: password
ZITADEL_DATABASE_POSTGRES_USER_SSL_MODE: disable
ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME: postgres
ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD: postgres
ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_MODE: disable
ZITADEL_FIRSTINSTANCE_MACHINEKEYPATH: /machinekey/zitadel-admin-sa.json
ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_USERNAME: zitadel-admin-sa
ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_NAME: Admin
ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINEKEY_TYPE: 1
ZITADEL_EXTERNALSECURE=true
ZITADEL_TLS_ENABLED="false"
ZITADEL_EXTERNALPORT=443
ZITADEL_EXTERNALDOMAIN=auth.domain.com
db.env:
PGUSER=postgres
POSTGRES_PASSWORD=postgres
Please can you try deleting your db volume, retrying with the config above, and confirming whether you still have the same issue?
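For completeness, one way to do that reset (assuming the stack above; down -v removes the named db-data volume so the migrations start from scratch):
docker compose down -v
docker compose up -d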
I had the same issue and reverted to my usual suspect... "permissions".
The compose file has a volume for the machinekey, and this folder is created with the incorrect permissions if you just use docker compose up. In my case it was owned by root, but the user the container was running as was my own user (UID 1000).
I gave this folder write access to my user and restarted the service and then it was able to create the zitadel-admin-sa.json file in the folder.
If the folder already exists, you can change the permission like this.
chown 1000:1000 machinekey
Thereafter the service started normally and I could reach the initial page at /ui/console/ to get into the system.
I can confirm that the error occurs if there are permission mismatches as described by @LeonvanHeerden, and I would guess that it also occurs when the file just does not exist.
Maybe an extra check could be added for this case and the error message improved?
I'm having the same issue with a fresh installation exactly as outlined in the documentation: https://zitadel.com/docs/self-hosting/deploy/compose
Why is this still a problem almost a year later?
i.e.:
# Download the docker compose example configuration.
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/self-hosting/deploy/docker-compose.yaml
# Make sure you have the latest image versions
docker compose pull
# Run the PostgreSQL database, the Zitadel API and the Zitadel login.
docker compose up
Just an update: I had to do the following to fix the default docker compose configuration as described in the documentation:
- I had to comment out all of the ZITADEL_FIRSTINSTANCE_* environment variables to fix the migration issues.
- I had to add a volume for postgres to preserve postgres data: postgres:/var/lib/postgresql/data
- ~~I had to change the bind mount for zitadel to a named volume: data:/current-dir:delegated~~
- ~~I had to change the bind mount for login to a named volume: data:/current-dir:ro~~
edit: using named volumes didn't eliminate the permission problems.
It would be ideal if the out of the box configuration worked, otherwise it's really discouraging for new users that are trying out the software. Authentik was up and running in under 5 minutes.
@NeurekaSoftware noted, I will review the docker compose config and revise it soon
I'm having the same issue with a fresh installation exactly as outlined in the documentation: https://zitadel.com/docs/self-hosting/deploy/compose
Why is this still a problem almost a year later?
i.e.:
# Download the docker compose example configuration.
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/self-hosting/deploy/docker-compose.yaml
# Make sure you have the latest image versions
docker compose pull
# Run the PostgreSQL database, the Zitadel API and the Zitadel login.
docker compose up
@NeurekaSoftware I followed the instructions as you outlined above and did not get this error; can you give me more details on your setup/OS/etc.?
FYI I'm on OSX 15.5 (24F74)
@NeurekaSoftware I followed the instructions as you outlined above and did not get this error; can you give me more details on your setup/OS/etc.?
FYI I'm on OSX 15.5 (24F74)
Ubuntu 24.04. Latest version of Docker Engine, installed via apt from the official Docker repositories. The latest version of Zitadel.
Honestly, I understand auth is hard but setting up Zitadel is a wild ride.
Please look at competing software and see how easy they make it to get a production instance up and running.
I've put in a ridiculous amount of effort and I'm not being paid to do this. I don't really have anything else to say or contribute to this.
I appreciate the software and service being provided and hope the feedback given helps.
Hey @NeurekaSoftware
I am sorry to hear that you had a hard time, let me try to extract the learnings here.
You tried to use docker-compose for a production setting? So far we did not design this as being for production. Can you share some thoughts on the rationale for using compose? The big challenge I would see with compose is that it's not really easy to achieve things like high availability and scaling.
You tried to use docker-compose for a production setting? So far we did not design this as being for production.
A production setting for a small fleet of non-critical servers. I do not need high availability, and it seems you're missing the point.
I am sorry if it felt like I missed the point, what made you think that?
My intention was to also ask about how you operationalize Zitadel to have more context on the problem.
Looking at this comment is what made me ask my questions in the first place:
I had to comment out all of the ZITADEL_FIRSTINSTANCE_* environment variables to fix the migration issues.
This one sounds weird, can you share the config you used and what stdout errors you saw?
I had to add a volume for postgres to preserve postgres data: postgres:/var/lib/postgresql/data
This is why I was asking about the prod setup. So far we did not intend to run Zitadel in a production way with that compose file, so we did not supply a postgres config that persisted the data. However, this is an easy fix.
@fforootd Look, I tried two fresh installs of Ubuntu 24.04 and used the default compose file and started it up exactly as outlined in the docs and there were errors. Other people have pointed this out too. I have no idea why you guys don't have the same issue unless a commit fixing the bug came out between now and then.
I've got it working after several days of digging and tweaking stuff. I don't have the patience to further help with this. I'm not the only one having issues, and it's not for a lack of experience. Zitadel is difficult to get set up compared to competing applications.
Edit: Sorry, I just realized this was the other issue. I outlined more details here: https://github.com/zitadel/zitadel/issues/10432
@NeurekaSoftware I tried using the default docker file on a fresh installation of Ubuntu 24.04 (via Virtualbox) and it worked, I was able to log in via the UI.
I'm really stumped by this.
@NeurekaSoftware @jonasbadstuebner @bullbulk if there is anything else in terms of config/setup that you haven't mentioned, please let me know
Steps to reproduce:
# 1. Create a new Ubuntu 24.04 VM (I used Hetzner Cloud)
apt update && apt upgrade -y
# 2. Set up `docker` and `docker compose` with https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
# steps are copy-pasted from the official docs:
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc
# Add the repository to Apt sources:
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# 3. Add a user (I called it `notroot`):
useradd -d /home/notroot -m notroot
# 4. Add your user to the `docker` group: https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user
usermod -aG docker notroot
# 5. Become the user
sudo -u notroot -- bash
# 6. As the user, go to the home dir
cd /home/notroot
# 7. Download the docker-compose.yaml
wget https://raw.githubusercontent.com/zitadel/zitadel/main/docs/docs/self-hosting/deploy/docker-compose.yaml
# 8. Create a data-directory for the zitadel stuff
mkdir /home/notroot/zitadel-data
# 9. Change the current-dir to aim at the new data-directory
sed -i 's;.:/current-dir;./zitadel-data:/current-dir;g' docker-compose.yaml
# 10. Pull images
docker compose pull
# 11. Start
docker compose up
This causes this log (and more):
zitadel-1 | time="2025-08-18T16:17:30Z" level=error msg="migration failed" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:71" error="open /current-dir/login-client.pat: permission denied" name=03_default_instance
zitadel-1 | time="2025-08-18T16:17:30Z" level=error msg="migration failed" caller="/home/runner/work/zitadel/zitadel/cmd/setup/setup.go:362" error="open /current-dir/login-client.pat: permission denied" name=03_default_instance
zitadel-1 | time="2025-08-18T16:17:30Z" level=fatal msg="setup failed, skipping cleanup" caller="/home/runner/work/zitadel/zitadel/cmd/setup/setup.go:124" error="migration failed: open /current-dir/login-client.pat: permission denied"
zitadel-1 exited with code 1
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="initialization started" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/init.go:70"
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify user" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_user.go:40" username=zitadel
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify database" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_database.go:40" database=zitadel
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify grant" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_grant.go:35" database=zitadel user=zitadel
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify zitadel" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:80" database=zitadel
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify system" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:46"
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify encryption keys" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:51"
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify projections" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:56"
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify eventstore" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:61"
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify events tables" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:66"
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify unique constraints" caller="/home/runner/work/zitadel/zitadel/cmd/initialise/verify_zitadel.go:71"
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="setup started" caller="/home/runner/work/zitadel/zitadel/cmd/setup/setup.go:108"
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:46" name=14_events_push
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:46" name=40_init_push_func_v4
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:46" name=01_tables
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:46" name=02_assets
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:46" name=28_add_search_table
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:46" name=31_add_aggregate_index_to_fields
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:46" name=46_init_permission_functions
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="verify migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:46" name=03_default_instance
zitadel-1 | time="2025-08-18T16:17:31Z" level=info msg="starting migration" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:69" name=03_default_instance
zitadel-1 | time="2025-08-18T16:17:33Z" level=warning msg="add unique constraint failed" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/v3/unique_constraints.go:78" error="ERROR: duplicate key value violates unique constraint \"unique_constraints_pkey\" (SQLSTATE 23505)"
zitadel-1 | time="2025-08-18T16:17:33Z" level=error msg="migration failed" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:71" error="ID=V3-DKcYh Message=Errors.Instance.Domain.AlreadyExists Parent=(ERROR: duplicate key value violates unique constraint \"unique_constraints_pkey\" (SQLSTATE 23505))" name=03_default_instance
zitadel-1 | time="2025-08-18T16:17:33Z" level=error msg="migration failed" caller="/home/runner/work/zitadel/zitadel/cmd/setup/setup.go:362" code=23505 detail="Key (instance_id, unique_type, unique_field)=(, instance_domain, localhost) already exists." error="ID=V3-DKcYh Message=Errors.Instance.Domain.AlreadyExists
zitadel-1 exited with code 1
The (imho) most important log lines are these:
- The real error (where I think everything should stop):
zitadel-1 | time="2025-08-18T16:17:30Z" level=error msg="migration failed" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:71" error="open /current-dir/login-client.pat: permission denied" name=03_default_instance
- The reason the real error is not always caught by the user (the container restarts and more log is produced):
zitadel-1 exited with code 1
- The messages where every attempt at migrating the database fails (which tells me nothing about a missing/inaccessible file - I actually retried starting the containers with another docker compose up):
zitadel-1 | time="2025-08-18T16:23:49Z" level=warning msg="add unique constraint failed" caller="/home/runner/work/zitadel/zitadel/internal/eventstore/v3/unique_constraints.go:78" error="ERROR: duplicate key value violates unique constraint \"unique_constraints_pkey\" (SQLSTATE 23505)"
zitadel-1 | time="2025-08-18T16:23:49Z" level=error msg="migration failed" caller="/home/runner/work/zitadel/zitadel/internal/migration/migration.go:71" error="ID=V3-DKcYh Message=Errors.Instance.Domain.AlreadyExists Parent=(ERROR: duplicate key value violates unique constraint \"unique_constraints_pkey\" (SQLSTATE 23505))" name=03_default_instance
zitadel-1 | time="2025-08-18T16:23:49Z" level=error msg="migration failed" caller="/home/runner/work/zitadel/zitadel/cmd/setup/setup.go:362" code=23505 detail="Key (instance_id, unique_type, unique_field)=(, instance_domain, localhost) already exists." error="ID=V3-DKcYh Message=Errors.Instance.Domain.AlreadyExists Parent=(ERROR: duplicate key value violates unique constraint \"unique_constraints_pkey\" (SQLSTATE 23505))" hint= message="duplicate key value violates unique constraint \"unique_constraints_pkey\"" name=03_default_instance severity=ERROR
zitadel-1 | time="2025-08-18T16:23:49Z" level=fatal msg="setup failed, skipping cleanup" caller="/home/runner/work/zitadel/zitadel/cmd/setup/setup.go:124" error="migration failed: ID=V3-DKcYh Message=Errors.Instance.Domain.AlreadyExists Parent=(ERROR: duplicate key value violates unique constraint \"unique_constraints_pkey\" (SQLSTATE 23505))"
If steps 8 and 9 in my instructions are skipped, it works as expected. But imho, it should also work within a subdirectory, or at least print a different error message that lets the user know (even after restarting) that the file is missing or inaccessible. It's also probably not only a demo problem, but also a problem for the "real thing" Zitadel deployment used in production, though it's maybe more unlikely to happen there.
I hope this helps with catching the problem. Please let me know if you need more information.
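For anyone who wants to confirm that a leftover row is what keeps blocking the retries, here is a hedged psql check; it assumes the constraints live in eventstore.unique_constraints, which the pkey name and key columns in the error message suggest:
# Inspect the row left behind by the half-finished 03_default_instance migration.
docker compose exec db psql -U postgres -d zitadel -c "SELECT instance_id, unique_type, unique_field FROM eventstore.unique_constraints WHERE unique_type = 'instance_domain';"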
@jonasbadstuebner thanks a lot for the details, I'll get on this and give an update soon 👍
@fforootd Look, I tried two fresh installs of Ubuntu 24.04 and used the default compose file and started it up exactly as outlined in the docs and there were errors. Other people have pointed this out too. I have no idea why you guys don't have the same issue unless a commit fixing the bug came out between now and then.
I've got it working after several days of digging and tweaking stuff. I don't have the patience to further help with this. I'm not the only one having issues, and it's not for a lack of experience. Zitadel is difficult to get set up compared to competing applications.
Edit: Sorry, I just realized this was the other issue. I outlined more details here: #10432
I see, that adds a little more context to the picture. @eliobischof correct me if I am wrong, but #10432 was a bug that we fixed?
@NeurekaSoftware @jonasbadstuebner @bullbulk OK, so finally got to the bottom of this.
In Docker, the read/write permissions of a container map to the read/write permissions of externally mounted volumes based on UID/GID.
That means that if an external folder is owned by user notroot, which has a UID of 1001, then in order for a process running in a container to access that folder, the process has to run as a user that also has UID 1001.
The default UID of a container is 1000.
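A quick way to see the mismatch on the host (a sketch using the notroot user and zitadel-data directory from the repro steps above):
id -u notroot        # e.g. 1001 - the UID the container process would need
ls -ln zitadel-data  # the numeric owner of the mount must match that UID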
I will update the docs soon, but in the meantime you should be able to get Zitadel up and running by:
- deleting the previously created volumes: docker volume rm <volume_id>
- running the docker-compose below via: ID=$(id -u) docker compose up
services:
  zitadel:
    restart: unless-stopped
    image: ghcr.io/zitadel/zitadel:latest
    user: "${ID}:"
    command: start-from-init --masterkey "MasterkeyNeedsToHave32Characters" --tlsMode disabled
    environment:
      ZITADEL_EXTERNALSECURE: false
      ZITADEL_TLS_ENABLED: false
      ZITADEL_DATABASE_POSTGRES_HOST: db
      ZITADEL_DATABASE_POSTGRES_PORT: 5432
      ZITADEL_DATABASE_POSTGRES_DATABASE: zitadel
      ZITADEL_DATABASE_POSTGRES_USER_USERNAME: zitadel
      ZITADEL_DATABASE_POSTGRES_USER_PASSWORD: zitadel
      ZITADEL_DATABASE_POSTGRES_USER_SSL_MODE: disable
      ZITADEL_DATABASE_POSTGRES_ADMIN_USERNAME: postgres
      ZITADEL_DATABASE_POSTGRES_ADMIN_PASSWORD: postgres
      ZITADEL_DATABASE_POSTGRES_ADMIN_SSL_MODE: disable
      # By configuring a login client, the setup job creates a user of type machine with the role IAM_LOGIN_CLIENT.
      # It writes a PAT to the path specified in ZITADEL_FIRSTINSTANCE_LOGINCLIENTPATPATH.
      # The PAT is passed to the login container via the environment variable ZITADEL_SERVICE_USER_TOKEN_FILE.
      ZITADEL_FIRSTINSTANCE_LOGINCLIENTPATPATH: /current-dir/login-client.pat
      ZITADEL_FIRSTINSTANCE_ORG_HUMAN_PASSWORDCHANGEREQUIRED: false
      ZITADEL_FIRSTINSTANCE_ORG_LOGINCLIENT_MACHINE_USERNAME: login-client
      ZITADEL_FIRSTINSTANCE_ORG_LOGINCLIENT_MACHINE_NAME: Automatically Initialized IAM_LOGIN_CLIENT
      ZITADEL_FIRSTINSTANCE_ORG_LOGINCLIENT_PAT_EXPIRATIONDATE: '2029-01-01T00:00:00Z'
      ZITADEL_DEFAULTINSTANCE_FEATURES_LOGINV2_REQUIRED: true
      ZITADEL_DEFAULTINSTANCE_FEATURES_LOGINV2_BASEURI: http://localhost:3000/ui/v2/login
      ZITADEL_OIDC_DEFAULTLOGINURLV2: http://localhost:3000/ui/v2/login/login?authRequest=
      ZITADEL_OIDC_DEFAULTLOGOUTURLV2: http://localhost:3000/ui/v2/login/logout?post_logout_redirect=
      ZITADEL_SAML_DEFAULTLOGINURLV2: http://localhost:3000/ui/v2/login/login?samlRequest=
      # By configuring a machine, the setup job creates a user of type machine with the role IAM_OWNER.
      # It writes a personal access token (PAT) to the path specified in ZITADEL_FIRSTINSTANCE_PATPATH.
      # The PAT can be used to provision resources with [Terraform](/docs/guides/manage/terraform-provider), for example.
      ZITADEL_FIRSTINSTANCE_PATPATH: /current-dir/admin.pat
      ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_USERNAME: admin
      ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINE_NAME: Automatically Initialized IAM_OWNER
      ZITADEL_FIRSTINSTANCE_ORG_MACHINE_MACHINEKEY_TYPE: 1
    healthcheck:
      test:
        - CMD
        - /app/zitadel
        - ready
      interval: 10s
      timeout: 60s
      retries: 5
      start_period: 10s
    volumes:
      - ./zitadel-data:/current-dir:delegated
    ports:
      - 8080:8080
      - 3000:3000
    networks:
      - zitadel
    depends_on:
      db:
        condition: service_healthy

  login:
    restart: unless-stopped
    image: ghcr.io/zitadel/zitadel-login:latest
    # If you can't use the network_mode service:zitadel, you can pass the environment variable CUSTOM_REQUEST_HEADERS=Host:localhost instead.
    environment:
      - ZITADEL_API_URL=http://localhost:8080
      - NEXT_PUBLIC_BASE_PATH=/ui/v2/login
      - ZITADEL_SERVICE_USER_TOKEN_FILE=/current-dir/login-client.pat
    user: "${ID}:"
    network_mode: service:zitadel
    volumes:
      - ./zitadel-data:/current-dir:ro
    depends_on:
      zitadel:
        condition: service_healthy
        restart: false

  db:
    restart: unless-stopped
    image: postgres:17-alpine
    environment:
      PGUSER: postgres
      POSTGRES_PASSWORD: postgres
    healthcheck:
      test:
        - CMD-SHELL
        - pg_isready
        - -d
        - zitadel
        - -U
        - postgres
      interval: 10s
      timeout: 30s
      retries: 5
      start_period: 20s
    networks:
      - zitadel

networks:
  zitadel:
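A short smoke test once the stack is up, reusing the same readiness command as the healthcheck above:
ID=$(id -u) docker compose up -d
docker compose ps                                # zitadel should report healthy
docker compose exec zitadel /app/zitadel ready   # exits 0 once setup has finished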
@NeurekaSoftware on behalf of Zitadel I would like to offer an apology that your experience with our product hasn't been as positive as it should have been. That said, I would urge you to give the above a try, and should you have any more issues, you can reach out and we will do our very best to assist you.
@jonasbadstuebner please let me know if example above works for you
I propose to initially set up the containers running as root to avoid any file-permission issues and later switch to non-root users. Wdyt?
I propose to initially set up the containers running as root to avoid any file-permission issues and later switch to non-root users. Wdyt?
My only concern was that it would be a security risk, but you state
Run the containers as non-root users: Remove the user: "0" lines from the docker-compose.yaml file.
so I think it's good 👍