immich
[BUG] duplicate key value violates unique constraint "UQ_assets_owner_library_checksum"
The bug
I was trying to sync my iPhone photo library to Immich for the first time. I noticed that some photos would get stuck at 100% upload progress. I checked the database log and found the error below. I want to know how to fix this problem.
2023-11-15 16:07:33.335 UTC [2295] ERROR: duplicate key value violates unique constraint "UQ_assets_owner_library_checksum"
2023-11-15 16:07:33.335 UTC [2295] DETAIL: Key ("ownerId", "libraryId", checksum)=(fad7c823-c114-4d62-9212-f1e41b2ae58e, 9d136cd3-5c62-4625-85c5-f6569e794cde, \xfd38f18c36f42908a2a8a14f287b4a39ec3f647e) already exists.
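For context, a minimal sketch of what this Postgres error means (using SQLite as a stand-in, not Immich's actual schema): a unique constraint over ("ownerId", "libraryId", checksum) rejects any second row with the same triple, which is exactly what the log reports.

```python
# Minimal illustration (SQLite stand-in for Postgres, NOT Immich's schema):
# a UNIQUE constraint over (ownerId, libraryId, checksum) rejects a second
# insert of the same triple, mirroring the error in the Postgres log.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE assets (
        id        INTEGER PRIMARY KEY,
        ownerId   TEXT NOT NULL,
        libraryId TEXT NOT NULL,
        checksum  BLOB NOT NULL,
        UNIQUE (ownerId, libraryId, checksum)  -- ~ UQ_assets_owner_library_checksum
    )
""")
row = ("fad7c823", "9d136cd3", b"\xfd\x38\xf1\x8c")  # truncated example values
con.execute("INSERT INTO assets (ownerId, libraryId, checksum) VALUES (?, ?, ?)", row)
try:
    # Same owner, same library, same checksum: the constraint fires.
    con.execute("INSERT INTO assets (ownerId, libraryId, checksum) VALUES (?, ?, ?)", row)
except sqlite3.IntegrityError as e:
    print("duplicate rejected:", e)
```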
The OS that Immich Server is running on
Ubuntu 22.04
Version of Immich Server
v1.86.0
Version of Immich Mobile App
v1.86.0
Platform with the issue
- [X] Server
- [ ] Web
- [X] Mobile
Your docker-compose.yml content
version: "3.8"
services:
  immich-server:
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    command: [ "start.sh", "immich" ]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    environment:
      - NODE_ENV=production
      - TYPESENSE_API_KEY=${TYPESENSE_API_KEY}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_HOSTNAME=${DB_HOSTNAME}
      - DB_USERNAME=${DB_USERNAME}
      - DB_DATABASE_NAME=${DB_DATABASE_NAME}
      - REDIS_HOSTNAME=${REDIS_HOSTNAME}
      - TYPESENSE_HOST=${TYPESENSE_HOSTNAME}
    depends_on:
      - redis
      - database
      - typesense
    networks:
      - default
      - traefik-public
    deploy:
      placement:
        constraints:
          - node.hostname == node1
      labels:
        traefik.enable: "true"
        ## HTTP Routers
        traefik.http.routers.immich-api.entrypoints: websecure
        traefik.http.routers.immich-api.rule: "Host(`${PUBLIC_HOSTNAME}`) && PathPrefix(`/api`)"
        traefik.http.routers.immich-api.middlewares: service-immich-api-strip
        ## Middlewares
        traefik.http.middlewares.service-immich-api-strip.stripprefix.prefixes: "/api"
        ## HTTP Services
        traefik.http.routers.immich-api.service: immich-api
        traefik.http.services.immich-api.loadbalancer.server.port: "3001"
  immich-microservices:
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    command: [ "start.sh", "microservices" ]
    volumes:
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
    environment:
      - NODE_ENV=production
      - TYPESENSE_API_KEY=${TYPESENSE_API_KEY}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_HOSTNAME=${DB_HOSTNAME}
      - DB_USERNAME=${DB_USERNAME}
      - DB_DATABASE_NAME=${DB_DATABASE_NAME}
      - REDIS_HOSTNAME=${REDIS_HOSTNAME}
      - TYPESENSE_HOST=${TYPESENSE_HOSTNAME}
      - TZ=Asia/Shanghai
    depends_on:
      - redis
      - database
      - typesense
    deploy:
      placement:
        constraints:
          - node.labels.capability == compute
      resources:
        limits:
          cpus: "8"
          memory: 4G
  immich-machine-learning:
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    volumes:
      - model-cache:/cache
    environment:
      - NODE_ENV=production
      - TYPESENSE_API_KEY=${TYPESENSE_API_KEY}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_HOSTNAME=${DB_HOSTNAME}
      - DB_USERNAME=${DB_USERNAME}
      - DB_DATABASE_NAME=${DB_DATABASE_NAME}
      - REDIS_HOSTNAME=${REDIS_HOSTNAME}
      - MACHINE_LEARNING_REQUEST_THREADS=${ML_WORKERS}
    deploy:
      placement:
        constraints:
          - node.labels.capability == compute
      resources:
        limits:
          cpus: ${ML_WORKERS}
          memory: 4G
  immich-web:
    image: ghcr.io/immich-app/immich-web:${IMMICH_VERSION:-release}
    environment:
      - NODE_ENV=production
      - TYPESENSE_API_KEY=${TYPESENSE_API_KEY}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_HOSTNAME=${DB_HOSTNAME}
      - DB_USERNAME=${DB_USERNAME}
      - DB_DATABASE_NAME=${DB_DATABASE_NAME}
      - REDIS_HOSTNAME=${REDIS_HOSTNAME}
    networks:
      - default
      - traefik-public
    deploy:
      labels:
        traefik.enable: "true"
        ## HTTP Routers
        traefik.http.routers.immich-web.entrypoints: websecure
        traefik.http.routers.immich-web.rule: Host(`${PUBLIC_HOSTNAME}`)
        ## HTTP Services
        traefik.http.routers.immich-web.service: immich-web
        traefik.http.services.immich-web.loadbalancer.server.port: "3000"
  typesense:
    image: typesense/typesense:0.24.1@sha256:9bcff2b829f12074426ca044b56160ca9d777a0c488303469143dd9f8259d4dd
    command: [ "--peering-subnet", "10.0.4.0/24" ]
    environment:
      - TYPESENSE_API_KEY=${TYPESENSE_API_KEY}
      - TYPESENSE_DATA_DIR=/data
      # remove this to get debug messages
      - GLOG_minloglevel=1
    volumes:
      - tsdata:/data
    deploy:
      placement:
        constraints:
          - node.hostname == node2
  redis:
    image: redis:6.2-alpine@sha256:3995fe6ea6a619313e31046bd3c8643f9e70f8f2b294ff82659d409b47d06abb
  database:
    image: postgres:14-alpine@sha256:874f566dd512d79cf74f59754833e869ae76ece96716d153b0fa3e64aec88d92
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
    volumes:
      - pgdata:/var/lib/postgresql/data
    deploy:
      placement:
        constraints:
          - node.hostname == node2
volumes:
  pgdata:
  model-cache:
  tsdata:
networks:
  default:
    name: immich_internal
    external: true
  # Use the previously created public network "traefik-public", shared with other
  # services that need to be publicly available via this Traefik
  traefik-public:
    external: true
Your .env content
No .env file, due to a limitation of Portainer with Docker Swarm. I put all of the variables in the compose file.
Reproduction steps
Just sync my photos.
Additional information
No response
Can you try accessing over the local IP and uploading? Your reverse proxy might have a limit on the maximum request body size that hasn't been configured correctly yet.
Okay, I can give it a try later. But I have uploaded a 1 GB+ video successfully, and the failed items are just HEIC images.
Can you grab the mobile log?
Oh, I have this bug as well, but mine is related to #1517, appearing after a firmware change (or a device change, as far as the app is concerned).
Okay, I can upload all the photos now. I think this might have been a transient issue on the server side. The duplicate key error doesn't seem to be related to the upload issue. Maybe something is wrong with the data, or there are simply duplicate photos?
Having the same issue. Adding the lines below to the labels section of my Immich docker-compose seemed to help: a few previously stuck images uploaded, but it is still stuck on one particular image, leaving the app stuck at 529 remaining assets.
labels:
  traefik.enable: "true"
  traefik.docker.network: "proxy"
  traefik.http.routers.immich.rule: "Host(`${FQDN}`)"
  traefik.http.services.immich-api.loadbalancer.server.port: "3001"
  # Sets the maximum request body to 10GB
  traefik.http.routers.immich.middlewares: limit@docker
  traefik.http.middlewares.limit.buffering.maxRequestBodyBytes: 10737418240
  traefik.http.middlewares.limit.buffering.memRequestBodyBytes: 200000000
  traefik.http.middlewares.limit.buffering.maxResponseBodyBytes: 10737418240
  traefik.http.middlewares.limit.buffering.memResponseBodyBytes: 200000000
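For anyone copying these values, a quick arithmetic check of what they actually allow (the byte counts are from the labels above; the GiB/MiB conversions are mine):

```python
# Sanity-check the byte values used in the Traefik buffering labels above.
GiB = 1024 ** 3
MiB = 1024 ** 2

# maxRequestBodyBytes / maxResponseBodyBytes: exactly 10 GiB.
assert 10 * GiB == 10_737_418_240

# memRequestBodyBytes / memResponseBodyBytes: 200,000,000 bytes,
# i.e. roughly 190.7 MiB buffered in memory before spilling to disk.
print(round(200_000_000 / MiB, 1))  # 190.7
```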
If I do a library refresh, I get a bunch of these kicked out in the log. The job seems to complete. Is it possible the database had some duplicates before the constraint was set? Or maybe the library refresh just has a bug and tries to recreate existing entries? Maybe those of you with the problem could try refreshing libraries on the Jobs page. I don't use upload, so my case has to be unrelated to that.
[Nest] 7 - 12/17/2023, 3:03:30 PM ERROR [JobService] Unable to run job handler (library/library-refresh-asset): QueryFailedError: duplicate key value violates unique constraint "UQ_assets_owner_library_checksum"
[Nest] 7 - 12/17/2023, 3:03:30 PM ERROR [JobService] QueryFailedError: duplicate key value violates unique constraint "UQ_assets_owner_library_checksum"
    at PostgresQueryRunner.query (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:211:19)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async InsertQueryBuilder.execute (/usr/src/app/node_modules/typeorm/query-builder/InsertQueryBuilder.js:106:33)
    at async SubjectExecutor.executeInsertOperations (/usr/src/app/node_modules/typeorm/persistence/SubjectExecutor.js:260:42)
    at async SubjectExecutor.execute (/usr/src/app/node_modules/typeorm/persistence/SubjectExecutor.js:92:9)
    at async EntityPersistExecutor.execute (/usr/src/app/node_modules/typeorm/persistence/EntityPersistExecutor.js:140:21)
    at async LibraryService.handleAssetRefresh (/usr/src/app/dist/domain/library/library.service.js:205:32)
    at async /usr/src/app/dist/domain/job/job.service.js:113:37
    at async Worker.processJob (/usr/src/app/node_modules/bullmq/dist/cjs/classes/worker.js:387:28)
    at async Worker.retryIfFailed (/usr/src/app/node_modules/bullmq/dist/cjs/classes/worker.js:574:24)
[Nest] 7 - 12/17/2023, 3:03:30 PM ERROR [JobService] Object:
{
  "id": "b1adffef-e8ce-4115-b3f8-1d53fc8db358",
  "assetPath": "/media/pictures/Grace/2022/Egypt/2022-03-14_05.59.36_edTMP-1.JPG",
  "ownerId": "acc48138-156d-4462-8430-7a72280f18ab",
  "force": false
}
I see the same errors in the log as @dsm1212. I will let the job run and hope everything remains working.
I had some photos from a photographer who didn't set the year correctly on her camera. When I fixed them and rescanned, all but two shifted on the timeline. The two that didn't had this error in the log. If I touch the files and scan again, it tries again and fails the same way. So there are two files with the issue in this particular folder, and I can easily reproduce it every time.
The key seems to be owner-library-checksum. If the checksum is computed on the photo content, I wonder whether the photographer copied the file, but I don't see a duplicate just browsing through them. I think this constraint is a little risky, because people copy files for a variety of reasons. Or does the checksum include the filename?
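On the filename question: the 20-byte checksum values in the Postgres log are consistent with a SHA-1 digest over the file's bytes only, in which case two byte-identical files with different names would collide on the constraint. A small demonstration of that behavior (my understanding of the checksum, not Immich's actual code; file names are made up):

```python
# Hedged sketch: if the asset checksum is a SHA-1 over file CONTENT only,
# then byte-identical files with different filenames produce the same
# checksum and would collide on a (owner, library, checksum) constraint.
import hashlib
import os
import shutil
import tempfile

def file_sha1(path):
    """SHA-1 of a file's bytes, streamed in 64 KiB chunks."""
    h = hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

tmp = tempfile.mkdtemp()
path_a = os.path.join(tmp, "IMG_0001.HEIC")          # hypothetical names
path_b = os.path.join(tmp, "copy-of-photo.heic")
with open(path_a, "wb") as f:
    f.write(b"same image bytes")
shutil.copy(path_a, path_b)  # identical content, different filename

print(file_sha1(path_a) == file_sha1(path_b))  # True: the name plays no part
```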
FWIW, I notice that if I open the photo, hit the ellipsis in the upper right, and pick "Refresh metadata", that works: it reads the new file EXIF date I had corrected, and the photo moves on the timeline. So it must be some bug in the library scan logic that decides an image needs to be added when it already exists. I'll stare at the code if you can point me to it. I see the clientApi is calling scan, but I wasn't sure where that came from.
FWIW, there are a lot of these in the logs, all the time, every time an import/upload kicks in for new pictures added to the external library. The resource is available in search (so it was imported fine), but we still see the duplicate key error. Lots of noise in the logs, e.g.:
2024-05-13 19:01:57 [Nest] 7 - 05/14/2024, 12:01:57 AM ERROR [JobService] Unable to run job handler (library/library-refresh-asset): QueryFailedError: duplicate key value violates unique constraint "UQ_assets_owner_library_checksum"
2024-05-13 19:01:57 [Nest] 7 - 05/14/2024, 12:01:57 AM ERROR [JobService] QueryFailedError: duplicate key value violates unique constraint "UQ_assets_owner_library_checksum"
2024-05-13 19:01:57 at PostgresQueryRunner.query (/usr/src/app/node_modules/typeorm/driver/postgres/PostgresQueryRunner.js:219:19)
2024-05-13 19:01:57 at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
2024-05-13 19:01:57 at async InsertQueryBuilder.execute (/usr/src/app/node_modules/typeorm/query-builder/InsertQueryBuilder.js:106:33)
2024-05-13 19:01:57 at async SubjectExecutor.executeInsertOperations (/usr/src/app/node_modules/typeorm/persistence/SubjectExecutor.js:260:42)
2024-05-13 19:01:57 at async SubjectExecutor.execute (/usr/src/app/node_modules/typeorm/persistence/SubjectExecutor.js:92:9)
2024-05-13 19:01:57 at async EntityPersistExecutor.execute (/usr/src/app/node_modules/typeorm/persistence/EntityPersistExecutor.js:140:21)
2024-05-13 19:01:57 at async LibraryService.handleAssetRefresh (/usr/src/app/dist/services/library.service.js:410:32)
2024-05-13 19:01:57 at async /usr/src/app/dist/services/job.service.js:149:36
2024-05-13 19:01:57 at async Worker.processJob (/usr/src/app/node_modules/bullmq/dist/cjs/classes/worker.js:394:28)
2024-05-13 19:01:57 at async Worker.retryIfFailed (/usr/src/app/node_modules/bullmq/dist/cjs/classes/worker.js:581:24)
2024-05-13 19:01:57 [Nest] 7 - 05/14/2024, 12:01:57 AM ERROR [JobService] Object:
2024-05-13 19:01:57 {
2024-05-13 19:01:57 "id": "d69c13a9-dcba-4f13-96cc-2c748284ad53",
2024-05-13 19:01:57 "assetPath": "/mnt/media/AD/R/saved/media/mara_1brdrf9.png",
2024-05-13 19:01:57 "ownerId": "dbe1e8db-c628-436a-88b3-d8867c4851e0",
2024-05-13 19:01:57 "force": false
2024-05-13 19:01:57 }
me too
I had the same issue, caused by putting the upload folder inside a library with the original photos. :)
Resolved by moving the upload folder to a different directory.
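For anyone in a similar situation, a small sketch that walks a directory tree and groups files by content SHA-1 can locate byte-identical duplicates (e.g. an upload folder nested inside an external library) before rescanning. The root path below is a placeholder, and this is my own helper, not an Immich tool:

```python
# Hedged sketch: group files under a root by content SHA-1 to find
# byte-identical duplicates that would trip a content-checksum constraint.
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Return {sha1_hexdigest: [paths]} for digests seen more than once."""
    by_checksum = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 16), b""):
                    h.update(chunk)
            by_checksum[h.hexdigest()].append(path)
    return {d: paths for d, paths in by_checksum.items() if len(paths) > 1}

if __name__ == "__main__":
    # "/media/pictures" is a placeholder library root.
    for digest, paths in find_duplicates("/media/pictures").items():
        print(digest, paths)
```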
I think there are a few ways this constraint can be triggered. The original issue was due to live photos uploaded from iOS, which has been resolved.
@jrasm91 I'm still getting this error in Postgres logs:
2024-09-02 10:56:17.757 EDT [2369772] immich@immich ERROR: duplicate key value violates unique constraint "UQ_assets_owner_checksum"
2024-09-02 10:56:18.389 EDT [2369772] immich@immich ERROR: duplicate key value violates unique constraint "UQ_assets_owner_checksum"
2024-09-02 10:56:19.197 EDT [2369772] immich@immich ERROR: duplicate key value violates unique constraint "UQ_assets_owner_checksum"
2024-09-02 10:56:19.629 EDT [2369772] immich@immich ERROR: duplicate key value violates unique constraint "UQ_assets_owner_checksum"
2024-09-02 10:56:19.914 EDT [2369772] immich@immich ERROR: duplicate key value violates unique constraint "UQ_assets_owner_checksum"
2024-09-02 10:56:20.216 EDT [2369772] immich@immich ERROR: duplicate key value violates unique constraint "UQ_assets_owner_checksum"
2024-09-02 10:56:20.443 EDT [2369772] immich@immich ERROR: duplicate key value violates unique constraint "UQ_assets_owner_checksum"
2024-09-02 10:56:21.140 EDT [2369772] immich@immich ERROR: duplicate key value violates unique constraint "UQ_assets_owner_checksum"
2024-09-02 10:56:21.897 EDT [2369772] immich@immich ERROR: duplicate key value violates unique constraint "UQ_assets_owner_checksum"