
Multiple Memories at top

Open HippoBloke opened this issue 9 months ago • 4 comments

I have searched the existing issues to make sure this is not a duplicate report.

  • [x] Yes

The bug

Hi all

I seem to have multiple memories for each year. Any idea why?

I may have started additional jobs in my impatience to see them after the recent change, but they continue after a restart. Is there any way I can find the active jobs and stop the extras?

Thanks Hippo

The OS that Immich Server is running on

Unraid 7.0.1

Version of Immich Server

v1.128.0

Version of Immich Mobile App

v1.128.0

Platform with the issue

  • [ ] Server
  • [x] Web
  • [x] Mobile

Your docker-compose.yml content

#
# WARNING: Make sure to use the docker-compose.yml of the current release:
#
# https://github.com/immich-app/immich/releases/latest/download/docker-compose.yml
#
# The compose file on main may not be compatible with the latest release.
#

name: immich

services:
  immich-server:
    container_name: immich_server
    image: ghcr.io/immich-app/immich-server:${IMMICH_VERSION:-release}
    # extends:
    #   file: hwaccel.transcoding.yml
    #   service: cpu # set to one of [nvenc, quicksync, rkmpp, vaapi, vaapi-wsl] for accelerated transcoding
    volumes:
      # Do not edit the next line. If you want to change the media storage location on your system, edit the value of UPLOAD_LOCATION in the .env file
      - ${UPLOAD_LOCATION}:/usr/src/app/upload
      - /etc/localtime:/etc/localtime:ro
      - /mnt/user/iCloud_Photos:/import
      - /mnt/user/Cine:/Cine
    env_file:
      - .env
    ports:
      - '2283:2283'
    depends_on:
      - redis
      - database
    restart: always
    healthcheck:
      disable: false
      
  redis:
    container_name: immich_redis
    image: docker.io/redis:6.2-alpine@sha256:2ba50e1ac3a0ea17b736ce9db2b0a9f6f8b85d4c27d5f5accc6a416d8f42c6d5
    volumes:
      - /mnt/user/Immich/redis:/data
    healthcheck:
      test: redis-cli ping || exit 1
    restart: always

  immich-machine-learning:
    container_name: immich_machine_learning
    # For hardware acceleration, add one of -[armnn, cuda, openvino] to the image tag.
    # Example tag: ${IMMICH_VERSION:-release}-cuda
    image: ghcr.io/immich-app/immich-machine-learning:${IMMICH_VERSION:-release}
    # extends: # uncomment this section for hardware acceleration - see https://immich.app/docs/features/ml-hardware-acceleration
    #   file: hwaccel.ml.yml
    #   service: cpu # set to one of [armnn, cuda, openvino, openvino-wsl] for accelerated inference - use the `-wsl` version for WSL2 where applicable
    volumes:
      - /mnt/user/Immich/cache:/cache
    env_file:
      - .env
    restart: always
    healthcheck:
      disable: false

  database:
    container_name: immich_postgres
    image: docker.io/tensorchord/pgvecto-rs:pg14-v0.2.0@sha256:90724186f0a3517cf6914295b5ab410db9ce23190a2d9d0b9dd6463e3fa298f0
    environment:
      POSTGRES_PASSWORD: ${DB_PASSWORD}
      POSTGRES_USER: ${DB_USERNAME}
      POSTGRES_DB: ${DB_DATABASE_NAME}
      POSTGRES_INITDB_ARGS: '--data-checksums'
    ports:
     - '5432:5432'
    volumes:
      # Do not edit the next line. If you want to change the database storage location on your system, edit the value of DB_DATA_LOCATION in the .env file
      - ${DB_DATA_LOCATION}:/var/lib/postgresql/data
    healthcheck:
      test: pg_isready --dbname='${DB_DATABASE_NAME}' --username='${DB_USERNAME}' || exit 1; Chksum="$$(psql --dbname='${DB_DATABASE_NAME}' --username='${DB_USERNAME}' --tuples-only --no-align --command='SELECT COALESCE(SUM(checksum_failures), 0) FROM pg_stat_database')"; echo "checksum failure count is $$Chksum"; [ "$$Chksum" = '0' ] || exit 1
      interval: 5m
      start_interval: 30s
      start_period: 5m
    command:
      [
        'postgres',
        '-c',
        'shared_preload_libraries=vectors.so',
        '-c',
        'search_path="$$user", public, vectors',
        '-c',
        'logging_collector=on',
        '-c',
        'max_wal_size=2GB',
        '-c',
        'shared_buffers=512MB',
        '-c',
        'wal_compression=on',
      ]
    restart: always

volumes:
  model-cache:

Your .env content

# You can find documentation for all the supported env variables at https://immich.app/docs/install/environment-variables

# The location where your uploaded files are stored
UPLOAD_LOCATION=/mnt/user/Immich
# The location where your database files are stored
DB_DATA_LOCATION=/mnt/user/Immich/database

# To set a timezone, uncomment the next line and change Etc/UTC to a TZ identifier from this list: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List
TZ=Europe/London

# The Immich version to use. You can pin this to a specific version like "v1.71.0"
IMMICH_VERSION=release

# Connection secret for postgres. You should change it to a random password
# Please use only the characters `A-Za-z0-9`, without special characters or spaces
DB_PASSWORD=xxxxxx

# The values below this line do not need to be changed
###################################################################################
DB_USERNAME=xxxxxx
DB_DATABASE_NAME=xxxxxx

Reproduction steps

...

Relevant log output


Additional information

No response

HippoBloke avatar Mar 02 '25 12:03 HippoBloke

I have the same issue. As I commented here:

The job definitely should show up in the list like all the other jobs. Users get confused about why the memory generation job doesn't appear to be running and try multiple times, which leads to this.


YarosMallorca avatar Mar 02 '25 12:03 YarosMallorca

Please follow the exact steps below to resolve this issue. From the command line, run the following command to clean up the memories generation state:

docker exec immich_postgres psql --dbname=immich --username=postgres -c "delete from system_metadata where key like 'memories-state'; truncate table memories cascade;"
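
To double-check that the state was actually cleared before regenerating, both of the counts below should come back as 0 (a quick verification sketch assuming the default immich_postgres container name and postgres role; adjust to your deployment):

# both counts should be 0 once the state has been cleared
docker exec immich_postgres psql --dbname=immich --username=postgres -c "select count(*) from memories;"
docker exec immich_postgres psql --dbname=immich --username=postgres -c "select count(*) from system_metadata where key like 'memories-state';"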

Then go to the Jobs page and run the memories generation job.


@YarosMallorca That was a bug in 1.127.0; from 1.128.0 onwards, no duplicate memories should be generated.

alextran1502 avatar Mar 02 '25 12:03 alextran1502

Worked perfectly, thanks @alextran1502

I think this can be closed now, unless @HippoBloke is referring to a different issue...

YarosMallorca avatar Mar 02 '25 13:03 YarosMallorca

All good here - thanks @alextran1502 - you are a star!!

HippoBloke avatar Mar 02 '25 13:03 HippoBloke

Had a similar issue: all my memories were misplaced by one year (so memories from 2024 showed 0 years ago, 2023 showed 1 year ago, etc.). After running the given command, everything works OK again :) Thanks a lot!

res80 avatar Mar 02 '25 14:03 res80

@HippoBloke, if everything works as expected, please close this issue. If it doesn't, please describe the remaining issue.

YarosMallorca avatar Mar 02 '25 16:03 YarosMallorca

Please follow the exact steps below to resolve this issue. From the command line, run the following command to clean up the memories generation state:

docker exec immich_postgres psql --dbname=immich --username=postgres -c "delete from system_metadata where key like 'memories-state'; truncate table memories cascade;"

It worked for me, although there should be a GUI fix for this; the memory cleanup job doesn't handle it.

waclaw66 avatar Mar 02 '25 16:03 waclaw66

Please follow the exact steps below to resolve this issue. From the command line, run the following command to clean up the memories generation state:

docker exec immich_postgres psql --dbname=immich --username=postgres -c "delete from system_metadata where key like 'memories-state'; truncate table memories cascade;"

Then go to the Jobs page and run the memories generation job.


@YarosMallorca That was a bug in 1.127.0; from 1.128.0 onwards, no duplicate memories should be generated.

Where do I run this command? I have set up Immich under dockge on my TrueNAS. Do I use the TrueNAS shell, dockge >_Batch, or something else?

Nordlicht-13 avatar Mar 02 '25 16:03 Nordlicht-13

@waclaw66 It shouldn't happen again; the bug affected memories generated in 1.127.0. Any memories generated going forward won't have this issue.

alextran1502 avatar Mar 02 '25 16:03 alextran1502

Where do I run this command? I have set up Immich under dockge on my TrueNAS. Do I use the TrueNAS shell, dockge >_Batch, or something else?

Okay, I managed it. I ran psql --dbname=immich --username=postgres -c "delete from system_metadata where key like 'memories-state'; truncate table memories cascade;" via >_Batch of the database container in dockge.
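
For anyone whose Postgres container isn't named immich_postgres (common with dockge or TrueNAS installs), listing the containers first makes it easier to adapt the command; a generic sketch, with <your-postgres-container> as a placeholder:

# find the Postgres/pgvecto container name, then run the cleanup against it
docker ps --format '{{.Names}}' | grep -i -E 'postgres|pgvecto'
docker exec <your-postgres-container> psql --dbname=immich --username=postgres -c "delete from system_metadata where key like 'memories-state'; truncate table memories cascade;"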

Nordlicht-13 avatar Mar 02 '25 17:03 Nordlicht-13

I'm having the same issue here, but I'm hosting Immich on TrueNAS (using the provided version) and I don't have a dedicated container for postgres.

How can I get this fixed?

AndresSalinasB avatar Mar 03 '25 04:03 AndresSalinasB

I'm having the same issue here, but I'm hosting Immich on TrueNAS (using the provided version) and I don't have a dedicated container for postgres.

How can I get this fixed?

In the TrueNAS app, the Postgres data storage is the last volume at the bottom. You can start a shell for the database container under "Workloads" in the app. There you can run psql --dbname=immich --username=postgres -c "delete from system_metadata where key like 'memories-state'; truncate table memories cascade;"

Nordlicht-13 avatar Mar 03 '25 09:03 Nordlicht-13

Please follow the exact steps below to resolve this issue. From the command line, run the following command to clean up the memories generation state:

docker exec immich_postgres psql --dbname=immich --username=postgres -c "delete from system_metadata where key like 'memories-state'; truncate table memories cascade;"

Then go to the Jobs page and run the memories generation job.


@YarosMallorca That was a bug in 1.127.0; from 1.128.0 onwards, no duplicate memories should be generated.

This worked for me, thank you. I didn't have duplicate memories, but memories were showing one year less than they should. Now that's fixed. Was this a one-off or will it need running again until a fix is provided in a future version?

dinosmm avatar Mar 03 '25 10:03 dinosmm

Was this a one-off or will it need running again until a fix is provided in a future version?

@dinosmm This was a one-off in v1.127.0. Unless this bug somehow reappears, it isn't necessary to do it in the future. It has been patched in v1.128.0.

YarosMallorca avatar Mar 03 '25 10:03 YarosMallorca

Thanks. I had the same issue since 1.128.0, but the workaround works!

ARVEDz avatar Mar 03 '25 11:03 ARVEDz

I don't think the way memories work now is practical, and whatever speed or other benefits it might have brought are not worth the drawbacks it's generating.

Thanks for sharing how to fix this in my now-broken DB, but it would be better if this whole feature were restored to how it was working last year.

kikendo avatar Mar 03 '25 15:03 kikendo

I'm having the same issue here, but I'm hosting Immich on TrueNAS (using the provided version) and I don't have a dedicated container for postgres. How can I get this fixed?

In the TrueNAS app, the Postgres data storage is the last volume at the bottom. You can start a shell for the database container under "Workloads" in the app. There you can run psql --dbname=immich --username=postgres -c "delete from system_metadata where key like 'memories-state'; truncate table memories cascade;"

Hey dude,

can you help me with that? I can't get it to run. Which workload should I pick? Permissions is the last one on my side. When I click on shell, "/bin/sh" comes up; when I press enter I'm in the shell overview, and when I paste your command in, nothing happens. I tried all the other workloads as well, but it never worked. What are the steps I need to do?

I also got this one: "psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory Is the server running locally and accepting connections on that socket?"

Quarzer avatar Mar 03 '25 15:03 Quarzer

@kikendo there was a bug and it is now fixed.

alextran1502 avatar Mar 03 '25 15:03 alextran1502

I'm having the same issue here, but I'm hosting Immich on TrueNAS (using the provided version) and I don't have a dedicated container for postgres. How can I get this fixed?

In the TrueNAS app, the Postgres data storage is the last volume at the bottom. You can start a shell for the database container under "Workloads" in the app. There you can run psql --dbname=immich --username=postgres -c "delete from system_metadata where key like 'memories-state'; truncate table memories cascade;"

Hey dude,

can you help me with that? I can't get it to run. Which workload should I pick? Permissions is the last one on my side. When I click on shell, "/bin/sh" comes up; when I press enter I'm in the shell overview, and when I paste your command in, nothing happens. I tried all the other workloads as well, but it never worked. What are the steps I need to do?

I also got this one: "psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory Is the server running locally and accepting connections on that socket?"

It's the same issue I have. There is no postgres container in Workloads.

I have 5 containers:

  • server
  • machine-learning
  • pgvecto
  • redis
  • permissions

AndresSalinasB avatar Mar 03 '25 16:03 AndresSalinasB

@AndresSalinasB I think you need to access the pgvecto container.

alextran1502 avatar Mar 03 '25 16:03 alextran1502

@AndresSalinasB I think you need to access the pgvecto container.

Hey,

I tried that but got "psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: role "postgres" does not exist"

Quarzer avatar Mar 03 '25 17:03 Quarzer

Same result for me when trying to use the pgvecto container.

AndresSalinasB avatar Mar 03 '25 17:03 AndresSalinasB

@AndresSalinasB I think you need to access to the pgvecto container

Hey,

I tried that but got "psql: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: FATAL: role "postgres" does not exist"

On TrueNAS SCALE (community app), I used the command Alex provided but changed the user to the immich user in the pgvecto container, and it worked.

schmitzkr avatar Mar 03 '25 20:03 schmitzkr

That's the trick! Thanks a lot!

For reference, I used this command: docker exec ix-immich-pgvecto-1 psql --dbname=immich --username=immich -c "delete from system_metadata where key like 'memories-state'; truncate table memories cascade;"

AndresSalinasB avatar Mar 03 '25 20:03 AndresSalinasB

This worked for me in the TrueNAS SCALE pgvecto container: psql --dbname=immich --username=immich -c "delete from system_metadata where key like 'memories-state'; truncate table memories cascade;"

Quarzer avatar Mar 03 '25 21:03 Quarzer

The suggested solution doesn't work on NixOS:

 > psql --dbname=immich --username=postgres -c "delete from system_metadata where key like 'memories-state'; truncate table memories cascade;" 

psql: error: connection to server on socket "/run/postgresql/.s.PGSQL.5432" failed: FATAL:  Peer authentication failed for user "postgres"

I've also tried with --username=immich

cch000 avatar Mar 07 '25 00:03 cch000

@cch000 Updating to the latest version, 1.129.0, also fixes this issue.
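
If you still want to run the cleanup against a native Postgres that uses peer authentication (as the error above suggests), running psql as the postgres system user normally satisfies the peer check; a hedged sketch for a host-installed Postgres, not an Immich-documented procedure:

# peer authentication maps the OS user to the database role, so run psql as the postgres OS user
sudo -u postgres psql --dbname=immich -c "delete from system_metadata where key like 'memories-state'; truncate table memories cascade;"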

alextran1502 avatar Mar 07 '25 01:03 alextran1502

Great! Thank you

cch000 avatar Mar 07 '25 10:03 cch000

So I've run the SQL query and regenerated the memories, but when I do, they end up duplicated three times. I am using the latest versions.

sakibstark11 avatar May 03 '25 12:05 sakibstark11

@sakibstark11 I created a new issue, since this one is closed and really long: https://github.com/immich-app/immich/issues/18215

RogerSik avatar May 11 '25 14:05 RogerSik