Docker version with pgsql fails on image update
Describe the bug
I'm running neosmemo/memos@latest on my Synology NAS with DSM 7.2.2 as part of a container project that includes postgres. It typically works great, but every time I update the image it dies and needs to be completely set up again.
After the most recent update it won't start yet again, and this is what's in the log:
```
2024/10/31 05:39:43 WARN failed to find migration history in pre-migrate error="dial tcp 143.244.220.150:5432: connect: connection timed out"
```
That IP address isn't mine; it appears to be a DigitalOcean IP. Maybe that has a simple explanation, but it certainly warrants one, imo.
Steps to reproduce
Here's my compose file in case it's useful. I removed other containers that share the postgres container; they work fine when they are updated.
```yaml
version: '3.9'

services:
  db:
    container_name: postgresql
    image: postgres
    mem_limit: 256m
    cpu_shares: 768
    healthcheck:
      test: ["CMD", "pg_isready", "-q", "-d", "postgres", "-U", "postgres"]
    networks:
      - postgres-network
    user: "1026:100"
    volumes:
      - /volume1/docker/postgresql/data:/var/lib/postgresql/data:rw
    ports:
      - 2665:5432
    restart: on-failure:5

  memos:
    container_name: memos
    image: neosmemo/memos:latest
    depends_on:
      - db
    environment:
      - MEMOS_DRIVER=postgres
      - MEMOS_DSN=user=memos password=<REDACTED> dbname=memosdb host=db sslmode=disable
      - TZ=America/Chicago
    ports:
      - 5230:5230
    networks:
      - postgres-network
    restart: on-failure:5

networks:
  postgres-network:
    driver: bridge
```
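One thing worth noting about this file, though it may or may not be the cause here: the short-form `depends_on` only orders container startup; it does not wait for the `db` service's `pg_isready` healthcheck to pass, so memos can attempt its migration before postgres is accepting connections. A possible (untested) variation using the long-form syntax, which reuses the healthcheck already defined above:

```yaml
services:
  memos:
    depends_on:
      db:
        condition: service_healthy   # wait for pg_isready to succeed before starting memos
```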
The version of Memos you're using.
v0.23.0
Screenshots or additional context
```
2024/10/31 05:45:17 ERROR failed to migrate error="dial tcp 143.244.220.150:5432: connect: connection timed out
failed to start transaction
github.com/usememos/memos/store.(*Store).preMigrate
	/backend-build/store/migrator.go:140
github.com/usememos/memos/store.(*Store).Migrate
	/backend-build/store/migrator.go:38
main.init.func1
	/backend-build/bin/memos/main.go:61
github.com/spf13/cobra.(*Command).execute
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:989
github.com/spf13/cobra.(*Command).ExecuteC
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:1117
github.com/spf13/cobra.(*Command).Execute
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:1041
main.main
	/backend-build/bin/memos/main.go:171
runtime.main
	/usr/local/go/src/runtime/proc.go:272
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1700
failed to pre-migrate
github.com/usememos/memos/store.(*Store).Migrate
	/backend-build/store/migrator.go:39
main.init.func1
	/backend-build/bin/memos/main.go:61
github.com/spf13/cobra.(*Command).execute
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:989
github.com/spf13/cobra.(*Command).ExecuteC
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:1117
github.com/spf13/cobra.(*Command).Execute
	/go/pkg/mod/github.com/spf13/[email protected]/command.go:1041
main.main
	/backend-build/bin/memos/main.go:171
runtime.main
	/usr/local/go/src/runtime/proc.go:272
runtime.goexit
	/usr/local/go/src/runtime/asm_amd64.s:1700"
2024/10/31 05:43:10 WARN failed to find migration history in pre-migrate error="dial tcp 143.244.220.150:5432: connect: connection timed out"
2024/10/31 05:39:43 WARN failed to find migration history in pre-migrate error="dial tcp 143.244.220.150:5432: connect: connection timed out"
```
I'm not very educated in this area and may be misinterpreting what I'm reading, but this looks to me like a possible bug in the migration code.
I can recover my container by starting it up again in the default (localdb) configuration, then migrating to pgsql again. But if I've already provided working pgsql settings (e.g. I have migrated before), it fails at this "pre-migrate" step after an image update.
Someone on reddit suggested the weird IP could be an AT&T DNS server, which if so alleviates that part of the concern.
> 2024/10/31 05:39:43 WARN failed to find migration history in pre-migrate

This is a warning and does not affect the startup process.

> 2024/10/31 05:45:17 ERROR failed to migrate error="dial tcp 143.244.220.150:5432: connect: connection timed out\nfailed to start transaction

This is the main reason it failed to run: it can't connect to your database.
I fully agree that's the problem, or at least part of the problem. But why?
- Every other container in my project starts fully and can connect to the database fine.
- I've tested the connection this is configured to use, and it works.
- I've verified that the database is there and visible to that user with all appropriate rights.
And, to me, this is the kicker: if I follow these steps, memos also works with pgsql just fine, UNTIL the next image update, when I must repeat them again:

- Comment out the `depends_on`, `environment`, and `networks` sections of the memos service in the compose file
- Start the memos container with a local db (it starts fine)
- Stop the container project
- Uncomment the sections I commented out before
- Start the container project
- Memos works fine again using pgsql.
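The comment/uncomment cycle above could also be expressed as a Compose override file, so the main compose file never has to be edited by hand. This is only a sketch of the same workaround, not a fix: `docker-compose.localdb.yml` is a hypothetical file name, and it assumes memos accepts `sqlite` as a driver with a file-path DSN (as its defaults suggest).

```yaml
# docker-compose.localdb.yml (hypothetical override for the recovery step)
services:
  memos:
    environment:
      - MEMOS_DRIVER=sqlite                      # assumption: switch to the bundled local db
      - MEMOS_DSN=/var/opt/memos/memos_prod.db   # assumption: memos' default sqlite path
```

Recovery would then be `docker compose -f docker-compose.yml -f docker-compose.localdb.yml up -d memos`, followed by a plain `docker compose up -d` to return to pgsql.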
```
2024/11/02 03:48:54 INFO end migrate
2024/11/02 03:48:54 INFO start migration currentSchemaVersion=0.22.4 targetSchemaVersion=0.23.1
```
It's as if the database schema migration for each version needs to be run by switching back to localdb first, which seems very odd.
The MySQL database also fails during an upgrade.
In case this helps identify the problem, I have a little more information:
After a recent DSM update I had to restart my NAS (rare), and the memos container refused to start with the same "migrate" reason as above, even though I hadn't updated the memos container image this time; it was already on latest.
Following the steps I mentioned in my comment above resolved the problem again. So the issue still seems (from a user point of view) to be related to startup migration code, but doesn't seem to always be due to an image update.
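One detail from the original compose file that may interact with the reboot case: `restart: on-failure:5` means Docker gives up after five failed attempts, so if postgres comes up slowly after a NAS restart, memos could exhaust its retries and stay down even once the database becomes reachable. A hedged tweak, not confirmed to fix the migration error itself:

```yaml
services:
  memos:
    restart: unless-stopped   # keep retrying instead of giving up after 5 failures
```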
MySQL also failed.
@davidtavarez, I believe the MySQL failure on upgrade might relate to a different issue.
Can you have a look at this issue and see if your scenario aligns with it: https://github.com/usememos/memos/issues/4127
This annoying issue still persists on latest. Please reopen this!
Have you tried using the docker compose file in the docs? And if that works, build from there?
> This annoying issue still persists on latest. Please reopen this!

Try this:
```yaml
volumes:
  mysql_data:
  memos_data:

networks:
  diet_network:

services:
  db:
    image: mysql:8
    container_name: mysql
    hostname: mysql
    restart: unless-stopped
    environment:
      TZ: America/New_York
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
    expose:
      - 3306
    volumes:
      - mysql_data:/var/lib/mysql
    command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci']
    networks:
      - diet_network

  adminer:
    image: ghcr.io/shyim/adminerevo:latest
    container_name: adminer
    hostname: adminer
    restart: unless-stopped
    depends_on:
      - db
    ports:
      - 8090:8080
    environment:
      TZ: America/New_York
      ADMINER_DEFAULT_DRIVER: mysql
      ADMINER_DEFAULT_SERVER: mysql
    networks:
      - diet_network

  memos:
    image: neosmemo/memos:latest
    container_name: memos
    hostname: memos
    restart: always
    depends_on:
      - db
    healthcheck:
      test: wget --no-verbose --tries=1 --spider http://localhost:5230/
    volumes:
      - memos_data:/var/opt/memos
    environment:
      TZ: America/New_York
      MEMOS_DRIVER: mysql
      MEMOS_DSN: 'memos:${MEMOS_DB_PASSWORD}@tcp(mysql)/memos'
    ports:
      - 5230:5230
    networks:
      - diet_network
```
@RoccoSmit there aren't any significant differences between the docs version and what I posted in the original issue, other than it using `stable` instead of `latest` and some variations on the database container, which works fine for every other container that uses pgsql on the same network.
@davidtavarez thanks for weighing in, but I'm using pgsql, not mysql. Though some folks above reported they may have been having the same problems with that.
The MySQL migration was successful.