k3s: Seeded and migrated, but not starting
Hello there!
It took quite a dance to get this to launch properly, but I did... almost. After running the seeding and then the alterations, the app still believes that not everything has been applied, although this is literally the same image.
Here are the logs:
```
> kubectl logs -f -n logto deployment.apps/logto-sys
Defaulted container "app" out of: app, db
> cli
> logto db seed --swe
info Seeding skipped
> alteration
> logto db alt deploy latest
info Found 0 alteration to deploy
> cli
> logto connector add --official
- Fetch official connector list
info ✔ Fetch official connector list
info Found 26 official connectors
info Fetch connector metadata
info ✔ Added @logto/connector-aws-ses v1.1.1
info ✔ Added @logto/connector-apple v1.3.0
info ✔ Added @logto/connector-facebook v1.3.0
info ✔ Added @logto/connector-oidc v1.2.0
info ✔ Added @logto/connector-mailgun v1.2.1
info ✔ Added @logto/connector-github v1.3.0
info ✔ Added @logto/connector-smsaero v1.2.1
info ✔ Added @logto/connector-google v1.3.0
info ✔ Added @logto/connector-discord v1.3.0
info ✔ Added @logto/connector-smtp v1.1.2
info ✔ Added @logto/connector-azuread v1.2.0
info ✔ Added @logto/connector-naver v1.2.0
info ✔ Added @logto/connector-oauth v1.2.0
info ✔ Added @logto/connector-kakao v1.2.0
info ✔ Added @logto/connector-aliyun-dm v1.1.1
info ✔ Added @logto/connector-wechat-web v1.3.0
info ✔ Added @logto/connector-aliyun-sms v1.1.1
info ✔ Added @logto/connector-twilio-sms v1.1.1
info ✔ Added @logto/connector-tencent-sms v1.1.1
info ✔ Added @logto/connector-feishu-web v1.2.0
info ✔ Added @logto/connector-sendgrid-email v1.1.1
info ✔ Added @logto/connector-alipay-web v1.3.0
info ✔ Added @logto/connector-alipay-native v1.2.0
info ✔ Added @logto/connector-wechat-native v1.2.0
info ✔ Added @logto/connector-wecom v0.2.0
info ✔ Added @logto/connector-saml v1.1.1
info Finished
info Restart your Logto instance to get the changes reflected.
> start
> cd packages/core && NODE_ENV=production node .
(node:385) ExperimentalWarning: Importing JSON modules is an experimental feature and might change at any time
(Use `node --trace-warnings ...` to show where the warning was created)
warn [CACHE] No Redis client initialized, skipping
error Error while initializing app:
error Error: Row-level security has to be enforced on EVERY business table when starting Logto.
Found following table(s) without RLS: _logto_configs
Did you forget to run `npm cli db alteration deploy`?
    at checkRowLevelSecurity (file:///etc/logto/packages/core/build/tenants/utils.js:47:15)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async Promise.all (index 3)
    at async file:///etc/logto/packages/core/build/index.js:24:5
```
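For what it's worth, the startup check tells you which tables it considers unprotected; you can cross-check from inside the `db` container with a catalog query like this (a sketch, assuming Logto's tables live in the default `public` schema):

```sql
-- Lists ordinary tables in "public" that do NOT have row-level security enabled;
-- "relrowsecurity" is the pg_class flag the RLS guard is effectively checking.
SELECT c.relname
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relkind = 'r'
  AND NOT c.relrowsecurity;
```

If `_logto_configs` shows up here but the alteration run reports nothing to deploy, the schema and the alteration state have drifted apart.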
And here is the whole deployment (it is very long):
Deployment YAML:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: logto
---
apiVersion: v1
kind: Secret
metadata:
  name: psql-creds
  namespace: logto
type: Opaque
#data:
stringData:
  POSTGRES_USER: <snip>
  POSTGRES_PASSWORD: <snip>
  POSTGRES_DB: <snip>
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logto-env
  namespace: logto
data:
  # kubectl port-forward -n logto deployments/logto-sys -c app 3002
  # easier way tho... ?
  ADMIN_ENDPOINT: http://localhost:3002
  ENDPOINT: https://auth.birb.it
  TRUST_PROXY_HEADER: "true"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logto-startup
  namespace: logto
data:
  start.sh: |-
    #!/bin/sh
    set -eu
    export DB_URL=postgres://${POSTGRES_USER}:${POSTGRES_PASSWORD}@localhost/${POSTGRES_DB}
    export CI=true
    npm run cli db seed -- --swe
    npm run alteration deploy latest
    npm run cli connector add -- --official
    npm run start
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: psql-pvc
  namespace: logto
spec:
  storageClassName: nfs-bunker
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: logto-sys
  namespace: logto
spec:
  selector:
    matchLabels:
      app: logto
  template:
    metadata:
      labels:
        app: logto
    spec:
      volumes:
        - name: logto-connectors-vol
          emptyDir: {}
        - name: psql-vol
          persistentVolumeClaim:
            claimName: psql-pvc
        - name: logto-startup-vol
          configMap:
            name: logto-startup
      containers:
        - name: app
          image: ghcr.io/logto-io/logto
          command:
            - /bin/sh
            - /opt/start.sh
          envFrom:
            - configMapRef: { name: logto-env }
            - secretRef: { name: psql-creds }
          volumeMounts:
            - name: logto-connectors-vol
              mountPath: /etc/logto/packages/core/connectors
            - name: logto-startup-vol
              mountPath: /opt
          ports:
            - name: http-admin
              containerPort: 3002
              protocol: TCP
            - name: http-auth
              containerPort: 3001
              protocol: TCP
        - name: db
          image: postgres:16-alpine
          envFrom:
            - secretRef: { name: psql-creds }
          volumeMounts:
            - name: psql-vol
              mountPath: /var/lib/postgresql/data
---
apiVersion: v1
kind: Service
metadata:
  name: logto-svc
  namespace: logto
spec:
  type: ClusterIP
  ports:
    - targetPort: http-auth
      name: http-auth
      port: 3001
  selector:
    app: logto
---
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: logto-tr
  namespace: logto
spec:
  entryPoints:
    - websecure
  routes:
    - match: Host(`auth.birb.it`)
      kind: Rule
      services:
        - name: logto-svc
          port: http-auth
          scheme: http
          passHostHeader: true
```
The seeding process is skipped because it looks like you are using a legacy Logto DB instance. You will need a brand-new, empty DB for the seeding.
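For anyone landing here, one way to get that clean slate is to drop and recreate the database before seeding (a sketch: `logto` is a placeholder for your `POSTGRES_DB` value, run it as a superuser connected to a maintenance database such as `postgres`, and note that it destroys all existing data):

```sql
-- Destructive: wipes the old Logto database so the seeder sees an empty one.
DROP DATABASE IF EXISTS logto;
CREATE DATABASE logto;
```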
I flattened the whole deployment and pinned it to the `:1.15` tag. Well, now it works! I also had to edit my "init script", however, as Logto started faster than my Postgres; it would be neat to have a "retry" feature in the startup to counteract slow DBs (mine is stored on a NAS shared between all my nodes, and it isn't particularly fast...).
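In the meantime, a small wait loop at the top of `start.sh` can paper over a slow database. This is a sketch: `wait_for` is a helper name of my own, and it assumes `pg_isready` (from the Postgres client tools) exists in the Logto image, which it may not; a `psql -c 'SELECT 1'` loop works the same way.

```shell
#!/bin/sh
# Retry a command until it succeeds, up to a fixed number of attempts,
# sleeping between tries. Returns non-zero if all attempts fail.
wait_for() {
  attempts=$1; shift
  i=1
  until "$@"; do
    [ "$i" -ge "$attempts" ] && return 1
    i=$((i + 1))
    sleep 1
  done
}

# Hypothetical usage at the top of start.sh, before seeding:
# wait_for 30 pg_isready -h localhost -U "$POSTGRES_USER" -d "$POSTGRES_DB"
```

The more idiomatic Kubernetes fix is an init container that performs the same readiness check before the `app` container starts, but the inline loop keeps the manifest small.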
Thanks for the pointer! ^^