Bitnami Legacy
Heads up, have you heard about Bitnami charging for images? It's slightly confusing, but I think the image references need to be updated to point to bitnamilegacy.
I've just checked the GitLab Helm chart repository (a good reference point) and they have already updated their references.
https://github.com/bitnami/charts/issues/35164
IMHO migrating to bitnamilegacy is just a short-term solution. It will make sure that current code using tagged releases from Bitnami continues to work; however, there won't be any new releases (only the latest tag, which is hardly feasible).
So ultimately I believe this is about migrating away from Bitnami images altogether.
Yeah... there's not really an easy way forward here. I've migrated my home-ops repo to deploy the official Docker images using bjw-s's app-template chart. Most of the "magic" of these charts was passing config between the Bitnami sub-charts, so a lot of chart changes would need to happen, and then everyone's instances would probably need to be wiped and recreated 🙁 I've started using app-template for everything and just haven't had the time to keep these charts up to date lately, unfortunately.
Yeah it's a big change, and I don't blame you for not having time.
@gabe565 I totally understand this. We had the same problem and "chose" to become a paying Bitnami customer. Moving forward, I'm not sure how Bitnami charts can be dependencies in other publicly available charts anymore. So from my perspective it would also be okay if the dependencies were removed and your chart required an external database, or if the dependencies were replaced with other publicly available ones. This contradicts the idea of Helm charts, but with Bitnami's policy I don't see another possibility.
For us, I think I will migrate from the included dependency to a self-managed external DB.
Given that these charts (both Bitnami's and this repo's) are homelab-focused, I think migrating to bitnamilegacy makes sense.
As far as I understand, this will just freeze the underlying database at a certain version/configuration.
At least in my own homelab, most of my apps don't depend on many new database features and mostly just care that there is a semi-recent version of Postgres/MariaDB/SQLite to store their data. I would expect that most of the applications I'm running (with the exception of Immich, which I've already migrated to its own custom DB) would continue to work for ~3-5 years at least with an older version of an existing database.
That should give most people plenty of time to migrate to a simple external self-hosted database, I would think.
Am I missing anything? I think this should be fine long term. At some point we could make a major version bump in the helm chart versions that requires an external database to be defined. I'd be happy to help with that for a couple of the applications I use.
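To make "requires an external database to be defined" concrete, the values change would roughly look like the sketch below: disable the bundled Bitnami sub-chart and point the app at a database you run yourself. This is only a hypothetical sketch; the hostname, secret name, and env variable names are placeholders, and the exact keys differ per chart (later comments in this thread show working versions for Tandoor and Bookstack).

    postgresql:
      enabled: false  # stop deploying the bundled Bitnami sub-chart
    env:
      POSTGRES_HOST: postgres.databases.svc.cluster.local  # placeholder: your self-hosted/external Postgres
      POSTGRES_PORT: 5432
      POSTGRES_DB: app
      POSTGRES_USER: app
      POSTGRES_PASSWORD:
        valueFrom:
          secretKeyRef:
            name: app-db-credentials  # placeholder: a secret you manage yourself
            key: password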
Because of the hostile behaviour of Bitnami, I am not ready to keep anything with runtime dependencies on them running, so I went down another route:
- database migration to cloudnative-pg (using the cnpg-operator v0.26.0)
- disabled postgres in values.yaml (postgresql.enabled=false)
- updated env to reflect the changes (set DB_ENGINE, import POSTGRES_* from the secret created by cloudnative-pg)
Here are the details if anyone is interested.
CloudNativePG cluster (with a bootstrap section for the database import):
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-tandoor
spec:
  instances: 1
  bootstrap:
    initdb:
      import:
        type: microservice
        databases:
          - tandoor
        source:
          externalCluster: cluster-tandoor-legacy
  storage:
    size: 8Gi
  # plugins:
  #   - name: barman-cloud.cloudnative-pg.io
  #     isWALArchiver: true
  #     parameters:
  #       barmanObjectName: minio-store
  externalClusters:
    - name: cluster-tandoor-legacy
      connectionParameters:
        # Use the correct IP or host name for the source database
        host: tandoor-postgresql.tandoor.svc.cluster.local
        user: postgres
        dbname: tandoor
      password:
        name: tandoor-postgresql
        key: postgres-password
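For anyone wondering where the cluster-tandoor-app secret referenced in the values below comes from: CloudNativePG automatically generates a <cluster-name>-app Secret for the application user it bootstraps, and the chart just reads from it. Roughly it looks like this (an illustrative sketch only, not something you apply yourself; the exact keys and names depend on your CNPG version and initdb settings):

    apiVersion: v1
    kind: Secret
    metadata:
      name: cluster-tandoor-app   # created automatically by CloudNativePG
    type: kubernetes.io/basic-auth
    stringData:
      host: cluster-tandoor-rw    # read-write service of the new cluster
      port: "5432"
      dbname: app
      username: app
      password: <generated>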
New values.yaml
...
env:
  TZ: Europe/Berlin
  SECRET_KEY: YuHD35FSCkebIYOI
  POSTGRES_HOST:
    valueFrom:
      secretKeyRef:
        key: host
        name: cluster-tandoor-app
  POSTGRES_PASSWORD:
    valueFrom:
      secretKeyRef:
        key: password
        name: cluster-tandoor-app
  POSTGRES_USER:
    valueFrom:
      secretKeyRef:
        key: username
        name: cluster-tandoor-app
  POSTGRES_DB: app
  DB_ENGINE: django.db.backends.postgresql
...
postgresql:
  enabled: false
I asked my questions before trying to work through this on my own, but I was able to get it working. The cloudnative-pg approach is pretty good, and I have the chart deployed in my test cluster this way.
Migrating an existing database is the one part where some clearer direction on how to manage it would be helpful.
Adding a note here about Bookstack. It doesn't support PostgreSQL, but a similar approach can work, as there is a mariadb-operator chart that provides something similar to cloudnative-pg, just not quite as slick (and thus not as simple a replacement).
You have to install the mariadb-operator and mariadb-operator-crd charts. I used the defaults in my testing, with the exception of enabling the use of cert-manager.
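Since the rest of this thread is Flux-based (helmrelease.yaml), installing the operator could look roughly like this. Treat it as a sketch: the repository URL and chart names are what I'd expect from the mariadb-operator docs, I've left the chart versions unpinned, and the cert-manager toggle lives in the chart values (the exact key depends on the chart version, so check its values.yaml).

    apiVersion: source.toolkit.fluxcd.io/v1
    kind: HelmRepository
    metadata:
      name: mariadb-operator
      namespace: flux-system
    spec:
      interval: 1h
      url: https://helm.mariadb.com/mariadb-operator
    ---
    apiVersion: helm.toolkit.fluxcd.io/v2
    kind: HelmRelease
    metadata:
      name: mariadb-operator-crds
      namespace: mariadb-operator
    spec:
      interval: 1h
      chart:
        spec:
          chart: mariadb-operator-crds   # CRDs ship as a separate chart
          sourceRef:
            kind: HelmRepository
            name: mariadb-operator
            namespace: flux-system
    ---
    apiVersion: helm.toolkit.fluxcd.io/v2
    kind: HelmRelease
    metadata:
      name: mariadb-operator
      namespace: mariadb-operator
    spec:
      interval: 1h
      dependsOn:
        - name: mariadb-operator-crds
      chart:
        spec:
          chart: mariadb-operator
          sourceRef:
            kind: HelmRepository
            name: mariadb-operator
            namespace: flux-system
      # values: defaults, apart from whatever key enables cert-manager for the
      # webhook certificates in your chart version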
The minimum new manifests needed to get the database working (fresh install) were these:
---
apiVersion: k8s.mariadb.com/v1alpha1
kind: MariaDB
metadata:
  name: mariadb-bookstack
spec:
  rootPasswordSecretKeyRef:
    name: mariadb-bookstack
    key: root-password
  username: bookstack
  passwordSecretKeyRef:
    name: mariadb-bookstack
    key: password
  database: app
  storage:
    size: 1Gi
  replicas: 1
  galera:
    enabled: false  # this is for if you want to create a cluster and scale up your replicas
  metrics:
    enabled: true
---
apiVersion: k8s.mariadb.com/v1alpha1
kind: Grant
metadata:
  name: grant
spec:
  mariaDbRef:
    name: mariadb-bookstack
  privileges:
    - "ALL PRIVILEGES"
  database: "app"
  table: "*"
  username: bookstack
  grantOption: true
  host: "%"
  # Delete the resource in the database whenever the CR gets deleted.
  # Alternatively, you can specify Skip in order to omit deletion.
  cleanupPolicy: Delete
  requeueInterval: 10h
  retryInterval: 30s
As you might have surmised, I had to create a secret with "password" and "root-password" keys in it (see the sketch below). The app was getting errors connecting to the DB even though I could connect manually when exec-ing into the Bookstack pod, but adding the Grant manifest cleared that up. I didn't think to note what the default permissions were for the account on the DB.
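For completeness, the pre-created secret is just something like this (placeholder values; in a real setup you'd want it SOPS-encrypted or managed by external-secrets rather than committed in plain text):

    apiVersion: v1
    kind: Secret
    metadata:
      name: mariadb-bookstack
    stringData:
      root-password: change-me-root       # referenced by rootPasswordSecretKeyRef above
      password: change-me-bookstack       # referenced by passwordSecretKeyRef above and by DB_PASSWORD below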
Finally, the changes to helmrelease.yaml:
env:
  TZ: 'America/New_York'
  DB_HOST: mariadb-bookstack
  DB_PASSWORD:
    valueFrom:
      secretKeyRef:
        key: password
        name: mariadb-bookstack
  DB_USERNAME: bookstack
  DB_DATABASE: app
  DB_PORT: 3306
mariadb:
  enabled: false
That got a fresh install up and running. Adjust as needed for your environment, or to migrate your existing DB during init (the mariadb-operator docs have some examples for this). There is a bit of documentation about how to set up backups and all kinds of other goodies, but this is, I think, a minimal setup.
To use bitnamilegacy, use the following values:
global:
  # Changes needed for Redis and PostgreSQL
  security:
    allowInsecureImages: true
postgresql:
  image:
    repository: bitnamilegacy/postgresql
redis:
  image:
    repository: bitnamilegacy/redis
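The snippet above covers charts that bundle PostgreSQL and Redis. For charts that pull in the MariaDB sub-chart instead (Bookstack, for example), the same pattern should apply, assuming the bitnamilegacy org also mirrors that image:

    global:
      security:
        allowInsecureImages: true   # needed because the image no longer matches the sub-chart's default
    mariadb:
      image:
        repository: bitnamilegacy/mariadb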