percona-postgresql-operator
version 2.3.1 doesn't support PostgreSQL 16?
Report
pg-operator seems to be incompatible with PostgreSQL 16, which pg-db uses by default?
More about the problem
2024-01-26T20:43:43.620Z ERROR Reconciler error {"controller": "perconapgcluster", "controllerGroup": "pgv2.percona.com", "controllerKind": "PerconaPGCluster", "PerconaPGCluster": {"name":"name-master","namespace":"name"}, "namespace": "name", "name": "name-master", "reconcileID": "b54a138a-b749-43a3-8550-f3f86bbe0cb6", "error": "update/create PostgresCluster: PostgresCluster.postgres-operator.crunchydata.com \"name-master\" is invalid: spec.postgresVersion: Invalid value: 16: spec.postgresVersion in body should be less than or equal to 15", "errorVerbose": "PostgresCluster.postgres-operator.crunchydata.com \"name-master\" is invalid: spec.postgresVersion: Invalid value: 16: spec.postgresVersion in body should be less than or equal to 15\nupdate/create PostgresCluster\ngithub.com/percona/percona-postgresql-operator/percona/controller/pgcluster.(*PGClusterReconciler).Reconcile\n\t/go/src/github.com/percona/percona-postgresql-operator/percona/controller/pgcluster/controller.go:241\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:119\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:316\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:266\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2\n\t/go/pkg/mod/sigs.k8s.io/[email protected]/pkg/internal/controller/controller.go:227\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:1650"}
Steps to reproduce
- install the pg-operator helm chart (the only configuration is enabling watchAllNamespaces)
- install the pg-db helm chart (no configuration)
- the cluster is not created; observe the errors in the operator logs (roughly the commands sketched below)
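For reference, those steps are roughly equivalent to the following (release names and namespaces here are my own choice; watchAllNamespaces is the chart value mentioned above):

helm repo add percona https://percona.github.io/percona-helm-charts/
helm repo update
helm install pg-operator percona/pg-operator --namespace pg-operator --create-namespace --set watchAllNamespaces=true
helm install pg-db percona/pg-db --namespace pg-test --create-namespace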
Versions
- Kubernetes v1.28.5+k3s1
- Operator 1.3.1
- Database 16
- pg-operator chart 2.3.3
- pg-db chart 2.3.2
Anything else?
running:
Image: registry-1.percona.com/percona/percona-postgresql-operator:2.3.1
Image ID: registry-1.percona.com/percona/percona-postgresql-operator@sha256:a6495c8e13d9fe3f50df12219e9d9cf64fa610fe5680a0a78d0e5c4fb3be2456
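A quick way to confirm which operator image is actually running (the namespace is an assumption, adjust to your install):

kubectl get pods -n pg-operator -o jsonpath='{range .items[*]}{.spec.containers[*].image}{"\n"}{end}'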
Also, this says support for 16.x was added in 2.3.0, so 😕
https://docs.percona.com/percona-operator-for-postgresql/2.0/ReleaseNotes/Kubernetes-Operator-for-PostgreSQL-RN2.3.0.html
Hello @AdamJacobMuller.
I was not able to reproduce it.
- Deployed the operator from the helm chart, enabling watchAllNamespaces
- Deployed the default cluster from the chart.
Cluster is up and running with PG16.
Operator 1.3.1
what is it?
Hi,
Sorry about that, I meant operator image 2.3.1.
Looking through the code, I'm not sure exactly where that maximum-version check gets its value from (I found where it does the check, but I can't see where max=15 comes from). I assume it can't come from the code directly, because that wouldn't explain how the 2.3.1 operator works for you but not for me.
Is it reading the maximum supported version from the CRD or something, and perhaps I have a stale version of that?
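For what it's worth, that error message looks like OpenAPI schema validation on the PostgresCluster CRD itself, which would fit the stale-CRD theory. A sketch of how to check what ceiling the installed CRD actually enforces (the jsonpath is my assumption based on the apiextensions/v1 CRD layout):

kubectl get crd postgresclusters.postgres-operator.crunchydata.com \
  -o jsonpath='{.spec.versions[*].schema.openAPIV3Schema.properties.spec.properties.postgresVersion.maximum}'

If that prints 15 while a fresh install of the 2.3.x CRD prints 16, the CRD on the cluster would be the stale piece.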
Hi,
Testing this out more, I have a very simple reproducer...
I also did the exact same steps on another cluster (which has never had the Percona Postgres operator installed), with identical results.
@AdamJacobMuller thanks! Still can't reproduce it. Could you please run helm search repo percona and show the output so we can check the versions?
@tplavcic any thoughts?
# helm repo list |grep percona
percona https://percona.github.io/percona-helm-charts/
# helm search repo percona
NAME                           CHART VERSION  APP VERSION  DESCRIPTION
stable/percona                 1.2.3          5.7.26       DEPRECATED - free, fully compatible, enhanced, ...
stable/percona-xtradb-cluster  1.0.8          5.7.19       DEPRECATED - free, fully compatible, enhanced, ...
percona/pg-db                  2.3.4          2.3.1        A Helm chart to deploy the PostgreSQL database ...
percona/pg-operator            2.3.3          2.3.1        A Helm chart to deploy the Percona Operator for...
percona/pmm                    1.3.10         2.41.1       A Helm chart for Percona Monitoring and Managem...
percona/ps-db                  0.6.5          0.6.0        A Helm chart for installing Percona Server Data...
percona/ps-operator            0.6.1          0.6.0        A Helm chart for Deploying the Percona Operator...
percona/psmdb-db               1.15.3         1.15.0       A Helm chart for installing Percona Server Mong...
percona/psmdb-operator         1.15.2         1.15.0       A Helm chart for deploying the Percona Operator...
percona/pxc-db                 1.13.6         1.13.0       A Helm chart for installing Percona XtraDB Clus...
percona/pxc-operator           1.13.5         1.13.0       A Helm chart for deploying the Percona Operator...
#
Dumb question: if this is an upgrade, have you upgraded the CRDs during setup? IIRC this is not done by default.
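If the CRD is stale, something like this should refresh it (a sketch; the manifest path and tag are my assumption based on the repo layout, adjust to your operator version):

kubectl apply --server-side -f https://raw.githubusercontent.com/percona/percona-postgresql-operator/v2.3.1/deploy/crd.yaml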
No, I can reproduce this on a fresh cluster. I also removed all (I think) of the relevant CRDs; I put that in the screenshot above.
Actually, I had a similar issue where I had a CRD left over from Crunchy Data. Percona uses the CRD postgresclusters.postgres-operator.crunchydata.com. While all the other CRDs are separate, this one can conflict with Percona's. This isn't ideal, especially if you want to move from Crunchy to Percona and run both in parallel; it can cause a whole lot of issues if not handled correctly. I'd suggest changing the CRD name to postgresclusters.postgres-operator.percona or something similar, so both operators could run in parallel while migrating.

A side note: if you have running clusters under Crunchy Data (or anything else that uses this CRD), do NOT remove the CRD, as doing so will most likely delete all the clusters along with it.
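A quick way to tell who installed the CRD that's currently on a cluster is to look at its metadata: a Helm-managed one should carry the app.kubernetes.io/managed-by: Helm label and the meta.helm.sh/release-name annotation, while one left behind by Crunchy's installer won't. A sketch:

kubectl get crd postgresclusters.postgres-operator.crunchydata.com \
  -o jsonpath='{.metadata.labels}{"\n"}{.metadata.annotations}{"\n"}'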