
feat(monitoring): add support for additional labels in PodMonitor configuration

Dashing-Nelson opened this issue 5 months ago

This pull request introduces support for adding custom labels to the generated PodMonitor resource in the cluster Helm chart, making it easier to integrate with various monitoring stacks and workflows. Additionally, the logic for PodMonitor and its relabeling configuration has been refactored to improve maintainability, with the relevant configuration now handled in a dedicated template.

PodMonitor labels support and configuration refactoring:

  • Added a new labels field under cluster.monitoring.podMonitor in values.yaml, values.schema.json, and documented it in README.md, allowing users to specify additional labels for the PodMonitor resource.
  • Updated example and test manifests to demonstrate usage of the new labels field for PodMonitor, including sample team and environment labels.
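The new field might look like this in values.yaml (a sketch based on the description above; the exact key names and the sample team/environment labels follow the bullet points, but the real schema may differ):

```yaml
cluster:
  monitoring:
    enabled: true
    podMonitor:
      # Additional labels merged into the generated PodMonitor's metadata,
      # e.g. for Prometheus Operator instance selection or team ownership.
      labels:
        team: platform
        environment: production
```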

Refactoring PodMonitor resource generation:

  • Moved PodMonitor resource creation and relabeling configuration to a new dedicated template file podmonitor.yaml, improving separation of concerns and maintainability.
  • Removed PodMonitor relabeling and metricRelabeling configuration from cluster.yaml, centralizing this logic in the new template.
  • Updated PodMonitor enablement logic in cluster.yaml to defer resource creation to the new template, setting enablePodMonitor to false in the main spec.
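A dedicated template along these lines is one way to render the PodMonitor (a hypothetical sketch of podmonitor.yaml; the helper names, value paths, and port name are illustrative, not the chart's actual code):

```yaml
{{- if .Values.cluster.monitoring.enabled }}
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: {{ include "cluster.fullname" . }}
  labels:
    {{- include "cluster.labels" . | nindent 4 }}
    {{- with .Values.cluster.monitoring.podMonitor.labels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  selector:
    matchLabels:
      cnpg.io/cluster: {{ include "cluster.fullname" . }}
  podMetricsEndpoints:
    - port: metrics
      {{- with .Values.cluster.monitoring.podMonitor.relabelings }}
      relabelings:
        {{- toYaml . | nindent 8 }}
      {{- end }}
{{- end }}
```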

closes: https://github.com/cloudnative-pg/charts/issues/647

Dashing-Nelson avatar Nov 13 '25 10:11 Dashing-Nelson

Any chance PR will be merged?

langovoi avatar Jan 08 '26 13:01 langovoi

@itay-grudev could you please take a look at this PR when you have a moment?

jessebot avatar Jan 23 '26 07:01 jessebot

@Dashing-Nelson I'll fix the test failure.

itay-grudev avatar Mar 08 '26 20:03 itay-grudev

Upgrading with monitoring already enabled would cause:

helm upgrade --install database \
  --namespace database \
  --create-namespace \
  --reuse-values \
  ~/Projects/oss/cnpg/charts/charts/cluster
Error: UPGRADE FAILED: Unable to continue with update: PodMonitor "database-cluster" in namespace "database" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "database"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "database"

Which makes sense. Even though we set enablePodMonitor: false, the operator doesn't remove the old PodMonitor before Helm attempts to apply the new Helm-managed one.
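A standard workaround for this class of error is to let Helm adopt the existing operator-managed resource by adding the ownership metadata named in the error message (a sketch; the resource, namespace, and release names below are taken from the error output above):

```shell
# Tell Helm it owns the existing PodMonitor so the upgrade can adopt it
kubectl -n database label podmonitor database-cluster \
  app.kubernetes.io/managed-by=Helm --overwrite
kubectl -n database annotate podmonitor database-cluster \
  meta.helm.sh/release-name=database \
  meta.helm.sh/release-namespace=database --overwrite
```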

My only solution then would be adding a note in the release log that you need to set --set cluster.monitoring.enabled=false when upgrading, then re-enable it after the upgrade.

Before I merge this, does anyone have a better idea on how to solve the upgrade problem?

itay-grudev avatar Mar 08 '26 21:03 itay-grudev


@itay-grudev I can think of:

Pre-upgrade hook to delete the PodMonitor. The only downside I see is that the hook needs RBAC permissions to delete PodMonitors, which adds complexity. Alternatively, instead of deleting, the hook could patch the existing operator-managed PodMonitor with the required Helm labels/annotations so Helm can adopt it. That could arguably be the cleanest solution. WDYT?
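The patching variant of that hook could be sketched as a pre-upgrade Job (a hypothetical manifest; the names, image, and service account are illustrative, and a matching Role/RoleBinding granting patch on podmonitors would still be needed):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ include "cluster.fullname" . }}-adopt-podmonitor
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      serviceAccountName: {{ include "cluster.fullname" . }}-adopt-podmonitor
      restartPolicy: Never
      containers:
        - name: adopt
          image: bitnami/kubectl:latest
          command:
            - /bin/sh
            - -c
            - |
              # Stamp the operator-managed PodMonitor with Helm's ownership
              # metadata so the upgrade can adopt it instead of failing.
              kubectl -n {{ .Release.Namespace }} label podmonitor \
                {{ include "cluster.fullname" . }} \
                app.kubernetes.io/managed-by=Helm --overwrite
              kubectl -n {{ .Release.Namespace }} annotate podmonitor \
                {{ include "cluster.fullname" . }} \
                meta.helm.sh/release-name={{ .Release.Name }} \
                meta.helm.sh/release-namespace={{ .Release.Namespace }} --overwrite
```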

Dashing-Nelson avatar Mar 09 '26 15:03 Dashing-Nelson

I was missing the rather obvious solution - to rename it.

itay-grudev avatar Mar 12 '26 21:03 itay-grudev