Impossible to deploy the chart in multiple namespaces
Here is my use case:
I'm on-premise (no cloud access). I have one cluster that will be used for dev, qa, preprod, and so on.
I will deploy all my applications in different namespaces such as dev, qa, etc.
My applications need to pass through dev before going to qa, and through qa before preprod.
So I'll have to deploy the monitoring stack in dev, qa and preprod.
For example:
- dev: monitoring-0.22
- qa: monitoring-0.21
- preprod: monitoring-0.20
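In practice that would be one release per namespace, something like this (just a sketch; the repo alias and chart versions below are placeholders):

helm install monitoring my-repo/monitoring-stack --version 0.22.0 -n dev
helm install monitoring my-repo/monitoring-stack --version 0.21.0 -n qa
helm install monitoring my-repo/monitoring-stack --version 0.20.0 -n preprod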
But that's not possible right now, because some artifacts in prometheus-stack use a ClusterRole.
I tried to fix that by disabling the ClusterRole and switching to a Role instead, but in the end I get this error:
PS C:\workspace\bidgroup\iep\cicd\dev\monitoring-stack> helm --kube-context cluster109 -n default install monitoring-stack .
Error: template: monitoring-stack/charts/monitoring-stack/charts/kube-state-metrics/templates/rolebinding.yaml:2:22: executing "monitoring-stack/charts/monitoring-stack/charts/kube-state-metrics/templates/rolebinding.yaml" at <$.Values.namespaces>: wrong type for value; expected string; got interface {}
PS C:\workspace\bidgroup\iep\cicd\dev\monitoring-stack>
So if I want to use Roles in kube-state-metrics, I would have to hard-code the names of the namespaces I'm going to use. That won't work in my case if I want to use GitOps and CI/CD in my cluster.
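For reference, hard-coding would look roughly like this in the values (only a sketch; the exact keys can vary between kube-state-metrics chart versions, and the error above suggests namespaces has to be a plain comma-separated string rather than a YAML list):

kube-state-metrics:
  rbac:
    useClusterRole: false
  # comma-separated string, not a YAML list
  namespaces: "dev,qa,preprod"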
Solutions could be:
- open a PR in prometheus-stack to add this possibility,
- disable the RBAC creation and create our own Roles/RoleBindings (sketched below).
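For the second option, the per-namespace RBAC would look roughly like this (only a sketch; the names assume a release called monitoring in the dev namespace, and the rules would have to be extended to everything kube-state-metrics actually lists and watches):

# hypothetical hand-written replacement for the chart's ClusterRole/ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: monitoring-kube-state-metrics
  namespace: dev
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "endpoints", "configmaps"]
    verbs: ["list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: monitoring-kube-state-metrics
  namespace: dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: monitoring-kube-state-metrics
subjects:
  - kind: ServiceAccount
    name: monitoring-kube-state-metrics
    namespace: dev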
root@test-pcl4014:~# helm install monitoring openebs-monitoring/openebs-monitoring
NAME: monitoring
LAST DEPLOYED: Tue Jul 6 07:25:56 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
OpenEBS monitoring has been installed.
Check its status by running:
$ kubectl get pods -n default -o wide
Use `kubectl get svc -n default` to list all the
services in the `default` namespace.
To access the dashboards, form the Grafana URL and open it in the browser
export NODE_PORT=32515
export NODE_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=grafana,app.kubernetes.io/instance=monitoring" -o jsonpath="{.items[0].spec.nodeName}")
export NODE_IP=$(kubectl get node $NODE_NAME -o jsonpath='{$.status.addresses[?(@.type=="ExternalIP")].address}')
echo http://$NODE_IP:$NODE_PORT
NOTE: The above IP should be a public IP
For more information, visit our Slack at https://openebs.io/community
root@test-pcl4014:~# helm install monitoring openebs-monitoring/openebs-monitoring -n dev
Error: rendered manifests contain a resource that already exists. Unable to continue with install: PodSecurityPolicy "monitoring-grafana" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "dev": current value is "default"
root@test-pcl4014:~#
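The PodSecurityPolicy is cluster-scoped, so the one created by the first release (owned by the default namespace) can't be adopted by a second release in dev. You can confirm which release owns it with something like:

kubectl get psp monitoring-grafana -o yaml | grep release-namespace

So I'm trying to turn PSP creation off entirely.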
Here is a little example of what I tried:
grafana:
  rbac:
    namespaced: true
    pspEnabled: false
    pspUseAppArmor: false
prometheus-node-exporter:
  rbac:
    namespaced: true
    pspEnabled: false
    pspUseAppArmor: false
prometheus:
  rbac:
    namespaced: true
    pspEnabled: false
    pspUseAppArmor: false
  server:
    podSecurityPolicy:
      enabled: false
kube-state-metrics:
  rbac:
    useClusterRole: false
  podSecurityPolicy:
    enabled: false
kubeStateMetrics:
  rbac:
    useClusterRole: false
  podSecurityPolicy:
    enabled: false
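With a values file like the one above (call it values-namespaced.yaml, the name is just for illustration), the idea is one release per namespace:

helm install monitoring openebs-monitoring/openebs-monitoring -n dev -f values-namespaced.yaml
helm install monitoring openebs-monitoring/openebs-monitoring -n qa -f values-namespaced.yaml

but as the error above shows, any cluster-scoped leftovers (PSPs, ClusterRoles) from an earlier release still block the second install.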
Using Prometheus Operator makes it almost impossible. If we want to install it in multiple namespaces, we need to disable RBAC creation and create all the Role/RoleBinding objects manually (in templates), which is troublesome.
Marking this as out-of-scope since the changes are needed in Prometheus to support this.