kube-prometheus
How can I add prometheus args?
I want to add some config to Prometheus, like this:

```yaml
spec:
  containers:
  - args:
    - --web.console.templates=/etc/prometheus/consoles
    - --web.console.libraries=/etc/prometheus/console_libraries
    - --config.file=/etc/prometheus/config_out/prometheus.env.yaml
    - --storage.tsdb.path=/prometheus
    - --storage.tsdb.retention.time=24h
    - --web.enable-lifecycle
    - --storage.tsdb.no-lockfile
    - --web.route-prefix=/
```
I found [api.md](https://github.com/prometheus-operator/prometheus-operator/blob/22aaf848a27f6e45702131e22a596778686068d5/Documentation/api.md#prometheusspec).
For example, I want to add `logLevel` to Prometheus. Where should I add it, or which file should I change?
```
[root@vm10-10-5-48 manifests]# ll
total 1608
-rw-r--r-- 1 root root     384 Mar 18 13:18 alertmanager-alertmanager.yaml
-rw-r--r-- 1 root root     501 Mar 30 16:41 alertmanager-secret.yaml
-rw-r--r-- 1 root root     493 Mar 29 14:50 alertmanager-secret.yaml_bak
-rw-r--r-- 1 root root      96 Mar 18 12:08 alertmanager-serviceAccount.yaml
-rw-r--r-- 1 root root     254 Mar 18 12:08 alertmanager-serviceMonitor.yaml
-rw-r--r-- 1 root root     308 Mar 18 14:46 alertmanager-service.yaml
-rw-r--r-- 1 root root     282 Mar 29 14:32 alertmanager.yaml
-rw-r--r-- 1 root root     550 Mar 18 12:08 grafana-dashboardDatasources.yaml
-rw-r--r-- 1 root root 1336479 Mar 30 16:41 grafana-dashboardDefinitions.yaml
-rw-r--r-- 1 root root     447 Mar 18 12:08 grafana-dashboardSources.yaml
-rw-r--r-- 1 root root    7789 Mar 23 10:59 grafana-deployment.yaml
-rw-r--r-- 1 root root      86 Mar 18 12:02 grafana-serviceAccount.yaml
-rw-r--r-- 1 root root     208 Mar 18 12:08 grafana-serviceMonitor.yaml
-rw-r--r-- 1 root root     238 Mar 18 13:30 grafana-service.yaml
-rw-r--r-- 1 root root     281 Mar 18 12:08 kube-state-metrics-clusterRoleBinding.yaml
-rw-r--r-- 1 root root    1556 Mar 30 16:41 kube-state-metrics-clusterRole.yaml
-rw-r--r-- 1 root root    2159 Mar 30 16:41 kube-state-metrics-deployment.yaml
-rw-r--r-- 1 root root     267 Mar 18 12:08 kube-state-metrics-roleBinding.yaml
-rw-r--r-- 1 root root     423 Mar 18 12:08 kube-state-metrics-role.yaml
-rw-r--r-- 1 root root      97 Mar 18 12:08 kube-state-metrics-serviceAccount.yaml
-rw-r--r-- 1 root root     747 Mar 18 12:08 kube-state-metrics-serviceMonitor.yaml
-rw-r--r-- 1 root root     331 Mar 18 12:08 kube-state-metrics-service.yaml
-rw-r--r-- 1 root root     266 Mar 18 12:08 node-exporter-clusterRoleBinding.yaml
-rw-r--r-- 1 root root     283 Mar 18 12:08 node-exporter-clusterRole.yaml
-rw-r--r-- 1 root root    2561 Mar 30 16:41 node-exporter-daemonset.yaml
-rw-r--r-- 1 root root      92 Mar 18 12:08 node-exporter-serviceAccount.yaml
-rw-r--r-- 1 root root     586 Mar 30 16:41 node-exporter-serviceMonitor.yaml
-rw-r--r-- 1 root root     243 Mar 18 12:08 node-exporter-service.yaml
-rw-r--r-- 1 root root     292 Mar 18 12:08 prometheus-adapter-apiService.yaml
-rw-r--r-- 1 root root     396 Mar 30 16:41 prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml
-rw-r--r-- 1 root root     304 Mar 18 12:08 prometheus-adapter-clusterRoleBindingDelegator.yaml
-rw-r--r-- 1 root root     281 Mar 18 12:08 prometheus-adapter-clusterRoleBinding.yaml
-rw-r--r-- 1 root root     188 Mar 18 12:08 prometheus-adapter-clusterRoleServerResources.yaml
-rw-r--r-- 1 root root     219 Mar 18 12:08 prometheus-adapter-clusterRole.yaml
-rw-r--r-- 1 root root    1279 Mar 18 12:08 prometheus-adapter-configMap.yaml
-rw-r--r-- 1 root root    1327 Mar 18 12:08 prometheus-adapter-deployment.yaml
-rw-r--r-- 1 root root     325 Mar 18 12:08 prometheus-adapter-roleBindingAuthReader.yaml
-rw-r--r-- 1 root root      97 Mar 18 12:08 prometheus-adapter-serviceAccount.yaml
-rw-r--r-- 1 root root     236 Mar 18 12:08 prometheus-adapter-service.yaml
-rw-r--r-- 1 root root     269 Mar 18 12:08 prometheus-clusterRoleBinding.yaml
-rw-r--r-- 1 root root     216 Mar 18 12:08 prometheus-clusterRole.yaml
-rw-r--r-- 1 root root     483 Mar 30 16:41 prometheus-operator-serviceMonitor.yaml
-rw-r--r-- 1 root root     930 Mar 30 17:23 prometheus-prometheus.yaml
-rw-r--r-- 1 root root     293 Mar 18 12:08 prometheus-roleBindingConfig.yaml
-rw-r--r-- 1 root root     983 Mar 18 12:08 prometheus-roleBindingSpecificNamespaces.yaml
-rw-r--r-- 1 root root     188 Mar 18 12:08 prometheus-roleConfig.yaml
-rw-r--r-- 1 root root     820 Mar 18 12:08 prometheus-roleSpecificNamespaces.yaml
-rw-r--r-- 1 root root   65348 Mar 30 16:41 prometheus-rules.yaml
-rw-r--r-- 1 root root      93 Mar 18 12:08 prometheus-serviceAccount.yaml
-rw-r--r-- 1 root root    6829 Mar 30 16:41 prometheus-serviceMonitorApiserver.yaml
-rw-r--r-- 1 root root     395 Mar 18 12:08 prometheus-serviceMonitorCoreDNS.yaml
-rw-r--r-- 1 root root    6172 Mar 30 16:41 prometheus-serviceMonitorKubeControllerManager.yaml
-rw-r--r-- 1 root root    6778 Mar 30 16:41 prometheus-serviceMonitorKubelet.yaml
-rw-r--r-- 1 root root     347 Mar 18 12:08 prometheus-serviceMonitorKubeScheduler.yaml
-rw-r--r-- 1 root root     247 Mar 18 12:08 prometheus-serviceMonitor.yaml
-rw-r--r-- 1 root root     269 Mar 18 13:25 prometheus-service.yaml
drwxr-xr-x 2 root root    4096 Mar 30 17:15 setup
```
Hey @liaoqiArno 👋 are you using the jsonnet or just the manifests from this repo?
Prometheus here is a Custom Resource, so you would not be setting a flag but a field in the Prometheus Custom Resource. In your case it would be the field `logLevel: debug`, and it would go on a new line right after the `image` field, here: https://github.com/prometheus-operator/kube-prometheus/blob/main/manifests/prometheus-prometheus.yaml#L20 or in jsonnet here: https://github.com/prometheus-operator/kube-prometheus/blob/main/jsonnet/kube-prometheus/components/prometheus.libsonnet#L275.
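For illustration, a minimal sketch of what that looks like in `prometheus-prometheus.yaml` (surrounding fields abbreviated; the operator turns `logLevel` into the corresponding `--log.level` flag on the Prometheus container):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  namespace: monitoring
spec:
  image: quay.io/prometheus/prometheus:v2.11.0
  logLevel: debug   # becomes --log.level=debug on the prometheus container
  # ...rest of the spec unchanged
```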
Hey lilic, I use the command `kubectl apply -f manifests/*`. Here is my file:
```
[root@vm10-10-5-48 manifests] cat prometheus-prometheus.yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
  name: k8s
  namespace: monitoring
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: monitoring
      port: web
  baseImage: quay.io/prometheus/prometheus
  nodeSelector:
    kubernetes.io/os: linux
  podMonitorNamespaceSelector: {}
  podMonitorSelector: {}
  replicas: 1
  resources:
    requests:
      memory: 400Mi
  ruleSelector:
    matchLabels:
      prometheus: k8s
      role: alert-rules
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  serviceAccountName: prometheus-k8s
  serviceMonitorNamespaceSelector: {}
  serviceMonitorSelector: {}
  version: v2.11.0
```
And I found that the StatefulSet has some args:
```
[root@vm10-10-5-48 manifests]# kubectl -n monitoring get statefulsets.apps prometheus-k8s -oyaml
...
spec:
  containers:
  - args:
    - --web.console.templates=/etc/prometheus/consoles
    - --web.console.libraries=/etc/prometheus/console_libraries
    - --config.file=/etc/prometheus/config_out/prometheus.env.yaml
    - --storage.tsdb.path=/prometheus
    - --storage.tsdb.retention.time=24h
    - --web.enable-lifecycle
    - --storage.tsdb.no-lockfile
    - --web.route-prefix=/
    image: quay.io/prometheus/prometheus:v2.11.0
...
```
I'm using the 'release-0.3' release; it seems I can't add args in prometheus-prometheus.yaml.
If you want to enable exemplars, there is a new field: `spec.enableFeatures`.
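A minimal sketch of that field in the Prometheus CR, assuming a Prometheus version that supports the `exemplar-storage` feature flag:

```yaml
spec:
  enableFeatures:
  - exemplar-storage   # rendered by the operator as --enable-feature=exemplar-storage
```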
This issue has been automatically marked as stale because it has not had any activity in the last 60 days. Thank you for your contributions.
> Hey @liaoqiArno 👋 are you using the jsonnet or just the manifests from this repo?
> Since Prometheus is a Custom Resource, it would be a field `logLevel: debug`, and it would go right after the `image` line: https://github.com/prometheus-operator/kube-prometheus/blob/main/manifests/prometheus-prometheus.yaml#L20 or in jsonnet here: https://github.com/prometheus-operator/kube-prometheus/blob/main/jsonnet/kube-prometheus/components/prometheus.libsonnet#L275.
It adds nothing. My spec:

```yaml
spec:
  enableFeatures:
  - remote-write-receiver
  externalLabels:
    web.enable-remote-write-receiver: "true"
```
Result:

```
# kubectl get sts prometheus-k8s -n monitoring -oyaml|grep -A 9 args
  - args:
    - --web.console.templates=/etc/prometheus/consoles
    - --web.console.libraries=/etc/prometheus/console_libraries
    - --config.file=/etc/prometheus/config_out/prometheus.env.yaml
    - --storage.tsdb.path=/prometheus
    - --storage.tsdb.retention.time=60d
    - --web.enable-lifecycle
    - --storage.tsdb.no-lockfile
    - --query.max-concurrency=1000
    - --web.route-prefix=/
--
  - args:
    - --listen-address=:8080
    - --reload-url=http://localhost:9090/-/reload
    - --config-file=/etc/prometheus/config/prometheus.yaml.gz
    - --config-envsubst-file=/etc/prometheus/config_out/prometheus.env.yaml
    - --watched-dir=/etc/prometheus/rules/prometheus-k8s-rulefiles-0
```
This issue was closed because it has not had any activity in the last 120 days. Please reopen if you feel this is still valid.
I also encountered this problem. I want to connect to Thanos, which recommends setting storage.tsdb.min-block-duration and storage.tsdb.max-block-duration to the same value (2h), but the default Prometheus configuration is inconsistent with that. I configured the following in prometheus.yaml and it doesn't work:

```yaml
storage:
  tsdb:
    maxBlockDuration: 720h
    minBlockDuration: 1h
    compression: snappy
    retentionTime: 720h
```

Then I added it via args and it didn't work either:

```yaml
args:
- "--storage.tsdb.min-block-duration=2h"
- "--storage.tsdb.max-block-duration=2h"
```
I looked at the CRD file and it seems the operator may not expose an API for tuning these settings. How can I do that? Please give some suggestions.
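For reference: when the Thanos sidecar is enabled through the CR's `thanos` field, the operator is expected to pin both block durations to 2h on its own, which is why there is no user-facing field for them. A minimal sketch, assuming an operator version that supports `spec.thanos` (image tag is an example, adjust to your version):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
spec:
  thanos:
    # with the sidecar enabled, the operator sets
    # --storage.tsdb.min-block-duration=2h and --storage.tsdb.max-block-duration=2h
    image: quay.io/thanos/thanos:v0.24.0
```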
> If you want to enable exemplars, there is a new field: `spec.enableFeatures`.
```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  labels:
    prometheus: k8s
  name: k8s
  namespace: ops-prod
spec:
  alerting:
    alertmanagers:
    - name: alertmanager-main
      namespace: ops-prod
      port: web
  baseImage: 747875099153.dkr.ecr.us-east-1.amazonaws.com/ops-basic/prometheus
  imagePullSecrets:
  - name: registry-secret
  enableFeatures:
    storage.tsdb.min-block-duration: 2h
    storage.tsdb.max-block-duration: 2h
```

Please take a look at the configuration. I tried to configure it but it doesn't seem to work.
For people who only want to enable remote write, please use `enableRemoteWriteReceiver: true`.
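A minimal sketch in the Prometheus CR, assuming an operator version recent enough to have this field:

```yaml
spec:
  enableRemoteWriteReceiver: true   # enables the remote-write receiver without touching enableFeatures
```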