kube-prometheus
Implement configmap-reloader for prometheus-adapter
What happened?
After deploying kube-prometheus, I wanted to try HPA and used deploy.sh to set up the custom-metrics-api, but the apiservice v1beta1.custom.metrics.k8s.io fails to contact the prometheus-adapter service.
kubectl describe apiservice v1beta1.custom.metrics.k8s.io
Name: v1beta1.custom.metrics.k8s.io
Namespace:
Labels:
Did you expect to see something different? It should show the message "all checks passed".
How to reproduce it (as minimally and precisely as possible): After deploying kube-prometheus, run deploy.sh under experimental/custom-metrics-api.
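For context, an HPA backed by the custom metrics API looks roughly like the sketch below (autoscaling/v2beta1 matches Kubernetes 1.15; the deployment name sample-app and the metric http_requests are placeholders, not from this report):

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: sample-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sample-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  # A Pods metric served by prometheus-adapter via custom.metrics.k8s.io;
  # this is what fails when the apiservice cannot reach the adapter.
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 500m
```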
Environment
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:54Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/arm64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.3", GitCommit:"2d3c76f9091b6bec110a5e63777c332469e0cba2", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:50Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/arm64"}
-
Prometheus Operator version:
Name: prometheus-operator
Namespace: monitoring
CreationTimestamp: Mon, 28 Oct 2019 11:04:03 +0800
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/name=prometheus-operator
app.kubernetes.io/version=v0.33.0
Annotations: deployment.kubernetes.io/revision: 2
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"apps/v1","kind":"Deployment","metadata":{"annotations":{},"labels":{"app.kubernetes.io/component":"controller","app.kuberne...
Selector: app.kubernetes.io/component=controller,app.kubernetes.io/name=prometheus-operator
Replicas: 1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app.kubernetes.io/component=controller
app.kubernetes.io/name=prometheus-operator
app.kubernetes.io/version=v0.33.0
Service Account: prometheus-operator
Containers:
prometheus-operator:
Image: quay.io/coreos/prometheus-operator:v0.33.0
Port: 8080/TCP
Host Port: 0/TCP
Args:
--kubelet-service=kube-system/kubelet
--logtostderr=true
--config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1
--prometheus-config-reloader=quay.io/coreos/prometheus-config-reloader:v0.33.0
Limits:
cpu: 200m
memory: 500Mi
Requests:
cpu: 100m
memory: 300Mi
Environment:
Conditions:
  Progressing   True   NewReplicaSetAvailable
  Available     True   MinimumReplicasAvailable
OldReplicaSets:
-
Kubernetes version information:
kubectl version
-
Kubernetes cluster kind:
kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.15.3
-
Manifests: the same as the ones on the master branch.
- Prometheus Operator Logs:
- Prometheus Logs:
Anything else we need to know?:
kubectl get apiservice
....
v1beta1.custom.metrics.k8s.io   monitoring/prometheus-adapter   False (FailedDiscoveryCheck)   24m
v1beta1.events.k8s.io           Local                           True                           14d
v1beta1.extensions              Local                           True                           14d
v1beta1.metrics.k8s.io          monitoring/prometheus-adapter   True                           12d
....
kubectl get --raw /apis/custom-metrics.metrics.k8s.io/v1beta1
Error from server (NotFound): the server could not find the requested resource
kubectl get -v=8 --raw /api/v1/namespaces/monitoring/services/https:prometheus-adapter:443/proxy/
I1109 16:00:57.797371   29808 loader.go:359] Config loaded from file: /etc/kubernetes/admin.conf
I1109 16:00:57.798212   29808 round_trippers.go:416] GET https://10.4.5.117:6443/api/v1/namespaces/monitoring/services/https:prometheus-adapter:443/proxy/
I1109 16:00:57.798233   29808 round_trippers.go:423] Request Headers:
I1109 16:00:57.798245   29808 round_trippers.go:426]     Accept: application/json, */*
I1109 16:00:57.798257   29808 round_trippers.go:426]     User-Agent: kubectl/v1.15.3 (linux/arm64) kubernetes/2d3c76f
I1109 16:00:57.821251   29808 round_trippers.go:441] Response Status: 200 OK in 22 milliseconds
I1109 16:00:57.821293   29808 round_trippers.go:444] Response Headers:
I1109 16:00:57.821307   29808 round_trippers.go:447]     Date: Sat, 09 Nov 2019 08:00:57 GMT
I1109 16:00:57.821319   29808 round_trippers.go:447]     Content-Length: 229
I1109 16:00:57.821331   29808 round_trippers.go:447]     Content-Type: application/json
{
  "paths": [
    "/apis",
    "/apis/metrics.k8s.io",
    "/apis/metrics.k8s.io/v1beta1",
    "/healthz",
    "/healthz/ping",
    "/healthz/poststarthook/generic-apiserver-start-informers",
    "/metrics",
    "/version"
  ]
}
Why is there no /apis/custom.metrics.k8s.io/v1beta1 in prometheus-adapter? I am using the image from https://hub.docker.com/r/directxman12/k8s-prometheus-adapter. If anyone knows about this problem, please tell me; thanks in advance.
cc @s-urbaniak
Thanks Frederic for your help; just updating with another finding here. When installing prometheus-adapter with "helm install --name my-release stable/prometheus-adapter", it has /apis/custom.metrics.k8s.io/v1beta1. I still don't know what the difference is between kube-prometheus and the one installed by Helm.
Output when using Helm:
kubectl get -v=8 --raw /api/v1/namespaces/default/services/https:prometheus-adapter:443/proxy/
I1111 20:39:33.287204   17139 loader.go:359] Config loaded from file: /etc/kubernetes/admin.conf
I1111 20:39:33.288039   17139 round_trippers.go:416] GET https://10.4.5.117:6443/api/v1/namespaces/default/services/https:prometheus-adapter:443/proxy/
I1111 20:39:33.288063   17139 round_trippers.go:423] Request Headers:
I1111 20:39:33.288075   17139 round_trippers.go:426]     User-Agent: kubectl/v1.15.3 (linux/arm64) kubernetes/2d3c76f
I1111 20:39:33.288088   17139 round_trippers.go:426]     Accept: application/json, */*
I1111 20:39:33.355544   17139 round_trippers.go:441] Response Status: 200 OK in 67 milliseconds
I1111 20:39:33.355571   17139 round_trippers.go:444] Response Headers:
I1111 20:39:33.355583   17139 round_trippers.go:447]     Content-Type: application/json
I1111 20:39:33.355594   17139 round_trippers.go:447]     Date: Mon, 11 Nov 2019 12:39:33 GMT
I1111 20:39:33.355605   17139 round_trippers.go:447]     Content-Length: 179
{
  "paths": [
    "/apis",
    "/apis/custom.metrics.k8s.io",
    "/apis/custom.metrics.k8s.io/v1beta1",
    "/healthz",
    "/healthz/ping",
    "/metrics",
    "/version"
  ]
}
I think I found the root cause: after running deploy.sh, the prometheus-adapter pod has to be deleted manually so that it is recreated and the new ConfigMap can take effect.
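As a manual workaround, something along these lines should force the recreation (the label selector here is an assumption — check the actual pod labels with kubectl get pods --show-labels; kubectl rollout restart needs kubectl 1.15 or newer):

```shell
# Recreate the adapter pods so they mount the updated ConfigMap.
kubectl -n monitoring rollout restart deployment/prometheus-adapter

# On older kubectl, delete the pod and let the Deployment recreate it:
kubectl -n monitoring delete pod -l app.kubernetes.io/name=prometheus-adapter
```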
I changed the title to better reflect what needs to be implemented.
@paulfantom - had a quick look at the adapter codebase. I'm not sure if there is a way to hot-reload?
Also worth noting this comment: https://github.com/kubernetes-sigs/prometheus-adapter/pull/46#pullrequestreview-100761876
There is no way to do a hot reload via configmap-reloader. However, we can put a hash of the ConfigMap in an annotation on the adapter Deployment; any change to the ConfigMap then changes the Deployment, and the adapter is recreated.
Example: https://github.com/thaum-xyz/jsonnet-libs/blob/main/apps/homer/homer.libsonnet#L96-L98
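The checksum-annotation idea can be illustrated outside jsonnet as well. A minimal sketch (the function name, config contents, and annotation key are hypothetical, not taken from the linked library):

```python
import hashlib
import json


def config_checksum(data: dict) -> str:
    """Deterministic hash of a ConfigMap's data section.

    Keys are sorted so the checksum is stable regardless of the
    order in which the map is serialized.
    """
    payload = json.dumps(data, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()


# Hypothetical adapter config: any edit to it yields a new checksum.
config_data = {"config.yaml": "rules:\n- seriesQuery: http_requests_total"}

# Placing the checksum in the pod template's annotations changes the
# pod template whenever the config changes, which makes the Deployment
# controller roll the adapter pods -- no hot reload needed.
pod_template_annotations = {"checksum/config": config_checksum(config_data)}
```

The same config always produces the same annotation value, so an unchanged ConfigMap triggers no rollout; only a real change does.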
Hi @philipgough and @paulfantom, in the services we currently manage at the company I work for, we apply the strategy Paul mentioned in his latest comment. If possible, could you assign this to me so I can implement it?
@ritaCanavarro - done, thanks :)
Thanks @philipgough, I have opened the PR: https://github.com/prometheus-operator/kube-prometheus/pull/2195. If you could take a look :)
@philipgough I think we can close this issue, wdyt? :)
Thanks!