prometheus-adapter
Update deployment manifests and instructions
Deployment manifests and instructions are very outdated in the project since the community has been solely updating the kube-prometheus manifests. It would be great to leverage the efforts that were made there and update the manifests that we have in this repository.
@dgrisonnet, can you help me get started? What do I need to do? Thanks!
Sure, so the goal of this issue is to update the deployment manifests and instructions that we have in this repository under the deploy directory based on the more up-to-date manifests in kube-prometheus that are prefixed by prometheus-adapter- in the manifests directory.
A few pointers: you should be able to update all the manifests and bring in the new ones, except for the ConfigMap and the APIServices, which we can keep as is apart from a few selectors that might need to be updated to match the new changes.
Also, it would be great to move away from the custom-metrics-apiserver resource name to prometheus-adapter, which is used in kube-prometheus and is, in my opinion, clearer.
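To illustrate the rename, a minimal sketch of what the Deployment metadata could look like after the change. This is not taken from the repository: the namespace, labels, and image tag are assumptions, and the same rename would also need to be applied to the ServiceAccount, Service, and RBAC bindings that reference the old name.

```yaml
# Sketch only; field values are assumptions, not the repository's actual manifests.
apiVersion: apps/v1
kind: Deployment
metadata:
  # Previously: name: custom-metrics-apiserver
  name: prometheus-adapter
  namespace: monitoring
  labels:
    app.kubernetes.io/name: prometheus-adapter
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: prometheus-adapter
  template:
    metadata:
      labels:
        app.kubernetes.io/name: prometheus-adapter
    spec:
      serviceAccountName: prometheus-adapter
      containers:
      - name: prometheus-adapter
        image: registry.k8s.io/prometheus-adapter/prometheus-adapter:v0.10.0
```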
As for the deployment instructions, we could move the second point out of the default deployment and add a new section explaining how users can configure the adapter and enable HTTPS.
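As a sketch of what such an HTTPS section could cover: the adapter, like other aggregated API servers, accepts the standard serving-certificate flags. The secret name and mount path below are assumptions for illustration.

```yaml
# Sketch of a Deployment fragment enabling TLS with a user-provided certificate.
# The secret name "prometheus-adapter-tls" and the mount path are assumptions.
containers:
- name: prometheus-adapter
  args:
  - --secure-port=6443
  - --tls-cert-file=/var/run/serving-cert/tls.crt
  - --tls-private-key-file=/var/run/serving-cert/tls.key
  volumeMounts:
  - name: serving-cert
    mountPath: /var/run/serving-cert
    readOnly: true
volumes:
- name: serving-cert
  secret:
    secretName: prometheus-adapter-tls
```

Without these flags the adapter falls back to self-signed serving certificates, which is why this belongs in a separate configuration section rather than the default deployment steps.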
Let me know if you need any additional information and feel free to reach out to me on slack if you need any help :slightly_smiling_face:
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
@championshuttler are you still looking into this?
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
Hello @dgrisonnet, I would like to help with this! Just one question: can I make the manifests generated by jsonnet, or do you simply want me to manually copy the manifests from kube-prometheus?
Hello @JoaoBraveCoding, my original idea was to first update the YAML manifests to match the changes made in kube-prometheus and then migrate to jsonnet: https://github.com/kubernetes-sigs/prometheus-adapter/issues/427.
That said, if you prefer to go straight to the migration to jsonnet, I am fine with it.
/remove-lifecycle stale
/unassign championshuttler
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
This was fixed by https://github.com/kubernetes-sigs/prometheus-adapter/pull/531.