prometheus-adapter
Custom metrics: use the timestamp from sample
For custom metrics, the returned timestamp is simply time.Now(); a // TODO comment was added four years ago to use the right timestamp instead:
https://github.com/kubernetes-sigs/prometheus-adapter/blob/89584579687e4a66f352a1ba26e957951de628e7/pkg/custom-provider/provider.go#L101-L102
We should use the timestamp from the sample, as is already done for external metrics and resource metrics.
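A minimal sketch of the intended change, assuming the github.com/prometheus/common/model sample type and the metav1.Time type used in the metrics APIs; the helper name metricTimestamp is illustrative, not the actual provider code:

```go
package main

import (
	"fmt"
	"time"

	pmodel "github.com/prometheus/common/model"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// metricTimestamp picks the timestamp to report for a scraped sample.
// Instead of always stamping the metric with time.Now(), propagate the
// sample's own timestamp, mirroring the external- and resource-metrics
// paths; fall back to now only if the sample carries no timestamp.
func metricTimestamp(sample *pmodel.Sample) metav1.Time {
	if sample != nil && sample.Timestamp != 0 {
		// model.Time is milliseconds since the epoch; Time() converts
		// it to a time.Time.
		return metav1.NewTime(sample.Timestamp.Time())
	}
	return metav1.NewTime(time.Now())
}

func main() {
	// Hypothetical sample with a fixed scrape timestamp.
	s := &pmodel.Sample{Timestamp: pmodel.TimeFromUnix(1700000000)}
	fmt.Println(metricTimestamp(s))
}
```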
/kind feature
/assign
/triage accepted
@olivierlemasle Hi, I have been working on this issue and submitted pull request #554 for it. I am new to this organisation and would appreciate some guidance on contributing. Kindly let me know if there is any guide to read before contributing, or any rules for testing the code in the environment.
Thank you and welcome @impact-maker
First, you'll find the Kubernetes Contributing Guidelines in CONTRIBUTING.md. It also includes a guide on how to sign the CLA, which is required for any contribution, and general guidelines (how to write a good commit message, etc.). https://github.com/kubernetes/community/blob/master/contributors/guide/pull-requests.md is also a useful general guide.
From a technical perspective, for prometheus-adapter, you can use make verify test to run the linters/checkers and execute the tests locally before pushing the code to GitHub.
You can use make container to build a container image of prometheus-adapter and check that everything works.
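Put together, the local workflow would look something like this (assuming make and a container runtime such as Docker are available):

```sh
# Run the linters/checkers and the test suite locally before pushing.
make verify test

# Build a container image of prometheus-adapter for a local smoke test.
make container
```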
I'll review your PR later today.
@olivierlemasle Thank you for guiding me through this process. I have done a fair bit of research and gone through the documentation you provided, and I have pushed another commit to the pull request. Kindly review it and help me cut through the steep learning curve of contributing to this project. I tried to run the make verify test command but it failed, and I am still not able to compile or test the code before committing. Your input would really help me provide useful contributions for this issue.
This issue has not been updated in over 1 year, and should be re-triaged.
You can:
- Confirm that this issue is still relevant with /triage accepted (org members only)
- Close this issue with /close
For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/
/remove-triage accepted
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.