metrics-server
Optimize decodeBatch
What this PR does / why we need it:
With this change, a pod starting or stopping in the cluster no longer causes the metrics-server's data scrape to fail.
Currently, when the data obtained from the kubelet's /metrics/resource endpoint is parsed, a failure to parse a single entry fails the whole scrape. We expect abnormal entries to be skipped so that they do not affect the data scrape of other nodes/pods.
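A minimal sketch of the intended skip-on-error behavior. The package name, the MetricsPoint type, and the parse callback are illustrative assumptions, not the actual metrics-server decodeBatch signature:

```go
package decode

import (
	"fmt"
	"time"

	"k8s.io/klog/v2"
)

// MetricsPoint is a hypothetical stand-in for one parsed node or pod sample.
type MetricsPoint struct {
	Name        string
	Timestamp   time.Time
	CPUUsage    uint64
	MemoryUsage uint64
}

// decodeBatch parses raw entries scraped from /metrics/resource using the
// supplied parse function. Instead of aborting the whole scrape on the first
// malformed entry, it logs the error, skips that entry, and keeps every point
// that parsed cleanly, so one bad pod entry no longer fails the entire scrape.
func decodeBatch(entries [][]byte, parse func([]byte) (MetricsPoint, error)) []MetricsPoint {
	points := make([]MetricsPoint, 0, len(entries))
	for i, raw := range entries {
		p, err := parse(raw)
		if err != nil {
			// Skip only the abnormal entry; the rest of the batch is unaffected.
			klog.ErrorS(fmt.Errorf("entry %d: %w", i, err), "Failed to parse metrics entry, skipping it")
			continue
		}
		points = append(points, p)
	}
	return points
}
```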
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #1017
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: yangjunmyfm192085
To complete the pull request process, please assign s-urbaniak after the PR has been reviewed.
You can assign the PR to them by writing /assign @s-urbaniak in a comment when ready.
The full list of commands accepted by this bot can be found here.
Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
We expect abnormal entries to be skipped so that they do not affect the data scrape of other nodes/pods.
I don't agree. If we find an abnormal entry, it means something went wrong on the network/kubelet side, and continuing to parse can only result in corrupting the current state.
It's better to fail and inform the user than to fail silently and try to use corrupted data.
All right. It seems we also need to handle the case where the kubelet reports a negative timestamp value.
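One possible shape of such a check, sketched under the assumption that the kubelet reports sample timestamps as milliseconds since the epoch; this is illustrative only, not the fix that was adopted upstream:

```go
package decode

import (
	"fmt"
	"time"
)

// checkSampleTimestamp rejects samples whose reported timestamp is not a
// positive value (as discussed above, the kubelet can report such values
// around pod start/stop). Returning an error for just this sample lets the
// caller skip it rather than failing the whole scrape.
func checkSampleTimestamp(tsMillis int64) (time.Time, error) {
	if tsMillis <= 0 {
		return time.Time{}, fmt.Errorf("invalid sample timestamp %d, expected a positive value", tsMillis)
	}
	return time.UnixMilli(tsMillis), nil
}
```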
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/assign
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed
You can:
- Reopen this PR with /reopen
- Mark this PR as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close
@k8s-triage-robot: Closed this PR.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.