metrics-server
Inconsistent deployment definition using Manifest and Helm Chart
What happened:
The deployment definitions from the Manifest (the components.yaml file from the Release page) and from the Helm Chart with the default values.yaml file (generated using helm template -n kube-system metrics-server . -f values.yaml) differ in some areas:
- ClusterRole 'system:metrics-server'
- Deployment's 'securityContext' section
- Deployment's '- --secure-port' parameter
- Deployment's 'resources' section
- APIService 'service.port' parameter
Which deployment definition (Manifest vs Helm Chart) is up to date if they differ?
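For reference, the comparison described above can be reproduced roughly as follows; the release URL is the documented install location, while the output file names are only illustrative:
# Manifest as published on the Release page
curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# Chart rendered with its default values.yaml (run from the chart directory)
helm template -n kube-system metrics-server . -f values.yaml > metrics-server-helm.yaml
# Side-by-side diff of the two definitions
diff -u components.yaml metrics-server-helm.yaml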
What you expected to happen:
The default deployment definitions for the Manifest and the Helm Chart should be equal: deploying Metrics Server using either the Manifest or the Helm Chart should give the same result.
The components.yaml file on the Release page should be generated using the helm template command (based on the default values.yaml file).
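In other words, the release artifact could be produced directly from the chart defaults; a minimal sketch, reusing the helm template invocation above and assuming it is run from the chart directory:
helm template -n kube-system metrics-server . -f values.yaml > components.yaml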
Environment:
- Kubernetes distribution (GKE, EKS, Kubeadm, the hard way, etc.): OpenStack Magnum
- Container Network Setup (flannel, calico, etc.): calico
- Kubernetes version (use kubectl version): v1.24.13
- Metrics Server manifest: comparison of the components.yaml file from the Release page and the Helm Chart with the default values.yaml
Spoiler for Metrics Server Manifest vs Helm Chart diff:
diff --git a/metrics-server-manifest.yaml b/metrics-server-manifest.yaml
index 4b0b79a4..409715ff 100644
--- a/metrics-server-manifest.yaml
+++ b/metrics-server-manifest.yaml
@@ -81,8 +81,6 @@ rules:
resources:
- pods
- nodes
- - namespaces
- - configmaps
verbs:
- get
- list
@@ -158,7 +156,6 @@ metadata:
app.kubernetes.io/version: "0.6.4"
app.kubernetes.io/managed-by: Helm
spec:
- type: ClusterIP
ports:
- name: https
port: 443
@@ -180,11 +177,13 @@ metadata:
app.kubernetes.io/version: "0.6.4"
app.kubernetes.io/managed-by: Helm
spec:
- replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: metrics-server
app.kubernetes.io/instance: metrics-server
+ strategy:
+ rollingUpdate:
+ maxUnavailable: 0
template:
metadata:
labels:
@@ -198,18 +197,13 @@ spec:
- name: metrics-server
securityContext:
allowPrivilegeEscalation: false
- capabilities:
- drop:
- - ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 1000
- seccompProfile:
- type: RuntimeDefault
image: registry.k8s.io/metrics-server/metrics-server:v0.6.4
imagePullPolicy: IfNotPresent
args:
- - --secure-port=10250
+ - --secure-port=4443
- --cert-dir=/tmp
- --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
- --kubelet-use-node-status-port
@@ -217,14 +211,13 @@ spec:
ports:
- name: https
protocol: TCP
- containerPort: 10250
+ containerPort: 4443
livenessProbe:
failureThreshold: 3
httpGet:
path: /livez
port: https
scheme: HTTPS
- initialDelaySeconds: 0
periodSeconds: 10
readinessProbe:
failureThreshold: 3
@@ -234,6 +227,10 @@ spec:
scheme: HTTPS
initialDelaySeconds: 20
periodSeconds: 10
+ resources:
+ requests:
+ cpu: 100m
+ memory: 200Mi
volumeMounts:
- name: tmp
mountPath: /tmp
@@ -241,6 +238,8 @@ spec:
requests:
cpu: 100m
memory: 200Mi
+ nodeSelector:
+ kubernetes.io/os: linux
volumes:
- name: tmp
emptyDir: {}
@@ -262,7 +261,6 @@ spec:
service:
name: metrics-server
namespace: kube-system
- port: 443
version: v1beta1
versionPriority: 100
/kind bug
/triage accepted
/assign @stevehipwell
@pawcykca I think the current differences are either expected (as they're Helm artifacts like annotations) or have been caused by the Helm chart being intended for the v0.7.0 version of Metrics Server but being used with v0.6.x. These differences should resolve themselves with the v0.7.0 release. You can check this by comparing the manifests and chart on the master branch.
As the Helm chart is, by design, a separate component from the binary, and the manifests are released with the binary, I wouldn't expect them to be "in sync". The manifests are a more conservative, single-use implementation since they're provided as-is, while the Helm chart covers all uses and is more able to make independent changes (it has a completely separate SemVer) because the end-user can change the behaviour to fit their requirements. In addition to the above, the Helm chart may change the resources depending on the Kubernetes version, so it is dynamic.
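As an illustration of that flexibility, a user who wants the chart output to match the manifest defaults could override the relevant values at install time. A minimal sketch, run from the chart directory; the value keys (containerPort, resources.requests.*) are assumptions taken from the chart's values.yaml and should be verified against the chart version in use:
# Hypothetical override aligning the chart with the components.yaml defaults;
# verify these value keys exist in the chart version actually deployed.
helm upgrade --install metrics-server . \
  --namespace kube-system \
  --set containerPort=10250 \
  --set resources.requests.cpu=100m \
  --set resources.requests.memory=200Mi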
Differences such as annotations are obvious, so I omitted them from my comparison.
Since the Helm Chart and the Manifest are provided within the same repository, I would expect them to be compatible in terms of deployment configuration; differences like the ones indicated in this issue make it unclear which deployment method is up to date and should be used.
You can check this by comparing the manifests and chart on the master branch.
I did this comparison on the master branch (helm template output compared with the content of the manifest/base directory) and there are still differences in the areas below (a sketch of the comparison commands follows the list):
- ClusterRole 'system:metrics-server'
- Deployment's 'spec.strategy' section
- Deployment's 'securityContext' section
- Deployment's 'nodeSelector' section
- Deployment's 'livenessProbe' section
- APIService 'service.port' parameter
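The comparison itself was done roughly as follows; paths are relative to a checkout of the metrics-server repository, and treating the manifest/base directory as a kustomize base is an assumption about how that directory is consumed:
# Chart from the master branch rendered with its default values
helm template -n kube-system metrics-server charts/metrics-server > helm-rendered.yaml
# Raw manifests rendered from the manifest/base directory
kubectl kustomize manifest/base > manifest-rendered.yaml
# Compare the two renderings
diff -u manifest-rendered.yaml helm-rendered.yaml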