Kong deployment fails when loading images from a local private hub in China
What happened?
Hello, I want to deploy kubernetes/dashboard locally. I cannot run "helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard" directly because the related images are not reachable from my location, so I mirrored them into a local private hub and adjusted values.yaml for the deployment. However, the Kong pod still fails to run in my cluster. Any suggestions? The related settings are below for reference:
Deploy command: helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard -f values.yaml
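Before (or after) running the upgrade, it can help to confirm that the rendered manifests actually reference the private hub rather than docker.io. A minimal, optional check, using the same chart reference and values.yaml as above:

```shell
# Render the chart locally and list every image the generated manifests would pull
helm template kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
  --namespace kubernetes-dashboard -f values.yaml | grep 'image:'
```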
Kong pod description and warnings in the cluster:
Name: kubernetes-dashboard-kong-98bbbb69b-trbcd
Namespace: kubernetes-dashboard
Priority: 0
Service Account: kubernetes-dashboard-kong
Node: k8swb/192.168.31.114
Start Time: Fri, 24 Jan 2025 15:00:54 +0800
Labels: app=kubernetes-dashboard-kong
app.kubernetes.io/component=app
app.kubernetes.io/instance=kubernetes-dashboard
app.kubernetes.io/managed-by=Helm
app.kubernetes.io/name=kong
app.kubernetes.io/version=3.6
helm.sh/chart=kong-2.38.0
pod-template-hash=98bbbb69b
version=3.6
Annotations: cni.projectcalico.org/containerID: 6010c141e95b01291c0144ed72c2455ccfe9f873915edd7ed5a35dd8c212a595
cni.projectcalico.org/podIP: 192.2.239.146/32
cni.projectcalico.org/podIPs: 192.2.239.146/32
kuma.io/gateway: enabled
kuma.io/service-account-token-volume: kubernetes-dashboard-kong-token
traffic.sidecar.istio.io/includeInboundPorts:
Status: Running
IP: 192.2.239.146
IPs:
IP: 192.2.239.146
Controlled By: ReplicaSet/kubernetes-dashboard-kong-98bbbb69b
Init Containers:
clear-stale-pid:
Container ID: containerd://e1d37e9ba084cd4d0fde15493f20a250e9964adf393d76c63e41afe7e047b838
Image: k8sma:5000/kong:3.6
Image ID: k8sma:5000/kong@sha256:ec2910c74bc16d05d5dcd2fdde6ff366797cb08d64be55dfa94d9eb1220c8a3e
Port:
Normal   Pulled          14d (x42 over 14d)    kubelet  Container image "k8sma:5000/kong:3.6" already present on machine
Warning  BackOff         13d (x1170 over 14d)  kubelet  Back-off restarting failed container proxy in pod kubernetes-dashboard-kong-98bbbb69b-trbcd_kubernetes-dashboard(c60daf2a-f90d-4c0b-8ec9-9c4767a19213)
Normal   SandboxChanged  19m (x2 over 20m)     kubelet  Pod sandbox changed, it will be killed and re-created.
Normal   Pulled          19m                   kubelet  Container image "k8sma:5000/kong:3.6" already present on machine
Normal   Created         19m                   kubelet  Created container clear-stale-pid
Normal   Started         19m                   kubelet  Started container clear-stale-pid
Normal   Pulled          18m (x3 over 19m)     kubelet  Container image "k8sma:5000/kong:3.6" already present on machine
Normal   Created         18m (x3 over 19m)     kubelet  Created container proxy
Normal   Started         18m (x3 over 19m)     kubelet  Started container proxy
Warning  BackOff         1s (x103 over 19m)    kubelet  Back-off restarting failed container proxy in pod kubernetes-dashboard-kong-98bbbb69b-trbcd_kubernetes-dashboard(c60daf2a-f90d-4c0b-8ec9-9c4767a19213)
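The events show the proxy container restarting repeatedly but not why it exits. Assuming the pod name from the describe output above, the previous container logs and the recorded termination state are the quickest way to see the actual failure:

```shell
# Logs from the last failed run of the Kong proxy container
kubectl logs -n kubernetes-dashboard kubernetes-dashboard-kong-98bbbb69b-trbcd -c proxy --previous

# Exit code and reason recorded for the proxy container's most recent termination
kubectl get pod -n kubernetes-dashboard kubernetes-dashboard-kong-98bbbb69b-trbcd \
  -o jsonpath='{.status.containerStatuses[?(@.name=="proxy")].lastState.terminated}'
```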
Docker images in the local private hub:
REPOSITORY                                          TAG      IMAGE ID       CREATED         SIZE
kubernetesui/dashboard-web                          1.6.0    96b21277cbef   5 months ago    188MB
k8sma:5000/kubernetesui/dashboard-web               1.6.0    96b21277cbef   5 months ago    188MB
kubernetesui/dashboard-api                          1.10.1   aa69cebab7a8   5 months ago    54.6MB
k8sma:5000/kubernetesui/dashboard-api               1.10.1   aa69cebab7a8   5 months ago    54.6MB
k8sma:5000/kubernetesui/dashboard-auth              1.2.2    45a495c0887d   5 months ago    48MB
kubernetesui/dashboard-auth                         1.2.2    45a495c0887d   5 months ago    48MB
kubernetesui/dashboard-metrics-scraper              1.2.1    46e3f823d18f   5 months ago    38.2MB
k8sma:5000/kubernetesui/dashboard-metrics-scraper   1.2.1    46e3f823d18f   5 months ago    38.2MB
kong                                                3.6      6e99fd0ebd1e   10 months ago   297MB
k8sma:5000/kong                                     3.6      6e99fd0ebd1e   10 months ago   297MB
k8sma:5000/library/kong                             3.6      6e99fd0ebd1e   10 months ago   297MB
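To rule out a damaged or incomplete mirror copy, the retagged Kong image can be smoke-tested directly on a node that has Docker available. This is a hypothetical check, not part of the chart:

```shell
# Pull through the private hub and run a trivial command inside the Kong image
docker pull k8sma:5000/kong:3.6
docker run --rm k8sma:5000/kong:3.6 kong version
```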
What did you expect to happen?
Kong should deploy in my cluster without warnings.
How can we reproduce it (as minimally and precisely as possible)?
NAME                                                    READY   STATUS             RESTARTS           AGE
kubernetes-dashboard-api-9b8464959-2g5kf                1/1     Running            22 (30m ago)       99d
kubernetes-dashboard-auth-657444bc9f-d7d6s              1/1     Running            22 (30m ago)       99d
kubernetes-dashboard-kong-98bbbb69b-trbcd               0/1     CrashLoopBackOff   2217 (3m43s ago)   99d
kubernetes-dashboard-metrics-scraper-74bfb95c9b-5l6wm   1/1     Running            22 (30m ago)       99d
kubernetes-dashboard-web-7b469dc74c-2494l               1/1     Running            22 (30m ago)       99d
Anything else we need to know?
No response
What browsers are you seeing the problem on?
No response
Kubernetes Dashboard version
1.6.0
Kubernetes version
Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.8
Dev environment
No response
My customized values.yaml for deploying kubernetes/dashboard; the only change is the "image:" entries, which now point to the local private hub.
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# General configuration shared across resources
app:
  # Mode determines if chart should deploy a full Dashboard with all containers or just the API.
  # - dashboard - deploys all the containers
  # - api - deploys just the API
  mode: 'dashboard'
  image:
    pullPolicy: IfNotPresent
    pullSecrets: []
  scheduling:
    # Node labels for pod assignment
    # Ref: https://kubernetes.io/docs/user-guide/node-selection/
    nodeSelector: {}
  security:
    # Allow overriding csrfKey used by API/Auth containers.
    # It has to be base64 encoded random 256 bytes string.
    # If empty, it will be autogenerated.
    csrfKey: ~
    # SecurityContext to be added to pods
    # To disable set the following configuration to null:
    # securityContext: null
    securityContext:
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
    # ContainerSecurityContext to be added to containers
    # To disable set the following configuration to null:
    # containerSecurityContext: null
    containerSecurityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsUser: 1001
      runAsGroup: 2001
      capabilities:
        drop: ["ALL"]
    # Pod Disruption Budget configuration
    # Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
    podDisruptionBudget:
      enabled: false
      minAvailable: 0
      maxUnavailable: 0
    networkPolicy:
      enabled: false
      ingressDenyAll: false
      # Raw network policy spec that overrides predefined spec
      # Example:
      # spec:
      #   egress:
      #   - ports:
      #     - port: 123
      spec: {}
Common labels & annotations shared across all deployed resources
labels: {} annotations: {}
Common priority class used for all deployed resources
priorityClassName: null settings: ## Global dashboard settings global: # # Cluster name that appears in the browser window title if it is set # clusterName: "" # # Max number of items that can be displayed on each list page # itemsPerPage: 10 # # Max number of labels that are displayed by default on most views. # labelsLimit: 3 # # Number of seconds between every auto-refresh of logs # logsAutoRefreshTimeInterval: 5 # # Number of seconds between every auto-refresh of every resource. Set 0 to disable # resourceAutoRefreshTimeInterval: 10 # # Hide all access denied warnings in the notification panel # disableAccessDeniedNotifications: false # # Hide all namespaces option in namespace selection dropdown to avoid accidental selection in large clusters thus preventing OOM errors # hideAllNamespaces: false # # Namespace that should be selected by default after logging in. # defaultNamespace: default # # List of namespaces that should be presented to user without namespace list privileges. # namespaceFallbackList: # - default ## Pinned resources that will be displayed in dashboard's menu pinnedResources: [] # - kind: customresourcedefinition # # Fully qualified name of a CRD # name: prometheus.monitoring.coreos.com # # Display name # displayName: Prometheus # # Is this CRD namespaced? # namespaced: true ingress: enabled: false hosts: # Keep 'localhost' host only if you want to access Dashboard using 'kubectl port-forward ...' on: # https://localhost:8443 - localhost # - kubernetes.dashboard.domain.com ingressClassName: internal-nginx # Use only if your ingress controllers support default ingress classes. # If set to true ingressClassName will be ignored and not added to the Ingress resources. # It should fall back to using IngressClass marked as the default. useDefaultIngressClass: false # This will append our Ingress with annotations required by our default configuration. # nginx.ingress.kubernetes.io/backend-protocol: "HTTPS" # nginx.ingress.kubernetes.io/ssl-passthrough: "true" # nginx.ingress.kubernetes.io/ssl-redirect: "true" useDefaultAnnotations: true pathType: ImplementationSpecific # If path is not the default (/), rewrite-target annotation will be added to the Ingress. # It allows serving Kubernetes Dashboard on a sub-path. Make sure that the configured path # does not conflict with gateway route configuration. path: / issuer: name: selfsigned # Scope determines what kind of issuer annotation will be used on ingress resource # - default - adds 'cert-manager.io/issuer' # - cluster - adds 'cert-manager.io/cluster-issuer' # - disabled - disables cert-manager annotations scope: default tls: enabled: true # If provided it will override autogenerated secret name secretName: "" labels: {} annotations: {}
  # Use the following toleration if Dashboard can be deployed on a tainted control-plane nodes
  # - key: node-role.kubernetes.io/control-plane
  #   effect: NoSchedule
  tolerations: []
  affinity: {}
auth:
  role: auth
  image:
    repository: k8sma:5000/kubernetesui/dashboard-auth
    tag: 1.2.2
  scaling:
    replicas: 1
    revisionHistoryLimit: 10
  containers:
    ports:
      - name: auth
        containerPort: 8000
        protocol: TCP
    args: []
    env: []
    volumeMounts:
      - mountPath: /tmp
        name: tmp-volume
    # TODO: Validate configuration
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
      limits:
        cpu: 250m
        memory: 400Mi
  automountServiceAccountToken: true
  volumes:
    # Create on-disk volume to store exec logs (required)
    - name: tmp-volume
      emptyDir: {}
  nodeSelector: {}
  # Labels & annotations for Auth related resources
  labels: {}
  annotations: {}
  serviceLabels: {}
  serviceAnnotations: {}
# API deployment configuration
api:
  role: api
  image:
    repository: k8sma:5000/kubernetesui/dashboard-api
    tag: 1.10.1
  scaling:
    replicas: 1
    revisionHistoryLimit: 10
  containers:
    ports:
      - name: api
        containerPort: 8000
        protocol: TCP
    # Additional container arguments
    # Full list of arguments: https://github.com/kubernetes/dashboard/blob/master/docs/common/arguments.md
    # args:
    #   - --system-banner="Welcome to the Kubernetes Dashboard"
    args: []
    # Additional container environment variables
    # env:
    #   - name: SOME_VAR
    #     value: 'some value'
    env: []
    # Additional volume mounts
    # - mountPath: /kubeconfig
    #   name: dashboard-kubeconfig
    #   readOnly: true
    volumeMounts:
      # Create volume mount to store exec logs (required)
      - mountPath: /tmp
        name: tmp-volume
    # TODO: Validate configuration
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
      limits:
        cpu: 250m
        memory: 400Mi
  automountServiceAccountToken: true
  # Additional volumes
  # - name: dashboard-kubeconfig
  #   secret:
  #     defaultMode: 420
  #     secretName: dashboard-kubeconfig
  volumes:
    # Create on-disk volume to store exec logs (required)
    - name: tmp-volume
      emptyDir: {}
  nodeSelector: {}
  # Labels & annotations for API related resources
  labels: {}
  annotations: {}
  serviceLabels: {}
  serviceAnnotations: {}
# WEB UI deployment configuration
web:
  role: web
  image:
    repository: k8sma:5000/kubernetesui/dashboard-web
    tag: 1.6.0
  scaling:
    replicas: 1
    revisionHistoryLimit: 10
  containers:
    ports:
      - name: web
        containerPort: 8000
        protocol: TCP
    # Additional container arguments
    # Full list of arguments: https://github.com/kubernetes/dashboard/blob/master/docs/common/arguments.md
    # args:
    #   - --system-banner="Welcome to the Kubernetes Dashboard"
    args: []
    # Additional container environment variables
    # env:
    #   - name: SOME_VAR
    #     value: 'some value'
    env: []
    # Additional volume mounts
    # - mountPath: /kubeconfig
    #   name: dashboard-kubeconfig
    #   readOnly: true
    volumeMounts:
      # Create volume mount to store logs (required)
      - mountPath: /tmp
        name: tmp-volume
    # TODO: Validate configuration
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
      limits:
        cpu: 250m
        memory: 400Mi
  automountServiceAccountToken: true
  # Additional volumes
  # - name: dashboard-kubeconfig
  #   secret:
  #     defaultMode: 420
  #     secretName: dashboard-kubeconfig
  volumes:
    # Create on-disk volume to store exec logs (required)
    - name: tmp-volume
      emptyDir: {}
  nodeSelector: {}
  # Labels & annotations for WEB UI related resources
  labels: {}
  annotations: {}
  serviceLabels: {}
  serviceAnnotations: {}
# Metrics Scraper
# Container to scrape, store, and retrieve a window of time from the Metrics Server.
# refs: https://github.com/kubernetes/dashboard/tree/master/modules/metrics-scraper
metricsScraper:
  enabled: true
  role: metrics-scraper
  image:
    repository: k8sma:5000/kubernetesui/dashboard-metrics-scraper
    tag: 1.2.1
  scaling:
    replicas: 1
    revisionHistoryLimit: 10
  containers:
    ports:
      - containerPort: 8000
        protocol: TCP
    args: []
    # Additional container environment variables
    # env:
    #   - name: SOME_VAR
    #     value: 'some value'
    env: []
    # Additional volume mounts
    # - mountPath: /kubeconfig
    #   name: dashboard-kubeconfig
    #   readOnly: true
    volumeMounts:
      # Create volume mount to store logs (required)
      - mountPath: /tmp
        name: tmp-volume
    # TODO: Validate configuration
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
      limits:
        cpu: 250m
        memory: 400Mi
    livenessProbe:
      httpGet:
        scheme: HTTP
        path: /
        port: 8000
      initialDelaySeconds: 30
      timeoutSeconds: 30
  automountServiceAccountToken: true
  # Additional volumes
  # - name: dashboard-kubeconfig
  #   secret:
  #     defaultMode: 420
  #     secretName: dashboard-kubeconfig
  volumes:
    - name: tmp-volume
      emptyDir: {}
  nodeSelector: {}
  # Labels & annotations for Metrics Scraper related resources
  labels: {}
  annotations: {}
  serviceLabels: {}
  serviceAnnotations: {}
# Optional Metrics Server sub-chart configuration
# Enable this if you don't already have metrics-server enabled on your cluster and
# want to use it with dashboard metrics-scraper
# refs:
#  - https://github.com/kubernetes-sigs/metrics-server
#  - https://github.com/kubernetes-sigs/metrics-server/tree/master/charts/metrics-server
metrics-server:
  enabled: false
  args:
    - --kubelet-preferred-address-types=InternalIP
    - --kubelet-insecure-tls
# Required Kong sub-chart with DBless configuration to act as a gateway
# for our all containers.
kong:
  enabled: true
  ## Configuration reference: https://docs.konghq.com/gateway/3.6.x/reference/configuration
  env:
    dns_order: LAST,A,CNAME,AAAA,SRV
    plugins: 'off'
    nginx_worker_processes: 1
  ingressController:
    enabled: false
  image:
    repository: k8sma:5000/kong
    tag: "3.6"
  manager:
    enabled: false
  dblessConfig:
    configMap: kong-dbless-config
  proxy:
    type: ClusterIP
    http:
      enabled: false
# Optional Cert Manager sub-chart configuration
# Enable this if you don't already have cert-manager enabled on your cluster.
cert-manager:
  enabled: false
  installCRDs: true
# Optional Nginx Ingress sub-chart configuration
# Enable this if you don't already have nginx-ingress enabled on your cluster.
nginx:
  enabled: false
  controller:
    electionID: ingress-controller-leader
    ingressClassResource:
      name: internal-nginx
      default: false
      controllerValue: k8s.io/internal-ingress-nginx
    service:
      type: ClusterIP
# Extra configurations:
# - manifests
# - predefined roles
# - prometheus
# - etc...
extras:
  # Extra Kubernetes manifests to be deployed
  # manifests:
  #   - apiVersion: v1
  #     kind: ConfigMap
  #     metadata:
  #       name: additional-configmap
  #     data:
  #       mykey: myvalue
  manifests: []
  serviceMonitor:
    # Whether to create a Prometheus Operator service monitor.
    enabled: false
    # Here labels can be added to the serviceMonitor
    labels: {}
    # Here annotations can be added to the serviceMonitor
    annotations: {}
    # metrics.serviceMonitor.metricRelabelings Specify Metric Relabelings to add to the scrape endpoint
    # ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#relabelconfig
    metricRelabelings: []
    # metrics.serviceMonitor.relabelings [array] Prometheus relabeling rules
    relabelings: []
    # ServiceMonitor connection scheme. Defaults to HTTPS.
    scheme: https
    # ServiceMonitor connection tlsConfig. Defaults to {insecureSkipVerify:true}.
    tlsConfig:
      insecureSkipVerify: true
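After installing with the values above, it is worth confirming which image the Kong workload actually received. A sketch, assuming the deployment name matches the pod name shown earlier (adjust if it differs):

```shell
# Show the container images used by the Kong deployment created by the chart
kubectl get deployment -n kubernetes-dashboard kubernetes-dashboard-kong \
  -o jsonpath='{.spec.template.spec.containers[*].image}'
```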
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
I have the same issue using this repository inside a LAN with no internet access. I mirrored the helm charts and docker repositories on my local artifactory, and then updated the image overrides, but there is no override for kong :(
Warning  Failed  40m  kubelet  Failed to pull image "kong:3.8": failed to pull and unpack image "docker.io/library/kong:3.8": failed to copy: httpReadSeeker: failed open: failed to do request: Get "https://docker-images-prod.6aa30f8b08e16409b46e0173d6de2f56.r2.cloudflarestorage.com/registry-v2/docker/registry/v2/blobs/sha256/bc/bc228145b5315a9c4e5c9894c767827433eb9517b7a763118274c74247fd4163/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=f1baa2dd9b876aeb89efebbfc9e5d5f4%2F20250721%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250721T235022Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=74beb83a218a9d2f0d9f95ca41dcbbc7ec23f662c3256a1c7043fa8ac7ffa402": read tcp 10.x.x.x:57142->162.159.141.50:443: read: connection reset by peer
/remove-lifecycle stale
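On the "no override for kong" point: the parent chart forwards values to the Kong sub-chart, so its image can be overridden the same way as in the values.yaml earlier in this thread (kong.image.repository / kong.image.tag). A sketch using --set, with a placeholder registry host:

```shell
# Point the Kong sub-chart at a mirrored image (registry host here is only an example)
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
  --create-namespace --namespace kubernetes-dashboard \
  --set kong.image.repository=registry.example.local/kong \
  --set-string kong.image.tag=3.8
```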
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue as fresh with /remove-lifecycle rotten
- Close this issue with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten