
kong deploy failed when loading from local private hub in China

huanghaiqing1 opened this issue 8 months ago · 6 comments

What happened?

Hello, I want to deploy kubernetes/dashboard locally. I cannot use "helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard" directly, because the related images are not reachable from my site. So I mirrored the related images into a local private hub and adjusted values.yaml accordingly, but the Kong pod still fails to run in my k8s cluster. Any suggestions? Below are the related settings for your reference:

Deploy command: helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard --create-namespace --namespace kubernetes-dashboard -f values.yaml
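
In a fully offline setup the chart itself can also be pulled once and installed from a local archive instead of the remote repo. A minimal sketch, assuming the official chart repo URL; the .tgz filename depends on the chart version actually pulled:

    # One-time, from a machine with internet access
    helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
    helm pull kubernetes-dashboard/kubernetes-dashboard

    # Offline install from the downloaded archive
    helm upgrade --install kubernetes-dashboard ./kubernetes-dashboard-<version>.tgz \
      --create-namespace --namespace kubernetes-dashboard -f values.yaml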

Kong pod description (kubectl describe) showing the warning in k8s:

Name:             kubernetes-dashboard-kong-98bbbb69b-trbcd
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard-kong
Node:             k8swb/192.168.31.114
Start Time:       Fri, 24 Jan 2025 15:00:54 +0800
Labels:           app=kubernetes-dashboard-kong
                  app.kubernetes.io/component=app
                  app.kubernetes.io/instance=kubernetes-dashboard
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=kong
                  app.kubernetes.io/version=3.6
                  helm.sh/chart=kong-2.38.0
                  pod-template-hash=98bbbb69b
                  version=3.6
Annotations:      cni.projectcalico.org/containerID: 6010c141e95b01291c0144ed72c2455ccfe9f873915edd7ed5a35dd8c212a595
                  cni.projectcalico.org/podIP: 192.2.239.146/32
                  cni.projectcalico.org/podIPs: 192.2.239.146/32
                  kuma.io/gateway: enabled
                  kuma.io/service-account-token-volume: kubernetes-dashboard-kong-token
                  traffic.sidecar.istio.io/includeInboundPorts:
Status:           Running
IP:               192.2.239.146
IPs:
  IP:  192.2.239.146
Controlled By:  ReplicaSet/kubernetes-dashboard-kong-98bbbb69b
Init Containers:
  clear-stale-pid:
    Container ID:    containerd://e1d37e9ba084cd4d0fde15493f20a250e9964adf393d76c63e41afe7e047b838
    Image:           k8sma:5000/kong:3.6
    Image ID:        k8sma:5000/kong@sha256:ec2910c74bc16d05d5dcd2fdde6ff366797cb08d64be55dfa94d9eb1220c8a3e
    Port:            <none>
    Host Port:       <none>
    SeccompProfile:  RuntimeDefault
    Command:
      rm
      -vrf
      $KONG_PREFIX/pids
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 17 Apr 2025 15:56:49 +0800
      Finished:     Thu, 17 Apr 2025 15:56:49 +0800
    Ready:          True
    Restart Count:  22
    Environment:
      KONG_ADMIN_ACCESS_LOG:        /dev/stdout
      KONG_ADMIN_ERROR_LOG:         /dev/stderr
      KONG_ADMIN_GUI_ACCESS_LOG:    /dev/stdout
      KONG_ADMIN_GUI_ERROR_LOG:     /dev/stderr
      KONG_ADMIN_LISTEN:            127.0.0.1:8444 http2 ssl, [::1]:8444 http2 ssl
      KONG_CLUSTER_LISTEN:          off
      KONG_DATABASE:                off
      KONG_DECLARATIVE_CONFIG:      /kong_dbless/kong.yml
      KONG_DNS_ORDER:               LAST,A,CNAME,AAAA,SRV
      KONG_LUA_PACKAGE_PATH:        /opt/?.lua;/opt/?/init.lua;;
      KONG_NGINX_WORKER_PROCESSES:  1
      KONG_PLUGINS:                 off
      KONG_PORTAL_API_ACCESS_LOG:   /dev/stdout
      KONG_PORTAL_API_ERROR_LOG:    /dev/stderr
      KONG_PORT_MAPS:               443:8443
      KONG_PREFIX:                  /kong_prefix/
      KONG_PROXY_ACCESS_LOG:        /dev/stdout
      KONG_PROXY_ERROR_LOG:         /dev/stderr
      KONG_PROXY_LISTEN:            0.0.0.0:8443 http2 ssl, [::]:8443 http2 ssl
      KONG_PROXY_STREAM_ACCESS_LOG: /dev/stdout basic
      KONG_PROXY_STREAM_ERROR_LOG:  /dev/stderr
      KONG_ROUTER_FLAVOR:           traditional
      KONG_STATUS_ACCESS_LOG:       off
      KONG_STATUS_ERROR_LOG:        /dev/stderr
      KONG_STATUS_LISTEN:           0.0.0.0:8100, [::]:8100
      KONG_STREAM_LISTEN:           off
    Mounts:
      /kong_dbless/ from kong-custom-dbless-config-volume (rw)
      /kong_prefix/ from kubernetes-dashboard-kong-prefix-dir (rw)
      /tmp from kubernetes-dashboard-kong-tmp (rw)
Containers:
  proxy:
    Container ID:    containerd://9eeef46129c112835de7a1463c1bfba226984192c67cc9cb4076d0d357ebc085
    Image:           k8sma:5000/kong:3.6
    Image ID:        k8sma:5000/kong@sha256:ec2910c74bc16d05d5dcd2fdde6ff366797cb08d64be55dfa94d9eb1220c8a3e
    Ports:           8443/TCP, 8100/TCP
    Host Ports:      0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    State:           Waiting
      Reason:        CrashLoopBackOff
    Last State:      Terminated
      Reason:        Error
      Exit Code:     1
      Started:       Thu, 17 Apr 2025 16:13:02 +0800
      Finished:      Thu, 17 Apr 2025 16:13:03 +0800
    Ready:           False
    Restart Count:   2215
    Liveness:        http-get http://:status/status delay=5s timeout=5s period=10s #success=1 #failure=3
    Readiness:       http-get http://:status/status/ready delay=5s timeout=5s period=10s #success=1 #failure=3
    Environment:
      KONG_ADMIN_ACCESS_LOG:        /dev/stdout
      KONG_ADMIN_ERROR_LOG:         /dev/stderr
      KONG_ADMIN_GUI_ACCESS_LOG:    /dev/stdout
      KONG_ADMIN_GUI_ERROR_LOG:     /dev/stderr
      KONG_ADMIN_LISTEN:            127.0.0.1:8444 http2 ssl, [::1]:8444 http2 ssl
      KONG_CLUSTER_LISTEN:          off
      KONG_DATABASE:                off
      KONG_DECLARATIVE_CONFIG:      /kong_dbless/kong.yml
      KONG_DNS_ORDER:               LAST,A,CNAME,AAAA,SRV
      KONG_LUA_PACKAGE_PATH:        /opt/?.lua;/opt/?/init.lua;;
      KONG_NGINX_WORKER_PROCESSES:  1
      KONG_PLUGINS:                 off
      KONG_PORTAL_API_ACCESS_LOG:   /dev/stdout
      KONG_PORTAL_API_ERROR_LOG:    /dev/stderr
      KONG_PORT_MAPS:               443:8443
      KONG_PREFIX:                  /kong_prefix/
      KONG_PROXY_ACCESS_LOG:        /dev/stdout
      KONG_PROXY_ERROR_LOG:         /dev/stderr
      KONG_PROXY_LISTEN:            0.0.0.0:8443 http2 ssl, [::]:8443 http2 ssl
      KONG_PROXY_STREAM_ACCESS_LOG: /dev/stdout basic
      KONG_PROXY_STREAM_ERROR_LOG:  /dev/stderr
      KONG_ROUTER_FLAVOR:           traditional
      KONG_STATUS_ACCESS_LOG:       off
      KONG_STATUS_ERROR_LOG:        /dev/stderr
      KONG_STATUS_LISTEN:           0.0.0.0:8100, [::]:8100
      KONG_STREAM_LISTEN:           off
      KONG_NGINX_DAEMON:            off
    Mounts:
      /kong_dbless/ from kong-custom-dbless-config-volume (rw)
      /kong_prefix/ from kubernetes-dashboard-kong-prefix-dir (rw)
      /tmp from kubernetes-dashboard-kong-tmp (rw)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kubernetes-dashboard-kong-prefix-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  256Mi
  kubernetes-dashboard-kong-tmp:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  1Gi
  kubernetes-dashboard-kong-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  kong-custom-dbless-config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kong-dbless-config
    Optional:  false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason          Age                   From     Message
  ----     ------          ----                  ----     -------
  Normal   Pulled          14d (x42 over 14d)    kubelet  Container image "k8sma:5000/kong:3.6" already present on machine
  Warning  BackOff         13d (x1170 over 14d)  kubelet  Back-off restarting failed container proxy in pod kubernetes-dashboard-kong-98bbbb69b-trbcd_kubernetes-dashboard(c60daf2a-f90d-4c0b-8ec9-9c4767a19213)
  Normal   SandboxChanged  19m (x2 over 20m)     kubelet  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          19m                   kubelet  Container image "k8sma:5000/kong:3.6" already present on machine
  Normal   Created         19m                   kubelet  Created container clear-stale-pid
  Normal   Started         19m                   kubelet  Started container clear-stale-pid
  Normal   Pulled          18m (x3 over 19m)     kubelet  Container image "k8sma:5000/kong:3.6" already present on machine
  Normal   Created         18m (x3 over 19m)     kubelet  Created container proxy
  Normal   Started         18m (x3 over 19m)     kubelet  Started container proxy
  Warning  BackOff         1s (x103 over 19m)    kubelet  Back-off restarting failed container proxy in pod kubernetes-dashboard-kong-98bbbb69b-trbcd_kubernetes-dashboard(c60daf2a-f90d-4c0b-8ec9-9c4767a19213)
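
The events confirm the image is present locally, so this is not a pull failure; the proxy container starts and exits with code 1 one second later, but the describe output does not say why. A first diagnostic step (a sketch using the pod and ConfigMap names from the output above) would be to read the logs of the last crashed run and the declarative config Kong loads at startup:

    # Logs from the previous (crashed) run of the proxy container
    kubectl logs -n kubernetes-dashboard kubernetes-dashboard-kong-98bbbb69b-trbcd -c proxy --previous

    # The DBless config mounted at /kong_dbless/kong.yml
    kubectl get configmap kong-dbless-config -n kubernetes-dashboard -o yaml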

Docker images in the private hub:

REPOSITORY                                          TAG      IMAGE ID       CREATED         SIZE
kubernetesui/dashboard-web                          1.6.0    96b21277cbef   5 months ago    188MB
k8sma:5000/kubernetesui/dashboard-web               1.6.0    96b21277cbef   5 months ago    188MB
kubernetesui/dashboard-api                          1.10.1   aa69cebab7a8   5 months ago    54.6MB
k8sma:5000/kubernetesui/dashboard-api               1.10.1   aa69cebab7a8   5 months ago    54.6MB
k8sma:5000/kubernetesui/dashboard-auth              1.2.2    45a495c0887d   5 months ago    48MB
kubernetesui/dashboard-auth                         1.2.2    45a495c0887d   5 months ago    48MB
kubernetesui/dashboard-metrics-scraper              1.2.1    46e3f823d18f   5 months ago    38.2MB
k8sma:5000/kubernetesui/dashboard-metrics-scraper   1.2.1    46e3f823d18f   5 months ago    38.2MB
kong                                                3.6      6e99fd0ebd1e   10 months ago   297MB
k8sma:5000/kong                                     3.6      6e99fd0ebd1e   10 months ago   297MB
k8sma:5000/library/kong                             3.6      6e99fd0ebd1e   10 months ago   297MB
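
For reference, a typical way such a mirror is populated (a sketch assuming docker CLI access to both Docker Hub and the k8sma:5000 registry; this matches the duplicate tags in the list above):

    docker pull kong:3.6
    docker tag kong:3.6 k8sma:5000/kong:3.6
    docker push k8sma:5000/kong:3.6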

What did you expect to happen?

Kong should deploy without warnings in my k8s cluster.

How can we reproduce it (as minimally and precisely as possible)?

NAME                                                    READY   STATUS             RESTARTS           AGE
kubernetes-dashboard-api-9b8464959-2g5kf                1/1     Running            22 (30m ago)       99d
kubernetes-dashboard-auth-657444bc9f-d7d6s              1/1     Running            22 (30m ago)       99d
kubernetes-dashboard-kong-98bbbb69b-trbcd               0/1     CrashLoopBackOff   2217 (3m43s ago)   99d
kubernetes-dashboard-metrics-scraper-74bfb95c9b-5l6wm   1/1     Running            22 (30m ago)       99d
kubernetes-dashboard-web-7b469dc74c-2494l               1/1     Running            22 (30m ago)       99d

Anything else we need to know?

No response

What browsers are you seeing the problem on?

No response

Kubernetes Dashboard version

1.6.0

Kubernetes version

Client Version: v1.30.2
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.30.8

Dev environment

No response

huanghaiqing1 · Apr 17 '25 08:04

Here is my customized values.yaml for deploying kubernetes/dashboard; the only adjustment is the "image:" sections, switched to the local private hub.

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# General configuration shared across resources
app:
  # Mode determines if chart should deploy a full Dashboard with all containers or just the API.
  # - dashboard - deploys all the containers
  # - api - deploys just the API
  mode: 'dashboard'
  image:
    pullPolicy: IfNotPresent
    pullSecrets: []
  scheduling:
    # Node labels for pod assignment
    # Ref: https://kubernetes.io/docs/user-guide/node-selection/
    nodeSelector: {}
  security:
    # Allow overriding csrfKey used by API/Auth containers.
    # It has to be base64 encoded random 256 bytes string.
    # If empty, it will be autogenerated.
    csrfKey: ~
    # SecurityContext to be added to pods
    # To disable set the following configuration to null:
    # securityContext: null
    securityContext:
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
    # ContainerSecurityContext to be added to containers
    # To disable set the following configuration to null:
    # containerSecurityContext: null
    containerSecurityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsUser: 1001
      runAsGroup: 2001
      capabilities:
        drop: ["ALL"]
    # Pod Disruption Budget configuration
    # Ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
    podDisruptionBudget:
      enabled: false
      minAvailable: 0
      maxUnavailable: 0
    networkPolicy:
      enabled: false
      ingressDenyAll: false
      # Raw network policy spec that overrides predefined spec
      # Example:
      # spec:
      #   egress:
      #     - ports:
      #         - port: 123
      spec: {}
  # Common labels & annotations shared across all deployed resources
  labels: {}
  annotations: {}
  # Common priority class used for all deployed resources
  priorityClassName: null
  settings:
    ## Global dashboard settings
    global:
      # # Cluster name that appears in the browser window title if it is set
      # clusterName: ""
      # # Max number of items that can be displayed on each list page
      # itemsPerPage: 10
      # # Max number of labels that are displayed by default on most views.
      # labelsLimit: 3
      # # Number of seconds between every auto-refresh of logs
      # logsAutoRefreshTimeInterval: 5
      # # Number of seconds between every auto-refresh of every resource. Set 0 to disable
      # resourceAutoRefreshTimeInterval: 10
      # # Hide all access denied warnings in the notification panel
      # disableAccessDeniedNotifications: false
      # # Hide all namespaces option in namespace selection dropdown to avoid accidental selection in large clusters thus preventing OOM errors
      # hideAllNamespaces: false
      # # Namespace that should be selected by default after logging in.
      # defaultNamespace: default
      # # List of namespaces that should be presented to user without namespace list privileges.
      # namespaceFallbackList:
      #   - default
    ## Pinned resources that will be displayed in dashboard's menu
    pinnedResources: []
    # - kind: customresourcedefinition
    #   # Fully qualified name of a CRD
    #   name: prometheus.monitoring.coreos.com
    #   # Display name
    #   displayName: Prometheus
    #   # Is this CRD namespaced?
    #   namespaced: true
  ingress:
    enabled: false
    hosts:
      # Keep 'localhost' host only if you want to access Dashboard using 'kubectl port-forward ...' on:
      # https://localhost:8443
      - localhost
      # - kubernetes.dashboard.domain.com
    ingressClassName: internal-nginx
    # Use only if your ingress controllers support default ingress classes.
    # If set to true ingressClassName will be ignored and not added to the Ingress resources.
    # It should fall back to using IngressClass marked as the default.
    useDefaultIngressClass: false
    # This will append our Ingress with annotations required by our default configuration.
    # nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    # nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    # nginx.ingress.kubernetes.io/ssl-redirect: "true"
    useDefaultAnnotations: true
    pathType: ImplementationSpecific
    # If path is not the default (/), rewrite-target annotation will be added to the Ingress.
    # It allows serving Kubernetes Dashboard on a sub-path. Make sure that the configured path
    # does not conflict with gateway route configuration.
    path: /
    issuer:
      name: selfsigned
      # Scope determines what kind of issuer annotation will be used on ingress resource
      # - default - adds 'cert-manager.io/issuer'
      # - cluster - adds 'cert-manager.io/cluster-issuer'
      # - disabled - disables cert-manager annotations
      scope: default
    tls:
      enabled: true
      # If provided it will override autogenerated secret name
      secretName: ""
    labels: {}
    annotations: {}
  # Use the following toleration if Dashboard can be deployed on a tainted control-plane nodes
  # - key: node-role.kubernetes.io/control-plane
  #   effect: NoSchedule
  tolerations: []
  affinity: {}

auth:
  role: auth
  image:
    repository: k8sma:5000/kubernetesui/dashboard-auth
    tag: 1.2.2
  scaling:
    replicas: 1
    revisionHistoryLimit: 10
  containers:
    ports:
      - name: auth
        containerPort: 8000
        protocol: TCP
    args: []
    env: []
    volumeMounts:
      - mountPath: /tmp
        name: tmp-volume
    # TODO: Validate configuration
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
      limits:
        cpu: 250m
        memory: 400Mi
  automountServiceAccountToken: true
  volumes:
    # Create on-disk volume to store exec logs (required)
    - name: tmp-volume
      emptyDir: {}
  nodeSelector: {}
  # Labels & annotations for Auth related resources
  labels: {}
  annotations: {}
  serviceLabels: {}
  serviceAnnotations: {}

# API deployment configuration
api:
  role: api
  image:
    repository: k8sma:5000/kubernetesui/dashboard-api
    tag: 1.10.1
  scaling:
    replicas: 1
    revisionHistoryLimit: 10
  containers:
    ports:
      - name: api
        containerPort: 8000
        protocol: TCP
    # Additional container arguments
    # Full list of arguments: https://github.com/kubernetes/dashboard/blob/master/docs/common/arguments.md
    # args:
    #   - --system-banner="Welcome to the Kubernetes Dashboard"
    args: []
    # Additional container environment variables
    # env:
    #   - name: SOME_VAR
    #     value: 'some value'
    env: []
    # Additional volume mounts
    # - mountPath: /kubeconfig
    #   name: dashboard-kubeconfig
    #   readOnly: true
    volumeMounts:
      # Create volume mount to store exec logs (required)
      - mountPath: /tmp
        name: tmp-volume
    # TODO: Validate configuration
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
      limits:
        cpu: 250m
        memory: 400Mi
  automountServiceAccountToken: true
  # Additional volumes
  # - name: dashboard-kubeconfig
  #   secret:
  #     defaultMode: 420
  #     secretName: dashboard-kubeconfig
  volumes:
    # Create on-disk volume to store exec logs (required)
    - name: tmp-volume
      emptyDir: {}
  nodeSelector: {}
  # Labels & annotations for API related resources
  labels: {}
  annotations: {}
  serviceLabels: {}
  serviceAnnotations: {}

# WEB UI deployment configuration
web:
  role: web
  image:
    repository: k8sma:5000/kubernetesui/dashboard-web
    tag: 1.6.0
  scaling:
    replicas: 1
    revisionHistoryLimit: 10
  containers:
    ports:
      - name: web
        containerPort: 8000
        protocol: TCP
    # Additional container arguments
    # Full list of arguments: https://github.com/kubernetes/dashboard/blob/master/docs/common/arguments.md
    # args:
    #   - --system-banner="Welcome to the Kubernetes Dashboard"
    args: []
    # Additional container environment variables
    # env:
    #   - name: SOME_VAR
    #     value: 'some value'
    env: []
    # Additional volume mounts
    # - mountPath: /kubeconfig
    #   name: dashboard-kubeconfig
    #   readOnly: true
    volumeMounts:
      # Create volume mount to store logs (required)
      - mountPath: /tmp
        name: tmp-volume
    # TODO: Validate configuration
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
      limits:
        cpu: 250m
        memory: 400Mi
  automountServiceAccountToken: true
  # Additional volumes
  # - name: dashboard-kubeconfig
  #   secret:
  #     defaultMode: 420
  #     secretName: dashboard-kubeconfig
  volumes:
    # Create on-disk volume to store exec logs (required)
    - name: tmp-volume
      emptyDir: {}
  nodeSelector: {}
  # Labels & annotations for WEB UI related resources
  labels: {}
  annotations: {}
  serviceLabels: {}
  serviceAnnotations: {}

# Metrics Scraper
# Container to scrape, store, and retrieve a window of time from the Metrics Server.
# refs: https://github.com/kubernetes/dashboard/tree/master/modules/metrics-scraper
metricsScraper:
  enabled: true
  role: metrics-scraper
  image:
    repository: k8sma:5000/kubernetesui/dashboard-metrics-scraper
    tag: 1.2.1
  scaling:
    replicas: 1
    revisionHistoryLimit: 10
  containers:
    ports:
      - containerPort: 8000
        protocol: TCP
    args: []
    # Additional container environment variables
    # env:
    #   - name: SOME_VAR
    #     value: 'some value'
    env: []
    # Additional volume mounts
    # - mountPath: /kubeconfig
    #   name: dashboard-kubeconfig
    #   readOnly: true
    volumeMounts:
      # Create volume mount to store logs (required)
      - mountPath: /tmp
        name: tmp-volume
    # TODO: Validate configuration
    resources:
      requests:
        cpu: 100m
        memory: 200Mi
      limits:
        cpu: 250m
        memory: 400Mi
    livenessProbe:
      httpGet:
        scheme: HTTP
        path: /
        port: 8000
      initialDelaySeconds: 30
      timeoutSeconds: 30
  automountServiceAccountToken: true
  # Additional volumes
  # - name: dashboard-kubeconfig
  #   secret:
  #     defaultMode: 420
  #     secretName: dashboard-kubeconfig
  volumes:
    - name: tmp-volume
      emptyDir: {}
  nodeSelector: {}
  # Labels & annotations for Metrics Scraper related resources
  labels: {}
  annotations: {}
  serviceLabels: {}
  serviceAnnotations: {}

# Optional Metrics Server sub-chart configuration
# Enable this if you don't already have metrics-server enabled on your cluster and
# want to use it with dashboard metrics-scraper
# refs:
#   - https://github.com/kubernetes-sigs/metrics-server
#   - https://github.com/kubernetes-sigs/metrics-server/tree/master/charts/metrics-server
metrics-server:
  enabled: false
  args:
    - --kubelet-preferred-address-types=InternalIP
    - --kubelet-insecure-tls

# Required Kong sub-chart with DBless configuration to act as a gateway
# for our all containers.
kong:
  enabled: true
  ## Configuration reference: https://docs.konghq.com/gateway/3.6.x/reference/configuration
  env:
    dns_order: LAST,A,CNAME,AAAA,SRV
    plugins: 'off'
    nginx_worker_processes: 1
  ingressController:
    enabled: false
  image:
    repository: k8sma:5000/kong
    tag: "3.6"
  manager:
    enabled: false
  dblessConfig:
    configMap: kong-dbless-config
  proxy:
    type: ClusterIP
    http:
      enabled: false

# Optional Cert Manager sub-chart configuration
# Enable this if you don't already have cert-manager enabled on your cluster.
cert-manager:
  enabled: false
  installCRDs: true

# Optional Nginx Ingress sub-chart configuration
# Enable this if you don't already have nginx-ingress enabled on your cluster.
nginx:
  enabled: false
  controller:
    electionID: ingress-controller-leader
    ingressClassResource:
      name: internal-nginx
      default: false
      controllerValue: k8s.io/internal-ingress-nginx
    service:
      type: ClusterIP

# Extra configurations:
# - manifests
# - predefined roles
# - prometheus
# - etc...
extras:
  # Extra Kubernetes manifests to be deployed
  # manifests:
  #   - apiVersion: v1
  #     kind: ConfigMap
  #     metadata:
  #       name: additional-configmap
  #     data:
  #       mykey: myvalue
  manifests: []
  serviceMonitor:
    # Whether to create a Prometheus Operator service monitor.
    enabled: false
    # Here labels can be added to the serviceMonitor
    labels: {}
    # Here annotations can be added to the serviceMonitor
    annotations: {}
    # metrics.serviceMonitor.metricRelabelings Specify Metric Relabelings to add to the scrape endpoint
    # ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#relabelconfig
    metricRelabelings: []
    # metrics.serviceMonitor.relabelings [array] Prometheus relabeling rules
    relabelings: []
    # ServiceMonitor connection scheme. Defaults to HTTPS.
    scheme: https
    # ServiceMonitor connection tlsConfig. Defaults to {insecureSkipVerify:true}.
    tlsConfig:
      insecureSkipVerify: true
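
One way to sanity-check that these overrides actually reach the rendered Kong Deployment is to template the chart locally with the same values file and inspect the image fields, e.g.:

    helm template kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
      --namespace kubernetes-dashboard -f values.yaml | grep 'image:'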

huanghaiqing1 · Apr 17 '25 08:04

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Jul 16 '25 08:07

I have the same issue using this repository inside a LAN with no internet access. I mirrored the helm charts and docker repositories on my local artifactory, and then updated the image overrides, but there is no override for kong :(

Warning  Failed  40m  kubelet  Failed to pull image "kong:3.8": failed to pull and unpack image "docker.io/library/kong:3.8": failed to copy: httpReadSeeker: failed open: failed to do request: Get "https://docker-images-prod.6aa30f8b08e16409b46e0173d6de2f56.r2.cloudflarestorage.com/registry-v2/docker/registry/v2/blobs/sha256/bc/bc228145b5315a9c4e5c9894c767827433eb9517b7a763118274c74247fd4163/data?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=f1baa2dd9b876aeb89efebbfc9e5d5f4%2F20250721%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20250721T235022Z&X-Amz-Expires=1200&X-Amz-SignedHeaders=host&X-Amz-Signature=74beb83a218a9d2f0d9f95ca41dcbbc7ec23f662c3256a1c7043fa8ac7ffa402": read tcp 10.x.x.x:57142->162.159.141.50:443: read: connection reset by peer
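
Note: the values.yaml posted earlier in this thread does override the Kong image through the kong sub-chart, so depending on the chart version in use an override along these lines may work (k8sma:5000 is the earlier reporter's registry; substitute your own mirror, and 3.8 matches the tag from the failed pull above):

    kong:
      image:
        repository: k8sma:5000/kong   # your private mirror of docker.io/library/kong
        tag: "3.8"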

michaelday008 · Jul 22 '25 00:07

/remove-lifecycle stale

michaelday008 · Jul 22 '25 00:07

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-triage-robot · Oct 20 '25 00:10

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-triage-robot · Nov 19 '25 01:11