Kops 1.23.* runc hash is not right inside nodeup
/kind bug
1. What kops version are you running? The command kops version will display this information.
1.23.1 and 1.23.2
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
1.23.7
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
kops rolling-update --yes
5. What happened after the commands executed?
We run kops in an environment with no internet access. We were able to run kops without any problem up to version 1.22.5.
When a new master comes up, I see the error below:
Jun 07 15:21:42 ip-10-.ec2.internal nodeup[3095]: I0607 15:21:42.325709 3095 assetstore.go:340] added asset "runc" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/usr/local/sbin/runc"}
Jun 07 15:21:42 ip-10-.ec2.internal nodeup[3095]: I0607 15:21:42.325769 3095 files.go:136] Hash did not match for "/var/cache/nodeup/sha256:ab1c67fbcbdddbe481e48a55cf0ef9a86b38b166b5079e0010737fd87d7454bb_runc_amd64": actual=sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 vs expected=sha256:ab1c67fbcbdddbe481e48a55cf0ef9a86b38b166b5079e0010737fd87d7454bb
Jun 07 15:21:42 ip-10-***.ec2.internal nodeup[3095]: I0607 15:21:42.325820 3095 http.go:82] Downloading "https://github.com/opencontainers/runc/releases/download/v1.1.0/runc.amd64"
6. What did you expect to happen?
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
creationTimestamp: "2022-06-06T14:17:43Z"
generation: 5
name: testkops.k8s.local
spec:
api:
loadBalancer:
class: Network
type: Internal
assets:
containerProxy: ***
authorization:
rbac: {}
channel: https://***/stable
cloudLabels:
ClusterName: testkops.k8s.local
Department: ****
NonStop: "True"
Platform: ****
cloudProvider: aws
configBase: s3://****/testkops.k8s.local
containerRuntime: containerd
containerd:
packages:
hashAmd64: a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f
urlAmd64: https://****/containerd/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz
registryMirrors:
'*':
- https://****
etcdClusters:
- cpuRequest: 200m
etcdMembers:
- encryptedVolume: true
instanceGroup: master-us-east-1a-1
name: a-1
- encryptedVolume: true
instanceGroup: master-us-east-1b-1
name: b-1
- encryptedVolume: true
instanceGroup: master-us-east-1a-2
name: a-2
manager:
env:
- name: ETCD_MANAGER_HOURLY_BACKUPS_RETENTION
value: 3d
- name: ETCD_MANAGER_DAILY_BACKUPS_RETENTION
value: 7d
memoryRequest: 100Mi
name: main
- cpuRequest: 100m
etcdMembers:
- encryptedVolume: true
instanceGroup: master-us-east-1a-1
name: a-1
- encryptedVolume: true
instanceGroup: master-us-east-1b-1
name: b-1
- encryptedVolume: true
instanceGroup: master-us-east-1a-2
name: a-2
memoryRequest: 100Mi
name: events
fileAssets:
- content: "apiVersion: audit.k8s.io/v1\nkind: Policy\nrules:\n- level: RequestResponse\n
\ resources:\n - group: \"\"\n resources: [\"deployments\"]\n- level: Request\n
\ resources:\n - group: \"\"\n resources: [\"configmaps\"]\n namespaces:
[\"kube-system\"]\n- level: Metadata\n resources:\n - group: \"\"\n resources:
[\"configmaps\", \"secrets\"] \n"
name: audit-policy-config
path: /srv/kubernetes/kube-apiserver/audit-policy-config.yaml
roles:
- Master
iam:
allowContainerRegistry: true
legacy: false
kubeAPIServer:
admissionControl:
- AlwaysPullImages
- NodeRestriction
auditLogMaxAge: 5
auditLogMaxBackups: 2
auditLogMaxSize: 100
auditLogPath: /var/log/kube-apiserver-audit.log
auditPolicyFile: /srv/kubernetes/kube-apiserver/audit-policy-config.yaml
enableProfiling: false
oidcClientID: kubernetes-client
oidcGroupsClaim: groups
oidcGroupsPrefix: 'oidc:'
oidcIssuerURL: https://*****/auth/realms/master
oidcUsernameClaim: preferred_username
oidcUsernamePrefix: 'oidc:'
kubeControllerManager:
clusterCIDR: 172.21.128.0/17
enableProfiling: false
terminatedPodGCThreshold: 5
kubeDNS:
coreDNSImage: ****/coredns/coredns:v1.9.1
memoryLimit: 2Gi
provider: CoreDNS
kubeProxy:
clusterCIDR: 172.21.128.0/17
kubeScheduler:
enableProfiling: false
kubelet:
anonymousAuth: false
authenticationTokenWebhook: true
authorizationMode: Webhook
eventQPS: 0
protectKernelDefaults: true
readOnlyPort: 0
kubernetesApiAccess:
- 0.0.0.0/0
kubernetesVersion: https://*****/repository/kubernetes-releases/release/v1.23.7
masterInternalName: api.internal.testkops.k8s.local
masterPublicName: api.testkops.k8s.local
networkCIDR: ****24
networkID: vpc-****
networking:
calico:
awsSrcDstCheck: Disable
crossSubnet: true
encapsulationMode: ipip
typhaReplicas: 3
nonMasqueradeCIDR: 172.21.0.0/16
sshAccess:
- 0.0.0.0/0
sshKeyName: ****
subnets:
- cidr: ****/25
id: subnet-*****
name: us-east-1a
type: Private
zone: us-east-1a
- cidr: **/25
id: subnet-**
name: us-east-1b
type: Private
zone: us-east-1b
- cidr: *****/25
id: subnet-**
name: utility-us-east-1a
type: Utility
zone: us-east-1a
- cidr: ****/25
id: subnet-****
name: utility-us-east-1b
type: Utility
zone: us-east-1b
topology:
dns:
type: Private
masters: private
nodes: private
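Note on the manifest above: as far as I can tell, spec.assets.containerProxy only redirects container image pulls to the internal proxy, and spec.containerd.packages only covers the containerd bundle itself, so standalone file assets such as runc are not remapped by either setting. The relevant extract, with the mirror redacted as above:

spec:
  assets:
    containerProxy: ***   # proxies container image pulls only
  containerd:
    packages:
      hashAmd64: a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f
      urlAmd64: https://****/containerd/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz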
8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or into a gist and provide the gist link here.
9. Anything else do we need to know?
Our kops cluster runs in offline mode, without any internet access.
Please try setting containerd.version: 1.6.6. kOps has no way of knowing the version of the containerd package.
I have added the version as shown below:
containerd:
version: 1.6.6
packages:
hashAmd64: a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f
urlAmd64: https://*nexus**/repository/**/containerd/v1.6.6/cri-containerd-cni-1.6.6-linux-amd64.tar.gz
hashArm64: 8f5a4a2fdaa2891bd05221579dda1f9cdbddb3ed5fd6d9dc673ba13dffe48436
urlArm64: https://*nexus*/repository/**/containerd/v1.6.6/cri-containerd-cni-1.6.6-linux-arm64.tar.gz
I am still getting the same error:
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.687661 3049 files.go:133] Hash matched for "/var/cache/nodeup/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz": sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.687716 3049 assetstore.go:272] added asset "cri-containerd-cni-1.6.6-linux-amd64.tar.gz" for &{"/var/cache/nodeup/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.687870 3049 assetstore.go:340] added asset "10-containerd-net.conflist" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/etc/cni/net.d/10-containerd-net.conflist"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.687897 3049 assetstore.go:340] added asset "crictl.yaml" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/etc/crictl.yaml"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.687961 3049 assetstore.go:340] added asset "containerd.service" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/etc/systemd/system/containerd.service"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688066 3049 assetstore.go:340] added asset "bandwidth" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/bandwidth"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688090 3049 assetstore.go:340] added asset "bridge" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/bridge"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688225 3049 assetstore.go:340] added asset "dhcp" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/dhcp"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688252 3049 assetstore.go:340] added asset "firewall" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/firewall"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688271 3049 assetstore.go:340] added asset "host-device" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/host-device"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688311 3049 assetstore.go:340] added asset "host-local" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/host-local"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688335 3049 assetstore.go:340] added asset "ipvlan" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/ipvlan"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688355 3049 assetstore.go:340] added asset "loopback" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/loopback"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688374 3049 assetstore.go:340] added asset "macvlan" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/macvlan"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688399 3049 assetstore.go:340] added asset "portmap" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/portmap"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688422 3049 assetstore.go:340] added asset "ptp" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/ptp"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688462 3049 assetstore.go:340] added asset "sbr" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/sbr"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688489 3049 assetstore.go:340] added asset "static" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/static"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688509 3049 assetstore.go:340] added asset "tuning" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/tuning"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688530 3049 assetstore.go:340] added asset "vlan" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/vlan"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688566 3049 assetstore.go:340] added asset "vrf" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/cni/bin/vrf"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688677 3049 assetstore.go:340] added asset "master.yaml" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/containerd/cluster/gce/cloud-init/master.yaml"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688707 3049 assetstore.go:340] added asset "node.yaml" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/containerd/cluster/gce/cloud-init/node.yaml"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688732 3049 assetstore.go:340] added asset "cni.template" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/containerd/cluster/gce/cni.template"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688834 3049 assetstore.go:340] added asset "configure.sh" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/containerd/cluster/gce/configure.sh"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688862 3049 assetstore.go:340] added asset "env" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/containerd/cluster/gce/env"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688887 3049 assetstore.go:340] added asset "version" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/opt/containerd/cluster/version"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.688983 3049 assetstore.go:340] added asset "containerd" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/usr/local/bin/containerd"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.689009 3049 assetstore.go:340] added asset "containerd-shim" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/usr/local/bin/containerd-shim"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.689031 3049 assetstore.go:340] added asset "containerd-shim-runc-v1" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/usr/local/bin/containerd-shim-runc-v1"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.689059 3049 assetstore.go:340] added asset "containerd-shim-runc-v2" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/usr/local/bin/containerd-shim-runc-v2"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.689082 3049 assetstore.go:340] added asset "containerd-stress" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/usr/local/bin/containerd-stress"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.689181 3049 assetstore.go:340] added asset "crictl" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/usr/local/bin/crictl"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.689208 3049 assetstore.go:340] added asset "critest" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/usr/local/bin/critest"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.689229 3049 assetstore.go:340] added asset "ctd-decoder" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/usr/local/bin/ctd-decoder"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.689252 3049 assetstore.go:340] added asset "ctr" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/usr/local/bin/ctr"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.689301 3049 assetstore.go:340] added asset "runc" for &{"/var/cache/nodeup/extracted/sha256:a64568c8ce792dd73859ce5f336d5485fcbceab15dc3e06d5d1bc1c3353fa20f_cri-containerd-cni-1_6_6-linux-amd64_tar_gz/usr/local/sbin/runc"}
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.689361 3049 files.go:136] Hash did not match for "/var/cache/nodeup/sha256:ab1c67fbcbdddbe481e48a55cf0ef9a86b38b166b5079e0010737fd87d7454bb_runc_amd64": actual=sha256:e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855 vs expected=sha256:ab1c67fbcbdddbe481e48a55cf0ef9a86b38b166b5079e0010737fd87d7454bb
Jun 08 14:41:04 ip-***.ec2.internal nodeup[3049]: I0608 14:41:04.689408 3049 http.go:82] Downloading "https://github.com/opencontainers/runc/releases/download/v1.1.0/runc.amd64"
Jun 08 14:41:34 ip-***.ec2.internal nodeup[3049]: W0608 14:41:34.691505 3049 assetstore.go:251] error downloading url "https://github.com/opencontainers/runc/releases/download/v1.1.0/runc.amd64": error doing HTTP fetch of "https://github.com/opencontainers/runc/releases/download/v1.1.0/runc.amd64": Get "https://github.com/opencontainers/runc/releases/download/v1.1.0/runc.amd64": dial tcp 140.82.113.4:443: i/o timeout
Jun 08 14:41:34 ip-***.ec2.internal nodeup[3049]: W0608 14:41:34.691556 3049 main.go:133] got error running nodeup (will retry in 30s): error adding asset "ab1c67fbcbdddbe481e48a55cf0ef9a86b38b166b5079e0010737fd87d7454bb@https://github.com/opencontainers/runc/releases/download/v1.1.0/runc.amd64": error doing HTTP fetch of "https://github.com/opencontainers/runc/releases/download/v1.1.0/runc.amd64": Get "https://github.com/opencontainers/runc/releases/download/v1.1.0/runc.amd64": dial tcp 140.82.113.4:443: i/o timeout
I can see that the runc path is part of the assets, with a GitHub URL. Is there any way to override that?
I think this approach may be of more help than what you are doing at the moment: https://kops.sigs.k8s.io/operations/asset-repository
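Roughly, that approach adds a file repository alongside the existing container proxy so that file assets (containerd, runc, kubelet, and so on) are rewritten to an internal mirror. A minimal sketch, using placeholder mirror hosts; check the linked docs for the exact fields supported by your kops version:

spec:
  assets:
    containerProxy: registry.example.internal              # existing image proxy (placeholder host)
    fileRepository: https://files.example.internal/kops    # file asset mirror (placeholder host)

With fileRepository set, kops get assets --copy (available in recent kops releases) should upload the referenced file assets, including the runc binary, to that mirror, so nodeup no longer needs to reach github.com.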
There is no way to override the runc URL and hash at the moment, as there is for the containerd package, but I would be happy to review a PR that adds that.
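For anyone finding this later: newer kops releases added a runc section under containerd for exactly this purpose. The sketch below is an assumption about those later releases, not something available in 1.23; verify the field names against the cluster spec documentation for your version:

spec:
  containerd:
    runc:
      version: 1.1.0   # assumed field; requires a kops release with runc configuration support
      packages:
        urlAmd64: https://files.example.internal/runc/v1.1.0/runc.amd64   # placeholder mirror URL
        hashAmd64: ab1c67fbcbdddbe481e48a55cf0ef9a86b38b166b5079e0010737fd87d7454bb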
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.