Can't create kOps cluster on Digital Ocean cloud platform: "error creating droplet with name"
/kind bug
1. What kops version are you running? The command kops version will display this information.
1.24.1
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
1.24.3 (per kubernetesVersion in the cluster manifest below)
3. What cloud provider are you using?
Digital Ocean
4. What commands did you run? What is the simplest way to reproduce this issue?
kops update cluster --name=c1.cbttrevor.com --yes
I0807 17:49:17.047312 28924 s3context.go:90] Found S3_ENDPOINT="sfo3.digitaloceanspaces.com", using as non-AWS S3 backend
I0807 17:49:37.439077 28924 executor.go:111] Tasks: 0 done / 45 total; 35 can run
W0807 17:49:37.616532 28924 vfs_castore.go:379] CA private key was not found
I0807 17:49:37.729907 28924 keypair.go:225] Issuing new certificate: "etcd-manager-ca-main"
I0807 17:49:37.729907 28924 keypair.go:225] Issuing new certificate: "etcd-manager-ca-events"
I0807 17:49:37.729907 28924 keypair.go:225] Issuing new certificate: "etcd-peers-ca-main"
I0807 17:49:37.844950 28924 keypair.go:225] Issuing new certificate: "etcd-clients-ca"
I0807 17:49:37.860498 28924 keypair.go:225] Issuing new certificate: "etcd-peers-ca-events"
I0807 17:49:37.860498 28924 keypair.go:225] Issuing new certificate: "apiserver-aggregator-ca"
W0807 17:49:37.982185 28924 vfs_castore.go:379] CA private key was not found
I0807 17:49:38.097638 28924 keypair.go:225] Issuing new certificate: "kubernetes-ca"
I0807 17:49:38.105293 28924 keypair.go:225] Issuing new certificate: "service-account"
I0807 17:49:40.200014 28924 executor.go:111] Tasks: 35 done / 45 total; 5 can run
I0807 17:49:40.419955 28924 keypair.go:225] Issuing new certificate: "kubelet"
I0807 17:49:40.427261 28924 keypair.go:225] Issuing new certificate: "kube-proxy"
I0807 17:49:40.779120 28924 executor.go:111] Tasks: 40 done / 45 total; 3 can run
W0807 17:49:46.357736 28924 executor.go:139] error running task "Droplet/master-sfo3-1.masters.c1.cbttrevor.com" (9m54s remaining to succeed): Error creating droplet with Name=master-sfo3-1.masters.c1.cbttrevor.com
I0807 17:49:46.357736 28924 executor.go:111] Tasks: 42 done / 45 total; 3 can run
W0807 17:49:51.990725 28924 executor.go:139] error running task "Droplet/nodes-sfo3.c1.cbttrevor.com" (9m54s remaining to succeed): Error creating droplet with Name=nodes-sfo3.c1.cbttrevor.com
W0807 17:49:51.990725 28924 executor.go:139] error running task "Droplet/master-sfo3-1.masters.c1.cbttrevor.com" (9m48s remaining to succeed): Error creating droplet with Name=master-sfo3-1.masters.c1.cbttrevor.com
I0807 17:49:51.990725 28924 executor.go:111] Tasks: 43 done / 45 total; 2 can run
W0807 17:49:58.315287 28924 executor.go:139] error running task "Droplet/nodes-sfo3.c1.cbttrevor.com" (9m48s remaining to succeed): Error creating droplet with Name=nodes-sfo3.c1.cbttrevor.com
W0807 17:49:58.315287 28924 executor.go:139] error running task "Droplet/master-sfo3-1.masters.c1.cbttrevor.com" (9m42s remaining to succeed): Error creating droplet with Name=master-sfo3-1.masters.c1.cbttrevor.com
I0807 17:49:58.315287 28924 executor.go:155] No progress made, sleeping before retrying 2 task(s)
I0807 17:50:08.315506 28924 executor.go:111] Tasks: 43 done / 45 total; 2 can run
W0807 17:50:14.665869 28924 executor.go:139] error running task "Droplet/nodes-sfo3.c1.cbttrevor.com" (9m31s remaining to succeed): Error creating droplet with Name=nodes-sfo3.c1.cbttrevor.com
W0807 17:50:14.665869 28924 executor.go:139] error running task "Droplet/master-sfo3-1.masters.c1.cbttrevor.com" (9m26s remaining to succeed): Error creating droplet with Name=master-sfo3-1.masters.c1.cbttrevor.com
I0807 17:50:14.665869 28924 executor.go:155] No progress made, sleeping before retrying 2 task(s)
I0807 17:50:24.666482 28924 executor.go:111] Tasks: 43 done / 45 total; 2 can run
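For reference, kops targeting DigitalOcean picks up its credentials and state store from environment variables. The sketch below uses placeholder values; only S3_ENDPOINT and the do:// state-store path are taken from the output above, everything else is an assumption about this environment:
# Environment assumed before running the kops commands against DigitalOcean.
export KOPS_STATE_STORE=do://cbtnuggets                  # Spaces bucket backing configBase
export S3_ENDPOINT=sfo3.digitaloceanspaces.com           # confirmed in the log above
export S3_ACCESS_KEY_ID=<spaces-access-key>              # placeholder
export S3_SECRET_ACCESS_KEY=<spaces-secret-key>          # placeholder
export DIGITALOCEAN_ACCESS_TOKEN=<do-api-token>          # must have write scope to create droplets
# Some kops releases also gate DigitalOcean support behind a feature flag:
# export KOPS_FEATURE_FLAGS=AlphaAllowDO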
5. What happened after the commands executed?
See above
6. What did you expect to happen?
Cluster is created successfully
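Concretely, "created successfully" would mean kops update cluster completes all 45 tasks and a follow-up validation reports the master and node instance groups as ready. A sketch of the usual check (the --wait duration is arbitrary):
# Poll the new cluster until the API server and nodes respond, or time out.
kops validate cluster --name=c1.cbttrevor.com --wait 10m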
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: "2022-08-07T23:48:33Z"
  name: c1.cbttrevor.com
spec:
  api:
    dns: {}
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: digitalocean
  configBase: do://cbtnuggets/c1.cbttrevor.com
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-sfo3-1
      name: etcd-1
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-sfo3-1
      name: etcd-1
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  - ::/0
  kubernetesVersion: 1.24.3
  masterPublicName: api.c1.cbttrevor.com
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - name: sfo3
    region: sfo3
    type: Public
    zone: sfo3
  topology:
    dns:
      type: Public
    masters: public
    nodes: public
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-08-07T23:48:33Z"
  labels:
    kops.k8s.io/cluster: c1.cbttrevor.com
  name: master-sfo3-1
spec:
  image: ubuntu-20-04-x64
  machineType: s-2vcpu-4gb
  manager: CloudGroup
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-sfo3-1
  role: Master
  subnets:
  - sfo3
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: "2022-08-07T23:48:33Z"
  labels:
    kops.k8s.io/cluster: c1.cbttrevor.com
  name: nodes-sfo3
spec:
  image: ubuntu-20-04-x64
  machineType: s-2vcpu-4gb
  manager: CloudGroup
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: nodes-sfo3
  role: Node
  subnets:
  - sfo3
8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.
9. Anything else we need to know?
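One way to narrow down the droplet-create failure outside of kops is to exercise the same region, size, and image slug from the instance groups above directly with doctl, DigitalOcean's official CLI. This is a sketch; the probe droplet name is arbitrary:
# Request a droplet with the exact spec kops is asking for, then clean it up.
doctl compute droplet create kops-probe-sfo3 --region sfo3 --size s-2vcpu-4gb --image ubuntu-20-04-x64 --wait
doctl compute droplet delete kops-probe-sfo3 --force
If this fails as well, the error doctl prints (quota, size unavailable in sfo3, invalid image slug, token scope) is likely the same underlying API error hidden behind kops' "Error creating droplet" message.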
@pcgeek86 - can you please run with verbose -v=10 and paste the report here?
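For anyone reproducing this, a straightforward way to capture that output (the log file name is arbitrary):
# Re-run the failing update at maximum verbosity and keep a copy of the output,
# which is where any logged detail about the underlying DigitalOcean API error will appear.
kops update cluster --name=c1.cbttrevor.com --yes -v=10 2>&1 | tee kops-update-v10.log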
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Mark this issue or PR as rotten with /lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue or PR as fresh with /remove-lifecycle rotten
- Close this issue or PR with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Reopen this issue with /reopen
- Mark this issue as fresh with /remove-lifecycle rotten
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
In response to this:
/close not-planned
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.