Hetzner: error running task "ServerGroup/bastions": Field is required: UserData
/kind bug
1. What kops version are you running? The command `kops version` will display this information.
Client version: 1.29.0 (git-v1.29.0)
2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
v1.29.3
3. What cloud provider are you using?
Hetzner
4. What commands did you run? What is the simplest way to reproduce this issue?
```
kops create -f kops.yaml
kops update cluster --name kops.hetznerpoc.mydomain.net --yes
```
5. What happened after the commands executed?
Error while setting up the bastion host:
W0604 10:18:59.832186 23422 executor.go:141] error running task "ServerGroup/bastions" (9m59s remaining to succeed): Field is required: UserData
6. What did you expect to happen?
A bastion host is created.
7. Please provide your cluster manifest. Execute
`kops get --name my.example.com -o yaml` to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
The kops.yaml used with the `kops create -f` command:
```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: null
  name: kops.hetznerpoc.mydomain.net
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: hetzner
  configBase: s3://kops/kops.hetznerpoc.mydomain.net
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: control-plane-nbg1
      name: etcd-1
    manager:
      backupRetentionDays: 90
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: control-plane-nbg1
      name: etcd-1
    manager:
      backupRetentionDays: 90
    memoryRequest: 100Mi
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  - ::/0
  kubernetesVersion: v1.29.3
  networkCIDR: 10.2.0.0/24
  networking:
    calico: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  - ::/0
  subnets:
  - name: nbg1
    type: Private
    zone: nbg1
  - name: utility-nbg1
    type: Utility
    zone: nbg1
  topology:
    dns:
      type: None
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: kops.hetznerpoc.mydomain.net
  name: control-plane-nbg1
spec:
  image: ubuntu-22.04
  machineType: cx31
  maxSize: 0
  minSize: 0
  role: Master
  subnets:
  - nbg1
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: kops.hetznerpoc.mydomain.net
  name: bastions
spec:
  image: ubuntu-22.04
  machineType: cx11
  maxSize: 1
  minSize: 1
  role: Bastion
  subnets:
  - nbg1
---
apiVersion: kops.k8s.io/v1alpha2
kind: SSHCredential
metadata:
  creationTimestamp: null
  labels:
    kops.k8s.io/cluster: kops.hetznerpoc.mydomain.net
  name: admin
spec:
  publicKey: ssh-ed25519 ...
```
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
I have pasted only the (hopefully) relevant log section, where "UserData" is null. If further log output is required, I would need to remove sensitive information first.
I0604 10:28:06.720121 23985 executor.go:113] Tasks: 44 done / 45 total; 1 can run
I0604 10:28:06.720279 23985 executor.go:214] Executing task "ServerGroup/bastions": *hetznertasks.ServerGroup {"Name":"bastions","Lifecycle":"Sync","SSHKeys":[{"Name":"kops.hetznerpoc.mydomain.net-...","Lifecycle":"Sync","ID":21122430,"PublicKey":"ssh-ed25519 ...","Labels":{"kops.k8s.io/cluster":"kops.hetznerpoc.mydomain.net"}}],"Network":{"Name":"kops.hetznerpoc.mydomain.net","Lifecycle":"Sync","ID":"4301340","Region":"eu-central","IPRange":"10.2.0.0/24","Subnets":["10.2.0.0/24"],"Labels":{"kops.k8s.io/cluster":"kops.hetznerpoc.mydomain.net"}},"Count":1,"NeedUpdate":null,"Location":"nbg1","Size":"cx11","Image":"ubuntu-22.04","EnableIPv4":true,"EnableIPv6":false,"UserData":null,"Labels":{"kops.k8s.io/cluster":"kops.hetznerpoc.mydomain.net","kops.k8s.io/instance-group":"bastions","kops.k8s.io/instance-role":"Bastion"}}
W0604 10:28:07.063966 23985 executor.go:141] error running task "ServerGroup/bastions" (9m45s remaining to succeed): Field is required: UserData
I0604 10:28:07.064088 23985 executor.go:171] Continuing to run 1 task(s)
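To illustrate my reading of the dump above, here is a minimal Go sketch of the suspected failure mode; the `ServerGroup` type and `validate` function are hypothetical stand-ins, not the actual `hetznertasks` code. The task is serialized with `"UserData":null` for the bastion group and then fails a required-field check on exactly that field, which suggests no bootstrap script is ever generated for Bastion instance groups on Hetzner.

```go
package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-in for the Hetzner ServerGroup task; the field names
// mirror the -v10 dump above, where "UserData":null for the bastion group.
type ServerGroup struct {
	Name     string
	UserData *string
}

// The task appears to treat UserData as required for every server group,
// including bastions, which never receive a bootstrap script.
func (sg *ServerGroup) validate() error {
	if sg.UserData == nil {
		return errors.New("Field is required: UserData")
	}
	return nil
}

func main() {
	bastions := &ServerGroup{Name: "bastions"} // UserData stays nil
	if err := bastions.validate(); err != nil {
		fmt.Printf("error running task %q: %v\n", "ServerGroup/"+bastions.Name, err)
	}
}
```

If that reading is right, a fix would presumably either generate (possibly empty) user data for Bastion instance groups on Hetzner or relax the required-field check for that role.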
9. Anything else we need to know?
I am trying to set up a cluster with a private topology and a bastion host on Hetzner.
I deliberately set the control-plane size to 0 to focus on the bastion deployment. That said, I also tried the deployment with an existing control-plane InstanceGroup; the behaviour is the same.
From reading the documentation, I was under the impression that the bastion feature should already work with Hetzner, so I am filing this as kind: bug.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle stale`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Close this issue with `/close`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.
This bot triages issues according to the following rules:
- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed
You can:
- Reopen this issue with `/reopen`
- Mark this issue as fresh with `/remove-lifecycle rotten`
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned".
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.