sig-windows-tools
Unable to deploy flannel
Describe the bug: Whenever I try to deploy flannel, it keeps crash-looping with the following error:
W0104 05:45:01.747597 3296 client_config.go:617] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
W0104 05:45:01.748130 3296 client_config.go:622] error creating inClusterConfig, falling back to default config: open /var/run/secrets/kubernetes.io/serviceaccount/token: The system cannot find the path specified.
E0104 05:45:01.749443 3296 main.go:226] Failed to create SubnetManager: fail to create kubernetes config: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable
Setting the KUBERNETES_MASTER env variable does nothing.
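The fallback order implied by the log above can be sketched as follows. This is a simplified illustration of client-go's behavior, not its real API; the function and parameter names are mine. Since flanneld passes neither --kubeconfig nor --master, it falls through to the in-cluster config, which requires the projected service account token to exist on disk inside the pod.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// Path client-go reads for the in-cluster config (visible in the W0104 log line).
const tokenPath = "/var/run/secrets/kubernetes.io/serviceaccount/token"

// loadConfig is an illustrative sketch of the fallback chain, not client-go's API.
func loadConfig(kubeconfig, master string, tokenExists bool, kubeconfigEnv string) (string, error) {
	if kubeconfig != "" || master != "" {
		return "explicit config", nil // --kubeconfig / --master take precedence
	}
	if tokenExists {
		return "in-cluster config", nil // token + CA cert projected into the pod
	}
	if kubeconfigEnv != "" {
		return "default config", nil // $KUBECONFIG / default loading rules
	}
	// Matches the E0104 error: every source came up empty.
	return "", errors.New("no configuration has been provided")
}

func main() {
	_, statErr := os.Stat(tokenPath)
	cfg, err := loadConfig("", "", statErr == nil, os.Getenv("KUBECONFIG"))
	fmt.Println(cfg, err)
}
```

In the reported failure the token file is missing at that path, so the in-cluster branch is skipped and the chain ends in the "no configuration has been provided" error.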
To Reproduce: Build the flannel image, adjust the placeholders in the deployment file, and apply it.
Expected behavior: No crash loop.
Kubernetes (please complete the following information):
- Windows Server version: Windows Server 2019 Datacenter Evaluation
- Kernel: 10.0.17763.737
- Kubernetes Version: 1.27.3
- CNI: flannel 0.24.0
- Containerd: 1.7.6
Additional info: It's a mixed cluster (Linux & Windows). The same issue occurs with containerd 1.7.11.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all issues.
This bot triages un-triaged issues according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed
You can:
- Mark this issue as fresh with /remove-lifecycle stale
- Close this issue with /close
- Offer to help out with Issue Triage
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Any solution to this problem? I am having the same issue.
Bump. Ran into the same issue today.
After some searching around, this looks to be the actual issue. Posting for anyone who lands here.
https://github.com/kubernetes/kubernetes/issues/104562
Found the problem. It turns out the hard-coded /var/run/secrets/kubernetes.io/serviceaccount/
path is not supported in containerd 1.6. So just install containerd 1.7+ and you should be good to go.
References:
- https://github.com/kubernetes/client-go/issues/1302#issuecomment-1751590182
- https://kubernetes.io/docs/tasks/configure-pod-container/create-hostprocess-pod/#volume-mounts
Edit:
Seems like OP was already using 1.7, so I'm not sure in their case. Updating to 1.7 fixed it for me.