[BUG] /var/lib/kubelet and /run aren't shared.
What did you do
- How was the cluster created?
```shell
./k3d cluster create k3s \
  -s3 \
  -a24 \
  -v /mnt/user/k8s:/var/lib/rancher/k3s/storage:slave \
  --k3s-arg '--service-node-port-range=1-65535@server:*' \
  --k3s-arg '--disable=traefik@server:*' \
  --verbose
```
- What did you do afterwards?
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: eh
  name: eh
spec:
  selector:
    matchLabels:
      app: eh
  template:
    metadata:
      labels:
        app: eh
    spec:
      containers:
      - command:
        - /usr/bin/ash
        - -c
        - --
        image: alpine:latest
        name: noooooo
        volumeMounts:
        - mountPath: /k3d-doesnt-share
          mountPropagation: Bidirectional
          name: kthreed-isnt-shared
      volumes:
      - name: kthreed-isnt-shared
        emptyDir: {}
```
What did you expect to happen
The pod to be created.
Screenshots or terminal output
```
(combined from similar events): Error: failed to generate container "a14deab0ece922aac94df76e3c8378467375fdfe7adb2fad93be09e250df6b9d" spec: failed to generate spec: path "/var/lib/kubelet/pods/71101e97-a823-4157-b6c8-92ab50498f59/volumes/kubernetes.io~empty-dir/kthreed-isnt-shared" is mounted on "/var/lib/kubelet" but it is not a shared mount
```
Which OS & Architecture
- Linux
Which version of k3d
```
k3d version v5.4.1
k3s version v1.22.7-k3s1 (default)
```
I've gotten Cilium running in pure eBPF mode. I had to hack the k3d filesystem again though.
```
Warning  Failed  47s  kubelet  Error: failed to generate container "d188af429270e2234add1b3d0506ca07f9fe82cdef73da381c70014d941ca175" spec: failed to generate spec: path "/run/cilium/cgroupv2" is mounted on "/run/cilium/cgroupv2" but it is not a shared or slave mount
```
Is there any additional information I can provide on this issue? It seems to be as simple as making the volume rshared.
Hi @KyleSanderson thanks for opening this issue!
I cannot really debug this with the deployment you shared, as I'm missing the PVC/PV definition.
However, since your follow-up comment is about Cilium, can you share your configuration there that you want to get running?
Important note though: some paths in k3d are tmpfs: `Tmpfs: map[string]string{"/run": "", "/var/run": ""}`
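To see why such paths trip the runtime's check, here is a minimal demo of mount propagation (my own sketch, not from the thread; needs util-linux `unshare` on a Linux host with unprivileged user namespaces enabled): a freshly created tmpfs defaults to private propagation, and `mount --make-rshared` is what switches it to the state the "not a shared or slave mount" error is asking for.

```shell
# Sketch: create a tmpfs in a throwaway mount namespace, inspect its
# propagation flag, then mark it rshared and inspect again.
unshare -rm sh -c '
  mkdir -p /tmp/propagation-demo
  mount -t tmpfs tmpfs /tmp/propagation-demo
  findmnt -no PROPAGATION /tmp/propagation-demo
  mount --make-rshared /tmp/propagation-demo
  findmnt -no PROPAGATION /tmp/propagation-demo
'
```

The namespace is discarded when the subshell exits, so nothing on the host is changed.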
@iwilltry42 it's pretty simple to fix: `docker exec` into the container and run `mount --make-rshared /`. Super annoying to have to do this every reboot.
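Since the fix has to be reapplied after every reboot, it can be scripted; a sketch (the function names and the cluster name `k3s` are my own; only `docker exec` and `mount --make-rshared /` come from the comment above):

```shell
# Build the docker name filter matching a k3d cluster's node containers
# (k3d names them k3d-<cluster>-server-N and k3d-<cluster>-agent-N).
k3d_node_filter() {
  printf 'name=k3d-%s-' "$1"
}

# Re-mark / as rshared inside every node container of the given cluster.
# Needs a running Docker daemon; mirrors the manual fix above.
k3d_make_rshared() {
  docker ps --filter "$(k3d_node_filter "$1")" --format '{{.Names}}' |
    while read -r node; do
      docker exec "$node" mount --make-rshared /
    done
}

# Usage, e.g. from a boot script:
#   k3d_make_rshared k3s
```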
Any update on this critical-for-usage issue?
I ran into this issue with Cilium on k3d too. Would be nice to have a permanent solution.
I am also having a similar problem. (k3d version v5.4.3, WSL2)
Every time I try this:
```shell
k3d cluster create my-cluster --agents 5 --volume /tmp/longhorn:/var/lib/longhorn:shared
```
I get the following error:
```
Failed Cluster Start: Failed to start server k3d-my-cluster-server-0: runtime failed to start node 'k3d-my-cluster-server-0': docker failed to start container for node 'k3d-my-cluster-server-0': Error response from daemon: path /tmp/longhorn is mounted on / but it is not a shared mount
```
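This variant of the error comes from Docker itself: a bind mount with the `:shared` propagation flag requires the mount holding the host path to already be shared. A diagnostic sketch (the `mount_propagation` helper is mine, not a k3d command) to check that up front:

```shell
# Print the propagation flag ("shared", "private", ...) of the mount
# that contains a given path; --target walks up to the enclosing mount.
mount_propagation() {
  findmnt -no PROPAGATION --target "$1"
}

# Usage before `k3d cluster create ... --volume /tmp/longhorn:...:shared`:
#   mount_propagation /tmp/longhorn   # want "shared"
# If it prints "private", fix the host side first (path "/" matches the
# error message above):
#   sudo mount --make-shared /
```

On WSL2 the same check applies, but it has to run inside the WSL distro that backs Docker.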
Same here, I have this config.yaml excerpt:
```yaml
volumes: # repeatable flags are represented as YAML lists
  - volume: $DOMAIN_STORAGE/rwo:/var/lib/rancher/k3s/storage # same as `--volume '/my/host/path:/path/in/node@server:0;agent:*'`
    nodeFilters:
      - all
  - volume: $DOMAIN_STORAGE/rwx:/var/lib/csi-local-hostpath:shared # same as `--volume '/my/host/path:/path/in/node@server:0;agent:*'`
    nodeFilters:
      - all
```
and I get this while creating the cluster:
```
INFO[0003] Starting Node 'k3d-dom1-server-0'
ERRO[0003] Failed Cluster Start: Failed to start server k3d-dom1-server-0: runtime failed to start node 'k3d-dom1-server-0': docker failed to start container for node 'k3d-dom1-server-0': Error response from daemon: path /host_mnt/Users/fragolinux/local-work/domains/dom1/env1/inst1/storage/rwx is mounted on /host_mnt but it is not a shared mount
ERRO[0003] Failed to create cluster >>> Rolling Back
INFO[0003] Deleting cluster 'dom1'
INFO[0003] Deleting 2 attached volumes...
WARN[0003] Failed to delete volume 'k3d-dom1-images' of cluster 'dom1': failed to find volume 'k3d-dom1-images': Error: No such volume: k3d-dom1-images -> Try to delete it manually
FATA[0003] Cluster creation FAILED, all changes have been rolled back!
FATA[0000] No nodes found for given cluster
```
I'm on macOS Big Sur, latest.
@iwilltry42 is there anything I can do to help on this critical issue?
I have a similar issue with K3D. It is occurring when trying to install ebs-csi-driver. My error message:
```
Error: failed to generate container "91732fcdedb92de76ee4ba5e30e5804fa1bb79512dcdc74910cc196fed1f12d7" spec: failed to generate spec: path "/var/lib/kubelet" is mounted on "/var/lib/kubelet" but it is not a shared mount
```
I run into this issue when trying to run the new Istio Ambient Mesh on k3d - the istio CNI can't be installed:
```
Warning  Failed  5m3s (x3 over 5m31s)  kubelet  (combined from similar events): Error: failed to generate container "0c8b3866c4e954ef11ced4d6ce91c82386cc484e0050d3a54d308c6121f99783" spec: failed to generate spec: path "/var/run/netns" is mounted on "/var/run" but it is not a shared or slave mount
```
To reproduce, launch a k3d cluster with the following config:
```yaml
apiVersion: k3d.io/v1alpha4
kind: Simple
metadata:
  name: istio-ambient
servers: 1
agents: 2
kubeAPI:
  hostPort: "6550"
ports:
  - port: 9080:80
    nodeFilters:
      - loadbalancer
  - port: 9443:443
    nodeFilters:
      - loadbalancer
options:
  k3s:
    extraArgs:
      - arg: "--disable=traefik"
        nodeFilters:
          - server:*
```
and then install Istio in ambient mesh mode using `istioctl install --set profile=ambient --skip-confirmation` (istioctl version 1.18.0-alpha.0).
Additional info:
```
k3d version v5.4.7
k3s version v1.25.6-k3s1 (default)
```
Host system is macOS Ventura 13.3.1
I have an enhancement up here, which made Cilium installation work for me: https://github.com/k3d-io/k3d/pull/1268 Feel free to test (build from PR branch) and/or review :+1:
Fixed by #1268