lvm-localpv
Container does not start, fsGroup warning.
What steps did you take and what happened: [A clear and concise description of what the bug is, and what commands you ran.] I created a container using an LVM StorageClass, but the container doesn't start. I am using fsGroup and see a warning about it, but otherwise I'm not sure what is going on here.
What did you expect to happen: container successfully created.
The output of the following commands will help us better understand what's going on: (Pasting long output into a GitHub gist or other Pastebin is fine.)
kubectl logs -f openebs-lvm-controller-0 -n kube-system -c openebs-lvm-plugin
kubectl logs -f openebs-lvm-node-[xxxx] -n kube-system -c openebs-lvm-plugin
kubectl get pods -n kube-system
kubectl get lvmvol -A -o yaml
https://pastebin.com/9BypAhdR
Anything else you would like to add: [Miscellaneous information that will assist in solving the issue.]
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv
parameters:
  storage: "lvm"
  volgroup: "vg-microk8s-storage"
  fsType: ext4
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
provisioner: local.csi.openebs.io
PVC and Deployment manifests:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-test-pvc
  namespace: grafana
  labels:
    app: grafana-test
spec:
  storageClassName: openebs-lvmpv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana-test
  name: grafana-test
  namespace: grafana
spec:
  selector:
    matchLabels:
      app: grafana-test
  template:
    metadata:
      labels:
        app: grafana-test
    spec:
      securityContext:
        fsGroup: 472
        supplementalGroups:
          - 0
      containers:
        - name: grafana-test
          image: grafana/grafana:10.3.3
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 3000
              name: http-grafana
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /robots.txt
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 750Mi
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-test-data
      volumes:
        - name: grafana-test-data
          persistentVolumeClaim:
            claimName: grafana-test-pvc
PVC:
kubectl describe pvc grafana-test-pvc -n grafana
Name:          grafana-test-pvc
Namespace:     grafana
StorageClass:  openebs-lvmpv
Status:        Bound
Volume:        pvc-13a5885b-e7ed-48fe-bf2d-97b172ff20b3
Labels:        app=grafana-test
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: local.csi.openebs.io
               volume.kubernetes.io/selected-node: dirt4
               volume.kubernetes.io/storage-provisioner: local.csi.openebs.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       grafana-test-5bdf458c6-dfkt5
               grafana-test-66cfd9fd8c-k7z8k
Events:        <none>
Pod describe:
Name:             grafana-test-66cfd9fd8c-k7z8k
Namespace:        grafana
Priority:         0
Service Account:  default
Node:             dirt4/192.168.1.157
Start Time:       Fri, 01 Mar 2024 09:22:28 -0500
Labels:           app=grafana-test
                  pod-template-hash=66cfd9fd8c
Annotations:      <none>
Status:           Pending
IP:
IPs:              <none>
Controlled By:    ReplicaSet/grafana-test-66cfd9fd8c
Containers:
  grafana-test:
    Container ID:
    Image:          grafana/grafana:10.3.3
    Image ID:
    Port:           3000/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        250m
      memory:     750Mi
    Liveness:     tcp-socket :3000 delay=30s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get http://:3000/robots.txt delay=10s timeout=2s period=30s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/lib/grafana from grafana-test-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xtt6g (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  grafana-test-data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  grafana-test-pvc
    ReadOnly:   false
  kube-api-access-xtt6g:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Environment:
- LVM Driver version: 1.4.0
- Kubernetes version (use kubectl version):
  Client Version: v1.28.7
  Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
  Server Version: v1.28.7
- Kubernetes installer & version: microk8s v1.28.7 (canonical, classic)
- Cloud provider or hardware configuration: baremetal
- OS (e.g. from /etc/os-release): Ubuntu 23.10
Hey @chase1124, I see this:
I0301 14:51:05.211644 1 mount.go:194] lvm : already mounted vg-microk8s-storage/pvc-13a5885b-e7ed-48fe-bf2d-97b172ff20b3 => /var/snap/microk8s/common/var/lib/kubelet/pods/5559c4f0-12a6-444f-bd90-d87f1ee65876/volumes/kubernetes.io~csi/pvc-13a5885b-e7ed-48fe-bf2d-97b172ff20b3/mount
Can you check whether, by any chance, you are trying to use the same volume for two different pods? If you want a shared volume, you have to set shared: yes in the StorageClass parameters.
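For reference, here is a minimal sketch of a StorageClass with the shared parameter enabled, reusing the volume group from this report (the class name openebs-lvmpv-shared is only an illustrative placeholder):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-lvmpv-shared   # illustrative name, not from this report
provisioner: local.csi.openebs.io
parameters:
  storage: "lvm"
  volgroup: "vg-microk8s-storage"
  fsType: ext4
  shared: "yes"                # lets pods on the same node mount the same LVM volume
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete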
Please confirm whether this is still an issue. If it is not, please close it.
Closing this as there has been no response. Re-open if observed again.