local-path-provisioner
PV always stores data in one node
I have two nodes, one is k3s-master and the other is k3s-node. When I installed the local-path-provisioner deployment it was scheduled to k3s-node, and I created a PVC with a pod to use it, but the PV always stores data on k3s-master. How can I modify it to store data on k3s-node?
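For reference, a minimal PVC and consuming pod of the kind described, modeled on the project's example manifests; the exact manifests the reporter used are an assumption, though the names local-path-pvc and volume-test match the logs later in the thread:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path   # the provisioner's StorageClass
  resources:
    requests:
      storage: 128Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    volumeMounts:
    - name: volv               # mounts the PVC-backed volume at /data
      mountPath: /data
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: local-path-pvc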
The same volume will always store its data on the same node.
Where the data is stored is determined by where the pod is scheduled. You can modify the config (https://github.com/rancher/local-path-provisioner#configuration) to disable the master node, but that also means that if a new pod is scheduled to the master, it won't be able to get storage allocated.
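Per the configuration docs linked above, a node listed in nodePathMap with an empty paths list is excluded from provisioning. A sketch of the local-path-config ConfigMap that disables the master this way (the path shown for k3s-node is the provisioner's default):

apiVersion: v1
kind: ConfigMap
metadata:
  name: local-path-config
  namespace: local-path-storage
data:
  # k3s-master is listed with empty paths, which disables provisioning there
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"k3s-master",
          "paths":[]
        },
        {
          "node":"k3s-node",
          "paths":["/opt/local-path-provisioner"]
        }
      ]
    }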
I have already modified the config so that only k3s-node stores PVs:
[root@k3s-master ~]# kubectl get cm -n local-path-storage
NAME DATA AGE
local-path-config 1 41m
[root@k3s-master ~]# kubectl describe cm local-path-config -n local-path-storage
Name: local-path-config
Namespace: local-path-storage
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","data":{"config.json":"{\n \"nodePathMap\":[\n {\n \"node\":\"k3s-node\",\n ...
Data
====
config.json:
----
{
"nodePathMap":[
{
"node":"k3s-node",
"paths":["/opt/local-path-provisioner"]
}
]
}
Events: <none>
When I deploy a pod, it is always Pending:
[root@k3s-master ~]# kubectl get pod --all-namespaces -owide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system helm-install-traefik-8f4hh 0/1 Completed 0 6h31m 10.42.0.2 k3s-master <none> <none>
kube-system metrics-server-6d684c7b5-9bl4m 1/1 Running 0 6h31m 10.42.0.5 k3s-master <none> <none>
kube-system svclb-traefik-flfs4 3/3 Running 0 6h31m 10.42.0.7 k3s-master <none> <none>
kube-system coredns-d798c9dd-glj5z 1/1 Running 0 6h31m 10.42.0.4 k3s-master <none> <none>
kube-system traefik-65bccdc4bd-s8bgl 1/1 Running 0 6h31m 10.42.0.6 k3s-master <none> <none>
kube-system svclb-traefik-scrjm 3/3 Running 0 3h57m 10.42.1.2 k3s-node <none> <none>
default volume-test-2 0/1 Pending 0 42m <none> <none> <none> <none>
local-path-storage local-path-provisioner-56db8cbdb5-hnd24 1/1 Running 1 43m 10.42.1.7 k3s-node <none> <none>
And the log of local-path-provisioner shows that the config doesn't contain node k3s-master:
ERROR: logging before flag.Parse: I1126 10:01:05.317340 1 controller.go:927] provision "default/local-path-pvc" class "local-path": started
ERROR: logging before flag.Parse: W1126 10:01:05.330711 1 controller.go:686] Retrying syncing claim "default/local-path-pvc" because failures 4 < threshold 15
ERROR: logging before flag.Parse: E1126 10:01:05.330872 1 controller.go:701] error syncing claim "default/local-path-pvc": failed to provision volume with StorageClass "local-path": config doesn't contain node k3s-master, and no DEFAULT_PATH_FOR_NON_LISTED_NODES available
ERROR: logging before flag.Parse: I1126 10:01:05.331330 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc", UID:"b3292bce-2925-49f1-9017-b9bcb739cf25", APIVersion:"v1", ResourceVersion:"19173", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/local-path-pvc"
ERROR: logging before flag.Parse: I1126 10:01:05.331388 1 event.go:221] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"local-path-pvc", UID:"b3292bce-2925-49f1-9017-b9bcb739cf25", APIVersion:"v1", ResourceVersion:"19173", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "local-path": config doesn't contain node k3s-master, and no DEFAULT_PATH_FOR_NON_LISTED_NODES available
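The last line points at the fallback that is missing here: nodePathMap accepts a special entry, DEFAULT_PATH_FOR_NON_LISTED_NODES, which supplies a storage path for any node not listed explicitly. A config.json sketch with that fallback (note this would also let provisioning succeed on k3s-master, which is the opposite of what is wanted in this thread):

{
  "nodePathMap":[
    {
      "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
      "paths":["/opt/local-path-provisioner"]
    }
  ]
}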
As I said, Kubernetes decides where to put the pod first, and the storage follows, not the other way around. The pod was scheduled to k3s-master, so the storage must be there. Since there is no storage available on k3s-master, the provisioning failed.
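Because the local-path StorageClass uses volumeBindingMode: WaitForFirstConsumer, the claim stays unbound until the pod is placed, and the PV is then created on whatever node the scheduler picked. So one way to get the data onto k3s-node is to pin the pod there, for example with a nodeSelector on the standard kubernetes.io/hostname label (a sketch; the pod name mirrors the pending one above):

apiVersion: v1
kind: Pod
metadata:
  name: volume-test-2
spec:
  nodeSelector:
    kubernetes.io/hostname: k3s-node   # force scheduling onto the node that has a configured path
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    volumeMounts:
    - name: volv
      mountPath: /data
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: local-path-pvc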
A pod is first scheduled to k3s-master, and the PV is created on that node. Then if I redeploy the pod and want it to run correctly, it must still be scheduled to k3s-master; if it is scheduled to k3s-node, there will be no PV for it to use. Is that true?
Yes. After the storage is created, Kubernetes will always try to schedule the pod close to the storage.
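This works because the provisioned PV carries a nodeAffinity constraint naming the node that holds the data, so the scheduler must place any pod using that claim on the same node. Roughly what such a PV looks like (a sketch with fields trimmed; the PV name reuses the claim UID from the logs above, and the exact hostPath layout varies by provisioner version):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-b3292bce-2925-49f1-9017-b9bcb739cf25
spec:
  capacity:
    storage: 128Mi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-path
  hostPath:
    path: /opt/local-path-provisioner/pvc-b3292bce-2925-49f1-9017-b9bcb739cf25
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname   # pins this volume, and thus its consumers, to one node
          operator: In
          values:
          - k3s-node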
OK, thanks!