Derek Su

Results: 1074 comments of Derek Su

@hookak I cannot reproduce it in our lab environment. Can you provide a support bundle? Thank you.

@hookak The PV spec of the volume `pvc-2ff9fb7b-2d37-4d76-a70b-626f5675fbb4` doesn't have a `dataLocality` field:
```
csi:
  driver: driver.longhorn.io
  fsType: xfs
  volumeAttributes:
    dataEngine: v2
    fsType: xfs
    numberOfReplicas: "1"
    staleReplicaTimeout: "2880"
    storage.kubernetes.io/csiProvisionerIdentity: 1724834934559-4074-driver.longhorn.io...
```
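For reference, `dataLocality` only shows up in a PV's `volumeAttributes` when it is set as a StorageClass parameter before the volume is provisioned. A minimal sketch of such a StorageClass (the name and value here are illustrative assumptions, not taken from the cluster above):

```yaml
# Illustrative sketch only; the metadata name and dataLocality value are assumptions.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-locality          # hypothetical name
provisioner: driver.longhorn.io
parameters:
  dataEngine: "v2"
  fsType: "xfs"
  numberOfReplicas: "1"
  staleReplicaTimeout: "2880"
  dataLocality: "best-effort"      # Longhorn accepts disabled / best-effort / strict-local
```

Volumes provisioned from a class like this carry the setting in their PV spec; volumes created earlier do not.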

Got it. Changes in a recreated storage class cannot be applied to existing PVs and Longhorn volumes; this is a Kubernetes limitation.

Duplicate of https://github.com/longhorn/longhorn/issues/9371

```
time="2023-02-04T07:06:36Z" level=debug msg="Applied config: {\"nodePathMap\":[{\"node\":\"DEFAULT_PATH_FOR_NON_LISTED_NODES\",\"paths\":[\"/mnt/k8s-data\"]}]}"
```
The config with the `"node": "172.18...."` entry is not updated yet. Can you describe your steps to update the `nodePathMap`? Ref: https://github.com/rancher/local-path-provisioner#reloading
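For comparison, a per-node path is added as an extra entry in the provisioner's `config.json` alongside the default entry. A sketch based on the config shape in the log above (the node name below is a placeholder, not the actual node from this report):

```json
{
  "nodePathMap": [
    {
      "node": "DEFAULT_PATH_FOR_NON_LISTED_NODES",
      "paths": ["/mnt/k8s-data"]
    },
    {
      "node": "example-node-1",
      "paths": ["/mnt/k8s-data"]
    }
  ]
}
```

Per the linked README, the provisioner reloads this ConfigMap automatically, and the "Applied config" debug line should then show the node-specific entry.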

> connect: operation not permitted

This is more related to your network configuration.

@aeltorio Sorry, I missed this issue. Is the issue resolved? If not, you can check https://longhorn.io/kb/troubleshooting-manager-stuck-in-crash-loop-state-due-to-inaccessible-webhook/ first.

> I tried ReadWriteMany longhorn volume as well, but I found that the share-manager pod also fails to failover after the node where the share-manager is located is dropped. Doesn't...

@james-munson Could you take a look? Share-manager failover does not work in this case.