Andrei Kvapil
Hey, sorry for the late answer. Try creating a new pod template with a script that finds and removes the SCSI reservation, and specify it in the `fencing/after-hook` annotation for your fencing pod template.
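For reference, a minimal sketch of what such a hook template could look like. The `PodTemplate` name, image, device path, and reservation keys below are all placeholders, and the exact wiring of the `fencing/after-hook` annotation should be checked against the kube-fencing docs:

```yaml
apiVersion: v1
kind: PodTemplate
metadata:
  name: clear-scsi-reservation    # hypothetical name, referenced from the fencing/after-hook annotation
template:
  spec:
    restartPolicy: Never
    containers:
    - name: clear-scsi-reservation
      image: alpine:3.19          # placeholder image; sg3_utils is installed at runtime below
      securityContext:
        privileged: true          # required to issue raw SCSI commands against the host device
      volumeMounts:
      - name: dev
        mountPath: /dev
      command: ["/bin/sh", "-c"]
      args:
      - |
        apk add --no-cache sg3_utils
        DEV=/dev/sdb              # placeholder: path of the shared device
        # Show which keys currently hold registrations on the device
        sg_persist --in --read-keys "$DEV"
        # Register a temporary key for this initiator, then clear the
        # stale reservation (and all registrations) left by the fenced node
        sg_persist --out --register --param-sark=0x1 "$DEV"
        sg_persist --out --clear --param-rk=0x1 "$DEV"
    volumes:
    - name: dev
      hostPath:
        path: /dev
```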
@rck sorry, I didn't see that doc, but unfortunately it still doesn't work for me anyway:

```yaml
allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-data-r2
parameters:
  linstor.csi.linbit.com/placementCount: "2"
  linstor.csi.linbit.com/storagePool: ...
```
> I'd prefer it if the user had to explicitly opt-in (maybe through a new enable-live-migration parameter on the storage class?). Then, on volume attach, if CSI can reasonably think...
> Have you checked the actual drbd resource (`drbdsetup show --show-defaults`)? Because whatever you set in the storage class is put on the resource group and inherited by the resource...
Hi, take a look at the [nfs-server-provisioner](https://github.com/kubernetes-sigs/nfs-ganesha-server-and-external-provisioner/) project; we're using it together with LINSTOR to get ReadWriteMany volumes. Failover is ensured by [kube-fencing](https://github.com/kvaps/kube-fencing).
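For example, a claim against the provisioner could look like the sketch below (the `nfs` storage class name depends on how nfs-server-provisioner was deployed and is just an assumption here):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
  - ReadWriteMany            # served by the NFS provisioner, which itself sits on a LINSTOR volume
  storageClassName: nfs      # assumed name of the nfs-server-provisioner storage class
  resources:
    requests:
      storage: 10Gi
```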
> Are NFS performances good enough for a heavy loaded web app?

Performance is fine, you can see my benchmarks for comparison: https://gist.github.com/kvaps/8c2831ca15cf161a11e62cc4276793f7

> Do you have a working example...
Well, we're using the same approach to host Nextcloud itself, but the user data is stored on another server using the S3 backend.
Sounds like a network problem. May I ask which CNI and which kube-proxy mode you're using?
I was testing it with Cilium in kube-proxy-free mode; it was working fine in my cluster. Do you use kube-proxy?
No, it was working for me without any additional removals.