
[WIP] make zfs provisioner set sharenfs property

Open cruwe opened this issue 5 years ago • 23 comments

Why is this PR required? What issue does it fix?:

With issue #69 , sharing via NFS has been requested. Setting the sharenfs property is easy. However, to actually use the property on a system with the nfs-kernel-server, the dataset needs to be mounted and probably shared by calling zfs share -a.

When invoked locally on a shell, it works. From the provisioner, it doesn't.
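For reference, the manual flow that works on a host shell looks roughly like this (pool and dataset names are placeholders; assumes the nfs-kernel-server is running on the host):

```shell
# on the storage host
zfs set sharenfs=on tank/pvc-example   # mark the dataset for NFS export
zfs mount tank/pvc-example             # the dataset must be mounted to be shared
zfs share -a                           # (re)share every dataset with sharenfs set
showmount -e localhost                 # verify the export is now visible
```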

What this PR does?:

See above

Does this PR require any upgrade changes?:

I have not tested intensively, but I believe it doesn't.

If the changes in this PR are manually verified, list the scenarios covered:

  • create new dataset with sharenfs set
  • change dataset's sharenfs properties
  • destroy dataset

Any additional information for your reviewer?: The PR is not yet ready to be merged. I kindly request help on how to properly mount the dataset on the providing machine, so that it may be exported via NFS.

In the second step, I would probably need assistance on how to mount the dataset via NFS without fighting the affinity scheduler.

In both cases, I think pointing out the interface would be sufficient :-)

In any case, thank you very much for developing the zfs local provisioner,

Checklist:

  • [ ] Fixes #
  • [ ] PR Title follows the convention of <type>(<scope>): <subject>
  • [ ] Has the change log section been updated?
  • [ ] Commit has unit tests
  • [ ] Commit has integration tests
  • [ ] (Optional) Are upgrade changes included in this PR? If not, mention the issue/PR to track:
  • [ ] (Optional) If documentation changes are required, which issue on https://github.com/openebs/openebs-docs is used to track them:

PLEASE REMOVE BELOW INFORMATION BEFORE SUBMITTING (which I would love to do when it is ready, which it is not - by far)

The PR title message must follow convention: <type>(<scope>): <subject>.

Where:

  • type defines whether a release will be triggered after merging the submitted changes; details in CONTRIBUTING.md. The most common types are:

    • feat - for new features, not a new feature for the build script
    • fix - for bug fixes or improvements, not a fix for the build script
    • chore - changes not related to production code
    • docs - changes related to documentation
    • style - formatting, missing semicolons, linting fixes etc.; no significant production code changes
    • test - adding missing tests, refactoring tests; no production code change
    • refactor - refactoring production code, e.g. renaming a variable or function; there should not be any significant production code changes
  • scope is a single word that best describes where the changes fit. The most common scopes are:

    • data engine (localpv, jiva, cstor)
    • feature (provisioning, backup, restore, exporter)
    • code component (api, webhook, cast, upgrade)
    • test (tests, bdd)
    • chores (version, build, log, travis)
  • subject is a brief, single-line description of the changes made in the pull request.

cruwe avatar May 30 '20 17:05 cruwe

@cruwe This PR is great!! Thanks for working on it. You might want to look at the creation of ZFSVolume CRDs here :- https://github.com/openebs/zfs-localpv/blob/42ed7d85ee183249c5e6d2fedc482c988651f75d/pkg/driver/controller.go#L75

This will pick the parameters from the storageclass and add them to the ZFSVolume CR.

Regarding the mounting, the Driver mounts the dataset under /var/lib/kubelet, which is the path where the kubelet asks the driver to mount the volumes so that they are visible to the application: https://github.com/openebs/zfs-localpv/blob/42ed7d85ee183249c5e6d2fedc482c988651f75d/deploy/yamls/zfs-driver.yaml#L841

pawanpraka1 avatar May 31 '20 09:05 pawanpraka1

@cruwe you need to do two things here

  1. Mount the exportfs binary from the host inside the ZFS-Driver; for reference, you can see how we mount the zfs binary from the host: https://github.com/openebs/zfs-localpv/blob/f5ae3ff476e8823c3c35a19221ef3f8bd4bb5016/deploy/zfs-operator.yaml#L1276
  2. You also have to mount the /var/lib/nfs directory inside the ZFS-Driver node daemonset. It is needed for the NFS server's bookkeeping.

Once the above two are done, you should not face any issue.
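For illustration, the /var/lib/nfs host mount in the node daemonset might look like the following fragment (volume name and indentation are illustrative, not taken from the actual zfs-operator.yaml linked above):

```yaml
# fragment of the openebs-zfs-node daemonset pod spec (illustrative)
        volumeMounts:
        - name: nfs-state
          mountPath: /var/lib/nfs    # rmtab/etab state the NFS server keeps
      volumes:
      - name: nfs-state
        hostPath:
          path: /var/lib/nfs
          type: Directory
```

The exportfs binary would be mounted from the host in the same way the zfs binary already is.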

pawanpraka1 avatar Jun 09 '20 15:06 pawanpraka1

@cruwe one more thing: the dataset mountpoint should also be mounted from the host. By default, all the pod-related mounts happen under the /var/lib/kubelet directory, which the driver mounts. So if you are mounting the dataset to a different path, you have to mount that path inside the daemonset as well. See here: https://github.com/openebs/zfs-localpv/blob/f5ae3ff476e8823c3c35a19221ef3f8bd4bb5016/deploy/zfs-operator.yaml#L1263

pawanpraka1 avatar Jun 09 '20 15:06 pawanpraka1

@pawanpraka1: thanks for the hint with /var/lib/nfs. I tried writing into /etc/exports directly first and got exactly that error message. Then I tried it with the node daemonset until that worked, and then did it with zfs share, which succeeded.

I implemented a dirty hack to temporarily disable your legacy mountpoint: a ZFS dataset with sharenfs and mountpoint=legacy do not play together. We need to reconcile that:

Assuming a sharenfs dataset is only exported and not used by any local pods, it is possible to legacy-mount with sharenfs off or unset. If it is desired to both mount locally and share via NFS, we would need to work with /etc/exports directly. I'd rather avoid that if that option is still open, because I ran into lots of issues when trying to tear it down before deletions.
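The incompatibility can be reproduced on a shell (a sketch with a placeholder dataset name): with mountpoint=legacy, ZFS no longer knows where the dataset is mounted, so sharenfs has nothing to export.

```shell
zfs set mountpoint=legacy tank/pvc-example
zfs set sharenfs=on tank/pvc-example
zfs share -a    # no effect for this dataset: ZFS does not manage its mountpoint
# with a ZFS-managed mountpoint, sharing works:
zfs set mountpoint=/tank/pvc-example tank/pvc-example
zfs share tank/pvc-example
```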

In any case, thanks!

cruwe avatar Jun 09 '20 19:06 cruwe

The last commit leads to a working deployment with mounted and exported datasets, which may be mounted and written to. It's still dirty (PoC-style), needs a lot of cleanup and validation, and could use two or three lines of documentation. Let's talk about it later this week. Looking forward to hearing (reading) from you, cheers!

cruwe avatar Jun 09 '20 19:06 cruwe

Sorry for my radio-silence.

@pawanpraka1:

I'd suggest treating the concept of mounting a PVC as a regular ZFS dataset for some pod while exporting it at the same time as a special case, which may be implemented later but is not necessary for completing the task at this point.

Then, an NFS export may be declared by either setting the sharenfs property or declaring a special fsType: nfs. (I'd favour the latter, because the former makes an implementation of the 'mounted at pod and exported' case harder.)

I'd then proceed by introducing a disambiguation, as in: 'for an nfs share, mount with a non-legacy mountpoint irrespective of pod consumption; when not nfs, proceed as normal'.

Having that, the pvc may be consumed as an nfs pvc (less elegant, but quick).

Eventually, the mount logic should be adapted, replacing the mount -t zfs part with -t nfs when fsType: nfs or equivalent. My understanding of the mount logic is still sparse, but I believe that then a pod with a corresponding pvc would have to be created on the same node which provides the volume.

As a last step, this host affinity (which I do not know much about) needs to be resolved conditionally, i.e., affinity on when zfs, off when nfs. I have no idea how to do that, but I figure I'd find out soon enough - might require some help, though.

How does this sound to you?

cruwe avatar Jun 17 '20 07:06 cruwe

@cruwe Sorry for the delay, I was busy with the release activity and some bugs in k8s.

We need to do a few things here:

  1. Don't set the affinity on the PV. We should avoid this step while creating the volume. I want the nfs volume to be mountable by pods running on different nodes, so for nfs volumes we should not set the affinity on the PV. Note that we still have to run the scheduler to find out where to create the volume, but we don't need to set the affinity on the PV.

https://github.com/openebs/zfs-localpv/blob/master/pkg/driver/controller.go#L237

  2. In a lot of situations, you would not be happy to give every host on your network access to your NFS share. You probably only want to give one specific host, or a specific group of hosts, access. So we still need to support something like sharenfs="[email protected]/24".

  3. We need to dedicate one directory, /var/tmp/nfs, on the host machine, where we can set up the volume mounts. The volumes will be mounted at the /var/tmp/nfs/<pool-name>/<volname> path, and the pods using a volume will use this path while mounting it locally. This will be part of volume creation: while creating the volume, if sharenfs is set, we switch the mountpoint from legacy to /var/tmp/nfs/<pool-name>/<volname>.

https://github.com/openebs/zfs-localpv/blob/ab32a5a4267b137896a247bf648f0f77fc3e7c62/pkg/zfs/zfs_util.go#L239

  4. For nfs volumes, we have to bypass the "Wall of North", which does not allow mounting the device to more than one path. It is there to prevent volumes from being mounted by more than one pod.

https://github.com/openebs/zfs-localpv/blob/ab32a5a4267b137896a247bf648f0f77fc3e7c62/pkg/zfs/mount.go#L151
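Point 3 would amount to something like the following at volume-creation time (a sketch; pool and dataset names are placeholders):

```shell
# instead of creating the dataset with mountpoint=legacy:
zfs create -o mountpoint=/var/tmp/nfs/zfspv-pool/pvc-example \
           -o sharenfs=on zfspv-pool/pvc-example
# sharenfs can carry export options (e.g. a network restriction) instead of plain "on"
```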

pawanpraka1 avatar Jun 17 '20 09:06 pawanpraka1

Hi ... don't apologize for any delays. As you might notice, I am sluggish atm myself ...

Regarding (1), I think I successfully unset the node topology, although my code is dirty at best. I'll look into that to clean that up.

The sharenfs restriction (2) to a network is already implemented, although optional from the beginning. Would you make that mandatory?

Regarding (3), I do not know if I read you correctly. mountpoint=legacy will not work with the zfsutils sharenfs magic. Then, it would be necessary to manage (and validate) the exportfs logic oneself, which is a daunting task.

Wall of North (4)? Did I miss something important in popular culture? In any case, the branch I just pushed contains logic to provide a ZFS share via NFS and allow a pod on a different node to mount it. I am still investigating an issue where ZFS datasets are not unshared and not destroyed; this prevents using further NFS mounts until cleaning up manually.

Furthermore, what I did to speed up the testing was to introduce a zfs-driver on nodes without zpools (dividing between worker and storage nodes), which kind of works, but feels very ugly to me. I don't have a better idea, though.

In any case, thanks for helping me along. Looking forward to your comments, cheers!

cruwe avatar Jun 22 '20 19:06 cruwe

Regarding (1), I think I successfully unset the node topology, although my code is dirty at best. I'll look into that to clean that up.

Great!!

The sharenfs restriction (2) to a network is already implemented, although optional from the beginning. Would you make that mandatory?

It is not mandatory, as sharenfs=on is still a valid configuration, and if someone wants to put a restriction on it, they can use sharenfs="[email protected]/24".

Regarding (3), I do not know if I read you correctly. mountpoint=legacy will not work with the zfsutils sharenfs magic. Then, it would be necessary to manage (and validate) the exportfs logic oneself, which is a daunting task.

So in (3), the real question is how the applications mount the nfs volumes. We need to do "mount -t nfs zfs.host.com:/pool-name/dataset-name /path/to/local/mount", where /pool-name/dataset-name is the default mountpoint. For this to work, the driver would have to mount the /pool-name directory, which is not static, as the pool name can change. So have one directory, /var/tmp/nfs, where we will mount the datasets. Each volume will have its mountpoint set to /var/tmp/nfs/pool-name/volname (since we can have more than one pool on a node). Let me know if you need more clarification on it.
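The consuming side then reduces to a plain NFS mount against that fixed directory (a sketch; host, pool and volume names are placeholders):

```shell
mount -t nfs zfs.host.com:/var/tmp/nfs/zfspv-pool/pvc-example /path/to/local/mount
```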

Wall of North (4)? Did I miss something important in popular culture? Game of Thrones :)

In any case, the branch I just pushed contains logic to provide a ZFS share via NFS and allow a pod on a different node to mount that. I am still investigation an issue when ZFS datasets are not unshared and not destroyed. This issue prevents using further NFS mounts until cleaning up manually.

Let me know if you need help debugging this.

Furthermore, what I did to speed up the testing was to introduce a zfs-driver on nodes without zpools (dividing btw worker and storage nodes), which kind of works, but feels very ugly to me. I don't have a better idea, though.

Another way to test is to have two or more application pods using the same pvc. Make sure the pods are running on different nodes. Then write the data from one pod and verify it from the other pod.

pawanpraka1 avatar Jun 23 '20 07:06 pawanpraka1

Hi,

I got all functional aspects working. Datasets get created and NFS-mounted, the data is shared between multiple pods on multiple hosts and the storage host itself, and datasets get "unshared" correctly before zfs destroy without manual intervention.

Regarding (3), I do not understand why a possible change of the pool's name would affect the mountpoint. As far as I understand, it is set on the host, and the storageclass must (mustn't it?) have the poolname property set correspondingly? Then, wouldn't the poolname be known before volume provisioning and may be read from vol.Spec.Poolname? This is what I did with the mount -t nfs logic, and it seemed to work.

In any case, I'm equally fine with setting it manually. I respectfully ask, however, that the mount root not be /var/tmp, which I think should be reserved for temporary files. Could I somehow convince you to make it something like /exports/openebs/<poolname>/<volname> by default, and perhaps configurable from the parameters? That would allow those with dedicated storage only partially dedicated to k8s to differentiate, and those with multiple zpools to separate concerns.

Regarding the testing, I tested with two pods on two nodes, one of them the storage provider and one of them an nfs-only consumer. I set watch find on the paths and touched files manually on both pods. Sharing worked fine between both nodes and the host.

I believe I saw a situation where file-propagation seemed to be stuck for some seconds. This is, I believe, impossible to rule out completely, especially when testing with two VirtualBoxen and flannel.

I am trying to debug a curious situation where the lock manager is non-functional, yielding:

time="2020-06-23T19:50:50Z" level=error msg="mount: could not mount the dataset 192.168.100.235:/tank/pvc-2bb3a01a-366e-4c4a-b766-4b635459dc15 cmd [-t nfs 192.168.100.235:/tank/pvc-2bb3a01a-366e-4c4a-b766-4b635459dc15 /var/lib/kubelet/pods/82664564-5fc8-473a-95e0-c2647c86fb79/volumes/kubernetes.io~csi/pvc-2bb3a01a-366e-4c4a-b766-4b635459dc15/mount] error: mount.nfs: rpc.statd is not running but is required for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or start statd.\n"
time="2020-06-23T19:50:50Z" level=error msg="GRPC error: rpc error: code = Internal desc = rpc error: code = Internal desc = dataset: mount failed err : mount.nfs: rpc.statd is not running but is required for remote locking.\nmount.nfs: Either use '-o nolock' to keep locks local, or start statd.\n

I do not know if that is caused by my flaky testing setup or if that is a "real" issue.
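The error message itself suggests two possible workarounds, assuming the nfs client utilities are installed on the consuming node (a sketch, not verified against this setup; the pvc name is a placeholder):

```shell
# either start the lock-status daemon on the node doing the mount ...
systemctl start rpc-statd
# ... or keep locks local and skip remote locking entirely:
mount -t nfs -o nolock 192.168.100.235:/tank/pvc-example /path/to/local/mount
```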

Lastly, the destroy-volume logic leaves directories formerly used as mountpoints lying around. Formerly, this was a non-problem because they were set under the kubelet mount. Now, they are either at the zpool root dataset's root or under whatever is chosen as the export root. What's your take on this? I'd suggest looking at the volume's reclaim policy (by the way, where do I get that if it is unset in the parameters hash?) and then doing a conditional rmdir.

Anyhow, I am reasonably happy, and if you do not completely disagree, I'd start commenting, documenting and in general straightening and prettying the thing up.

Thanks for your help and your patience, cheers!

cruwe avatar Jun 23 '20 20:06 cruwe

I got all functional aspects working. Datasets get created and NFS-mounted. The data is shared between multiple pods on multiple hosts and the storage host itself and datasets get "unshared" correctly before zfs destroy without manual intervention.

question: Are you unsharing it explicitly before destroy? Doesn't zfs destroy unshare (and umount also) before deleting the dataset?

Regarding (3), I do not understand why a possible change of the pool's name would affect the mountpoint. As far as I understand, it is set on the host and the storageclass must (musn't it?) have the poolname propery set correspondingly? Then, wouldn't the poolname be known before volume provisioning and may be read from vol.Spec.Poolname? This is what I did with the mount -t nfs logic and it seemed to work.

Hmmm, what if I have two storageclasses, one for pool1 and another for pool2? How do we handle that? Sorry for going back and forth on this.

I am trying to debug a curios situation, where the lock-manager is non-functional yielding

Check if the nfs utils are installed where the pods are running. I will check the code.

Lastly, the destroy volume logic leaves directories formerly used as mountpoints lying around. Formerly, this was a non-problem because the were set under the kubelet mount. Now, they either are at the zpool root-dataset's root or under whatever is chosen to be the export-root. What's your take on this? I'd suggest looking at the volumes reclaim-policy (btw, where do I get that if it is unset at the parameters hash?) and then doing a conditional rmdir.

we can cleanup that at the destroy time : https://github.com/openebs/zfs-localpv/blob/2b13a04db427c752648486858ca09daba4b79271/pkg/zfs/zfs_util.go#L505

pawanpraka1 avatar Jun 24 '20 13:06 pawanpraka1

question: Are you unsharing it explicitely before destroy? Doesn't zfs destroy unshare (and umount also) before deleting the datatset?

Yes and yes. I unshare explicitly which usually would not be necessary, because zfs destroy is supposed to and usually does unshare.

When called from inside the container, it doesn't, though, or perhaps it tries and doesn't succeed. What happens is that the dataset is allegedly busy, so neither unmounting nor destroying is possible.

I made numerous attempts and found out that restarting the nfs-kernel-server on the host frees the dataset. I ran execsnoop from the bpfcc-tools suite to find out which processes are forked when restarting the nfs-kernel-server, and boiled that set of five or six down to the minimal number of exec calls from inside the container which free the dataset so that it may be destroyed. That is what I then replicated in the UnshareZFSDataset func.
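For the record, the kind of teardown sequence this boils down to looks roughly like this (a sketch with placeholder names; the actual exec calls found via execsnoop live in the UnshareZFSDataset func):

```shell
zfs unshare tank/pvc-example        # usually implied by destroy, but not from the container
exportfs -u '*:/tank/pvc-example'   # drop the kernel export entry explicitly
zfs umount tank/pvc-example
zfs destroy tank/pvc-example
```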

I still could not drill down to the real cause and all my acquaintances knowledgeable about ZFS are Solaris guys and a mail to zfsdiscuss did not bear any fruit so far.

hmmm, what if I have two storageclasses, one for pool1 and another for pool2. how do we handle that, sorry for going back and forth on this.

Without legacy mount, the mountpoints would then be just the standard ZFS /pool1/pvc-... and /pool2/pvc-..., which is <vol.Spec.Poolname>/<vol.Name>. With legacy, one could do /exports/openebs/<vol.Spec.Poolname>/<vol.Name> for better structure on the host or, as I believe the pvc names to be unique, scrap the <vol.Spec.Poolname>. Unless I am missing something. Do I?

Check if the nfs utils are installed where the pods are running. Will check the code.

I believe they are, but I will double-check and perhaps trace when and preferably why the lock-manager stops working.

we can cleanup that at the destroy time :

I'll implement that, but possibly not before Friday.

Cheers!

cruwe avatar Jun 25 '20 09:06 cruwe

@pawanpraka1: I very much fear I got something completely wrong here:

Don't set the affinity on the PV. We should avoid this step while creating the volume. I want nfs volume to be mounted by the pod running on different nodes, so for nfs volumes, we should not set the affinity on the PV. Please note that we still have to run scheduler to find out where I have to create the volume but we don't need to set the affinity for the PV. https://github.com/openebs/zfs-localpv/blob/master/pkg/driver/controller.go#L237

Initially, I thought topology governed what constraints a node must satisfy to mount a volume, which is sort of correct, I believe. What I did not understand is that it also governs what constraints a node must satisfy to create the zvol. I started build-testing today, and then I learned that the system tried to create the zvols on nodes without any zpools - I divide between storage and worker nodes. Of course, this fails.

I monkey-patched the scheduler to stop scheduling and accept the decision taken when ownerNodeId is set at the storageclass (labels probably would have been better, but it is meant as a stop-gap measure). Then, of course, only pods on those storage nodes may be scheduled, which thwarts the idea.

How would you solve the problem that a volume may be created on specific nodes only, but mounted from everywhere / inside certain network segments / etc?

Thanks and cheers!

cruwe avatar Jun 26 '20 16:06 cruwe

How would you solve the problem that a volume may be created on specific nodes only, but mounted from everywhere / inside certain network segments / etc?

@cruwe You can label the storage nodes with something like openebs.io/node=storage, and then use this label in zfs-operator.yaml to deploy the daemonset. This way the driver will run on storage nodes only and will create the volumes there only. Then you don't need to put any topology in the storageclass, as the kubernetes scheduler will schedule the pods there anyway, so avoid doing that. Let me know if it makes sense.
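The labelling step could be as simple as the following (node name is a placeholder; the nodeSelector in zfs-operator.yaml would then match the same label):

```shell
kubectl label node storage-node-1 openebs.io/node=storage
```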

pawanpraka1 avatar Jun 26 '20 17:06 pawanpraka1

@pawanpraka1 : I feel we might be talking about different aims here.

My idea was to have nodes in a cluster which provide persistent storage and different nodes which don't, but only consume that storage. What I am trying to achieve is to have the scheduler put the zfs dataset on one node (labelled as you suggest, for instance) and have a pod of openebs-zfs-node mount the volume via nfs on a different node, which doesn't have zpools.

If the openebs-zfs-node may be scheduled only on nodes with zpools present, it wouldn't be possible to use the driver to schedule mounts for nfs consumers on zpool-less nodes. This is why I looked for ways for the scheduler to respect the affinity constraint for the creation of a ZFS dataset and to disregard it when the volume is of fstype=nfs and is merely being mounted.

I hope what I am trying to achieve does not run completely against your ideas of the architecture of your software.

cruwe avatar Jun 27 '20 11:06 cruwe

I looked at the scheduler code (https://github.com/openebs/zfs-localpv/pull/139/files#diff-61566437d9b17d18936960aa0bce1ad4R245) and it seems like we are setting the topology in the createvolume response. That will make all the application pods be scheduled on the same node only, which is what we don't want. We should avoid setting the topology for the volume. See this comment:

Don't set the affinity on the PV. We should avoid this step while creating the volume. I want nfs volume to be mounted by the pod running on different nodes, so for nfs volumes, we should not set the affinity on the PV. Please note that we still have to run scheduler to find out where I have to create the volume but we don't need to set the affinity for the PV. https://github.com/openebs/zfs-localpv/blob/master/pkg/driver/controller.go#L237

Coming to the testing part: we can let the scheduler schedule the volume on any node (but not set the affinity), and then use pod affinity in the application deployment to deploy the application to a node where the volume is not present. Note that the application pod can only be scheduled to nodes where the node daemonset is running, as we need it to mount the volume.

pawanpraka1 avatar Jun 29 '20 06:06 pawanpraka1

Hi,

thanks again for your help. I am sorry if I am being "difficult"; I do not mean to be, and I fear I have been a bit dense in understanding your intention regarding topology and placement.

Do I read you correctly that NewCreateVolumeResponseBuilder() should drop the topology constraint when the volume is of nfs type? When it is of type zfs, I believe it cannot, because mounting then requires same-host semantics.

To have ZFS datasets provisioned only where zpools exist, you suggest using allowedTopologies and labels, correct? That part took me very long to understand; sorry to have demanded so much of your patience here.

In any case, I hope to have that correct now, so that I can start debugging the rpc.statd issue I observed in earnest.

Thanks and cheers!

cruwe avatar Jun 29 '20 19:06 cruwe

@cruwe the issue with setting the topology (which sets nodeSelectorTerms on the PV) on a volume is that all the pods using that volume will be scheduled to the same node. You can check kubectl get pv -oyaml; it will look something like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: fio-vol-pv # some unique name
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 4Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: fio-vol-pvc
    namespace: default # namespace for the pvc 
  csi:
    driver: zfs.csi.openebs.io
    fsType: zfs 
    volumeAttributes:
      openebs.io/poolname: zfspv-pool # change the pool name accordingly
    volumeHandle: fio-vol # This should be same as the zfs volume name
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: openebs.io/nodename
          operator: In
          values:
          - pawan-3
  persistentVolumeReclaimPolicy: Delete
  storageClassName: openebs-zfspv
  volumeMode: Filesystem

Now when POD uses this volume, the kubernetes scheduler will check if there is any nodeSelectorTerms in the PV, and try to schedule the pod matching with that term.

The above is good for Local PV solution but with NFS as it is over the Network, it is not needed for the pod to be scheduled on the same node. The volume can be created on the one node and the pod can use this volume from different node via mount -t nfs volumenode:/pool-name/dataset-name /path/to/local/mount.

For example, let's say we have two nodes ; node1 and node2. I have created a storageclass and pvc like below

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv
allowVolumeExpansion: true
parameters:
  poolname: "zfspv-pool"
  fstype: "zfs"
provisioner: zfs.csi.openebs.io
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: csi-zfspv
spec:
  storageClassName: openebs-zfspv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi 

Now, after applying the above yaml, say the scheduler decided to put the volume on node1. The volume will be created on the pool zfspv-pool that is there on node1. Now if I deploy the application using the above pvc and the kubernetes scheduler decides to put the pod on node2, the daemonset agent running on node2 just has to check whether it is an nfs volume, and then mount the volume using the mount -t nfs volumenode:/pool-name/dataset-name /target/path command. Note that since the volume is not present on node2, we can not fire any zfs set/get command, which we do in the local-volume case, where the pod always comes to the same node. With NFS it is not necessary that the pod comes to the same node where the volume is present; the pod can be running anywhere and can mount the volume.

I am so sorry for not describing it in detail. Let me know if it makes sense to you. Also, if you want, you can join the openebs-dev and openebs channels to discuss anything:

  1. https://kubernetes.slack.com/messages/openebs/
  2. https://kubernetes.slack.com/messages/openebs-dev/

pawanpraka1 avatar Jun 30 '20 06:06 pawanpraka1

Hi,

thank you for your detailed explanation. Please do not be sorry, I believe I learned more this way.

I believe I understood what you are telling me and implemented it: not setting WithTopology(topology) when creating the volumes, and configuring the storageclass to only allow topologies with a matching label. When I deploy, it now works as I expect. Perhaps you could have a brief look, and I'll start diving into writing tests next week.
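For reference, a storageclass restricted via allowedTopologies might look roughly like this (the label key and value follow the openebs.io/node=storage suggestion from earlier, and fstype: "nfs" is the hypothetical parameter discussed above; this is a sketch, not the final manifest from the PR):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-zfspv-nfs
parameters:
  poolname: "zfspv-pool"
  fstype: "nfs"
provisioner: zfs.csi.openebs.io
allowedTopologies:
- matchLabelExpressions:
  - key: openebs.io/node
    values:
    - storage       # datasets are only provisioned on nodes carrying this label
```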

I just joined the slack channels and will commit a (possibly dynamic) slice of an eye. Thanks very much for your kind help and your patience. Cheers!

cruwe avatar Jul 03 '20 10:07 cruwe

@cruwe I will take a look at the PR. Can you resolve the conflicts and update it? Also, can you DCO-sign (https://github.com/openebs/zfs-localpv/blob/master/CONTRIBUTING.md#sign-your-work) this PR?

pawanpraka1 avatar Jul 03 '20 13:07 pawanpraka1

Rebased on master and squashed to improve legibility. Thank you for your patience!

cruwe avatar Jul 06 '20 15:07 cruwe

@cruwe let me know if you are blocked on this. I see the PR is still in draft state.

pawanpraka1 avatar Jul 27 '20 11:07 pawanpraka1

Hi Pawan ... I am so sorry, I am loaded with work at the moment and will be very sluggish to reply for at least the next two weeks. I'll try to respond properly later this week, but I cannot promise anything without blatantly lying. Please excuse me. Cheers!

cruwe avatar Aug 03 '20 15:08 cruwe

Closing this PR as it has been stale for long. @cruwe, please raise another PR for this change.

sinhaashish avatar Aug 16 '23 01:08 sinhaashish