
Import existing dataset

jr0dd opened this issue 3 years ago • 16 comments

The docs are not very clear on getting this working. I was originally just using hostPath PVs/PVCs, but figured I might as well take full advantage of your CSI driver for my pre-existing datasets. From what I gather, it doesn't like me using a child dataset for the volume.

zfs-node logs:

I0716 11:50:36.669732       1 grpc.go:72] GRPC call: /csi.v1.Node/NodePublishVolume requests {"target_path":"/var/lib/kubelet/pods/23d412fb-829a-4167-bb60-b3b2414e13f3/volumes/kubernetes.io~csi/heimdall/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"zfs"}},"access_mode":{"mode":1}},"volume_context":{"openebs.io/poolname":"deadpool"},"volume_id":"containous/datastore/heimdall"}
E0716 11:50:36.670707       1 grpc.go:79] GRPC error: rpc error: code = Internal desc = invalid resource name "containous/datastore/heimdall": [may not contain '/']

apiVersion: v1
kind: PersistentVolume
metadata:
  name: heimdall
spec:
  capacity:
    storage: 100Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: openebs-zfspv
  csi:
    driver: zfs.csi.openebs.io
    fsType: zfs
    volumeAttributes:
      openebs.io/poolname: deadpool
    volumeHandle: containous/datastore/heimdall
---
apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: heimdall
  namespace: home
spec:
  capacity: "104857600"
  fsType: zfs
  ownerNodeID: ix-truenas
  poolName: deadpool
  volumeType: DATASET
status:
  state: Ready
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: heimdall
  namespace: home
spec:
  storageClassName: openebs-zfspv
  volumeName: heimdall
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

jr0dd avatar Jul 16 '21 12:07 jr0dd

@jr0dd can you share the ZFSVolume CR also?

The poolname parameter can be treated as the parent dataset, so you don't need to add the dataset path to the volume handle. You need to do this for the PV:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: heimdall
spec:
  capacity:
    storage: 100Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: openebs-zfspv
  csi:
    driver: zfs.csi.openebs.io
    fsType: zfs
    volumeAttributes:
      openebs.io/poolname: deadpool/containous/datastore
    volumeHandle: heimdall

And you need to create the ZFSVolume CR with the poolName parameter set to deadpool/containous/datastore.
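
For reference, a minimal sketch of what that ZFSVolume CR could look like, reusing the capacity and node name from the original post (note the namespace: as comes up further down in this thread, the CR has to live in the openebs namespace):

apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: heimdall
  namespace: openebs        # the CSI driver looks up the CR in the openebs namespace
spec:
  capacity: "104857600"     # 100Mi, matching the PV above
  fsType: zfs
  ownerNodeID: ix-truenas
  poolName: deadpool/containous/datastore
  volumeType: DATASET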

pawanpraka1 avatar Jul 16 '21 13:07 pawanpraka1

@pawanpraka1 Oops, copy & paste fail. I updated my initial post with the ZFSVolume. I will try adjusting my PVs later today and try again.

jr0dd avatar Jul 16 '21 14:07 jr0dd

Well, I tried again and the pod is just stuck creating because zfspv can't find the dataset. I did do a zfs umount deadpool/containous/datastore/heimdall before applying the YAMLs.

I0716 16:05:31.491655       1 zfsnode.go:114] zfs node controller: updated node object openebs/ix-truenas
I0716 16:05:31.491920       1 zfsnode.go:139] Got update event for zfs node openebs/ix-truenas
I0716 16:05:58.162818       1 grpc.go:72] GRPC call: /csi.v1.Node/NodePublishVolume requests {"target_path":"/var/lib/kubelet/pods/0a20b995-75c0-4786-b21f-a3b927f88537/volumes/kubernetes.io~csi/heimdall/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"zfs"}},"access_mode":{"mode":1}},"volume_context":{"openebs.io/poolname":"deadpool/containous/datastore"},"volume_id":"heimdall"}
E0716 16:05:58.195141       1 grpc.go:79] GRPC error: rpc error: code = Internal desc = zfsvolumes.zfs.openebs.io "heimdall" not found
I0716 16:06:31.509065       1 zfsnode.go:100] zfs node controller: node pools updated current=[{Name:boot-pool UUID:6825819564965185904 Free:{i:{value:232891412480 scale:0} d:{Dec:<nil>} s:227433020Ki Format:BinarySI}} {Name:deadpool UUID:13627033455110728882 Free:{i:{value:6594319317104 scale:0} d:{Dec:<nil>} s:6594319317104 Format:DecimalSI}} {Name:k8s UUID:14052123548571712969 Free:{i:{value:229095768064 scale:0} d:{Dec:<nil>} s: Format:BinarySI}}], required=[{Name:boot-pool UUID:6825819564965185904 Free:{i:{value:232891293696 scale:0} d:{Dec:<nil>} s: Format:BinarySI}} {Name:deadpool UUID:13627033455110728882 Free:{i:{value:6594319317104 scale:0} d:{Dec:<nil>} s: Format:BinarySI}} {Name:k8s UUID:14052123548571712969 Free:{i:{value:229095702528 scale:0} d:{Dec:<nil>} s: Format:BinarySI}}]

apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: heimdall
  namespace: home
spec:
  capacity: "104857600"
  fsType: zfs
  shared: "yes"
  ownerNodeID: ix-truenas
  poolName: deadpool/containous/datastore
  volumeType: DATASET
status:
  state: Ready
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: heimdall
spec:
  capacity:
    storage: 100Mi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: openebs-zfspv
  csi:
    driver: zfs.csi.openebs.io
    fsType: zfs
    volumeAttributes:
      openebs.io/poolname: deadpool/containous/datastore
    volumeHandle: heimdall
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: heimdall
  namespace: home
spec:
  storageClassName: openebs-zfspv
  volumeName: heimdall
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

jr0dd avatar Jul 16 '21 16:07 jr0dd

@jr0dd you have to create the ZFSVolume CR in the openebs namespace.
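
In other words, keeping the spec of the ZFSVolume posted above as-is, only the metadata needs to change; a sketch of just that part:

apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: heimdall
  namespace: openebs   # instead of "home"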

pawanpraka1 avatar Jul 16 '21 16:07 pawanpraka1

@pawanpraka1 OK, making a little progress.

I0716 17:20:40.959483       1 grpc.go:72] GRPC call: /csi.v1.Node/NodePublishVolume requests {"target_path":"/var/lib/kubelet/pods/292a4f71-d0aa-4748-a65f-df0ca2e46523/volumes/kubernetes.io~csi/heimdall/mount","volume_capability":{"AccessType":{"Mount":{"fs_type":"zfs"}},"access_mode":{"mode":1}},"volume_context":{"openebs.io/poolname":"deadpool/containous/datastore"},"volume_id":"heimdall"}
E0716 17:20:41.328123       1 zfs_util.go:533] zfs: could not mount the dataset deadpool/containous/datastore/heimdall cmd [mount deadpool/containous/datastore/heimdall] error: cannot mount '/mnt/var/lib/kubelet/pods/292a4f71-d0aa-4748-a65f-df0ca2e46523/volumes/kubernetes.io~csi/heimdall/mount': failed to create mountpoint: Read-only file system
E0716 17:20:41.328152       1 grpc.go:79] GRPC error: rpc error: code = Internal desc = rpc error: code = Internal desc = zfs: mount failed err : not able to mount, cannot mount '/mnt/var/lib/kubelet/pods/292a4f71-d0aa-4748-a65f-df0ca2e46523/volumes/kubernetes.io~csi/heimdall/mount': failed to create mountpoint: Read-only file system

jr0dd avatar Jul 16 '21 17:07 jr0dd

Hmmmm, it seems like in your k8s environment the kubelet directory is /mnt/var/lib/kubelet. Can you update the operator YAML, replace /var/lib/kubelet with /mnt/var/lib/kubelet in all the places, and apply it?
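
For illustration, the kubelet path typically shows up in the hostPath volumes of the zfs-localpv node DaemonSet in the operator YAML; a rough sketch of the kind of replacement meant here (the volume names and exact sub-paths below are assumptions and may differ between releases):

      volumes:
        # before: path: /var/lib/kubelet/...   after: path: /mnt/var/lib/kubelet/...
        - name: registration-dir
          hostPath:
            path: /mnt/var/lib/kubelet/plugins_registry/
            type: DirectoryOrCreate
        - name: plugin-dir
          hostPath:
            path: /mnt/var/lib/kubelet/plugins/zfs-localpv/
            type: DirectoryOrCreate
        - name: pods-mount-dir
          hostPath:
            path: /mnt/var/lib/kubelet/
            type: Directory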

btw, which k8s flavour are you using?

pawanpraka1 avatar Jul 17 '21 08:07 pawanpraka1

I'm using TrueNAS SCALE, which has k3s pre-loaded. I'm just going to stick with hostPath PVs/PVCs on this node. Datasets that contain media are shared over Samba to local computers as well as mounted in some containers, so losing that access is not ideal.

jr0dd avatar Jul 20 '21 22:07 jr0dd

@jr0dd sure.

pawanpraka1 avatar Jul 22 '21 05:07 pawanpraka1

Hmmmm, it seems like in your k8s environment the kubelet directory is /mnt/var/lib/kubelet. Can you update the operator YAML, replace /var/lib/kubelet with /mnt/var/lib/kubelet in all the places, and apply it?

btw, which k8s flavour are you using?

Because in TrueNAS SCALE the altroot property of the zpool is "/mnt", we need to bind mount /var/lib/kubelet to /mnt/var/lib/kubelet as a workaround.

Add ExecStartPre=/usr/bin/mount -o bind /var/lib/kubelet /mnt/var/lib/kubelet to /lib/systemd/system/k3s.service, then change mountPropagation to Bidirectional for host-root in the DaemonSet.

The config of /lib/systemd/system/k3s.service:

[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
Wants=network-online.target

[Install]
WantedBy=multi-user.target

[Service]
Type=notify
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStartPre=-/usr/bin/mount -o bind /var/lib/kubelet /mnt/var/lib/kubelet
ExecStart=/usr/local/bin/k3s \
    server \
        '--flannel-backend=none' \
        '--disable=traefik,metrics-server,local-storage' \
        '--disable-kube-proxy' \
        '--disable-network-policy' \
        '--disable-cloud-controller' \
        '--node-name=ix-truenas' \
        '--docker' \

The DaemonSet of the ZFS plugin:

        - mountPath: /host
          mountPropagation: Bidirectional
          name: host-root
          readOnly: true

jim3ma avatar Sep 09 '21 05:09 jim3ma

Great @jim3ma.

Because in TrueNAS SCALE the altroot property of the zpool is "/mnt", we need to bind mount /var/lib/kubelet to /mnt/var/lib/kubelet as a workaround.

Have you tried replacing /var/lib/kubelet with /mnt/var/lib/kubelet in the operator YAML and then applying it?

pawanpraka1 avatar Sep 09 '21 06:09 pawanpraka1

Have you tried replacing /var/lib/kubelet with /mnt/var/lib/kubelet in the operator YAML and then applying it?

I have just bind mounted it to /mnt/var/lib/kubelet; everything is OK.

PS: I also changed the host-root mountPropagation to Bidirectional and disabled the k3s addon zfs-operator.

jim3ma avatar Sep 09 '21 09:09 jim3ma

PS: I also changed the host-root mountPropagation to Bidirectional and disabled the k3s addon zfs-operator.

How exactly are you disabling the zfs-operator? I would much rather use the helm chart in SCALE.

jr0dd avatar Sep 13 '21 20:09 jr0dd

How exactly are you disabling the zfs-operator? I would much rather use the helm chart in SCALE.

@jr0dd touch an empty file: /mnt/data/ix-applications/k3s/server/manifests/zfs-operator.yaml.skip; then you can control the zfs-operator manually.

/mnt/data is the zpool that k3s uses.

jim3ma avatar Sep 14 '21 00:09 jim3ma

I tried that before and it didn’t work a few weeks back. I’ll try again. Thanks!

jr0dd avatar Sep 14 '21 02:09 jr0dd

I just had to do this on a bare Ubuntu machine with k3s, and adopting existing datasets works as expected. I had to set the dataset's mountpoint to legacy beforehand; otherwise the mount will be empty (zfs set mountpoint=legacy my-pool/my-dataset).

apiVersion: zfs.openebs.io/v1
kind: ZFSVolume
metadata:
  name: my-dataset
  namespace: openebs
spec:
  capacity: "200Gi"
  fsType: zfs
  shared: "yes"
  ownerNodeID: my-node-name
  poolName: my-pool
  volumeType: DATASET
status:
  state: Ready
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-dataset
spec:
  capacity:
    storage: 200Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: openebs-zfspv
  csi:
    driver: zfs.csi.openebs.io
    fsType: zfs
    volumeAttributes:
      openebs.io/poolname: my-pool
    volumeHandle: my-dataset
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-dataset
spec:
  storageClassName: openebs-zfspv
  volumeName: my-dataset
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Gi

HenningCash avatar Nov 30 '21 19:11 HenningCash