
Dysk Planning

Open khenidak opened this issue 7 years ago • 8 comments

Use this issue to track beta/stable functions + features.

BETA Goals

  • [x] Stable worker, specifically how it yields the CPU
  • [x] Fix issue #1 Specifically memory + allocation optimization
  • [x] Fix issue #7
  • [x] Fix issue #8
  • [x] Create easy to use samples
  • [x] Create easy to use verification scripts
  • [x] ~~return -EBUSY upon module unload when dysk has disks mounted~~ Kernel 4.10 does not count references correctly, hence users should not unload the dysk module
  • [x] Create Releases (a container for CLI + consistent release experience for the module)
  • [x] Full e2e integration with Kubernetes
    • [x] Flex vol driver
      • [x] Support for rw mounting (with auto-lease and --break-lease flag)
      • [x] Support for ro mounting (with lease management across multiple nodes)
    • [x] Flex vol driver installer as daemon set (including kernel module installer + cli)
    • [x] Cli support for converting a dysk to a K8S PV, e.g. dyskctl convert-pv --labels [dictionary] --secret-ref. The command converts the stdin output of dyskctl get -o json, cat ./dysk.json, or dyskctl create ..., then creates a k8s PV object which the user can then provision on k8s with kubectl create. Users should not suffer through manual conversion. Example:
dyskctl create ... -o json | dyskctl convert-pv --name my-name --secret-ref my-secret --labels [dictionary] | kubectl create ...  ## optionally --readonly --fstype "ext4"

## output piped to kubectl create stdin is:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: **my-name**
  labels:
    **value of --labels dictionary**
spec:
  flexVolume:
    driver: "dysk/dysk"
    fsType: **value of --fstype**
    readOnly: **if --read-only was passed to convert-pv**
    secretRef:
      name: **my-secret**
    options:
      accountName: "{Account-Name-Here-FROM-DYSK}"
      pageBlobPath: "{PAGE BLOB PATH- FROM - DYSK}"
  • [x] Samples
  • [ ] (STRETCH) Support for cache using dm-cache

Stable Goals

  • [x] Move to Azure Storage SAS for authentication.
  • [ ] Flexvol support for raid configuration (multi dysks) via LVM
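The last checklist item (raid across multiple dysks via LVM) could look roughly like the sketch below. This is not from the dysk repo; the device names `/dev/dyska` and `/dev/dyskb` are assumptions (dysk assigns device names when each disk is mounted), and the script only prints the LVM commands rather than running them, since they need root and real block devices.

```shell
#!/bin/sh
# Hypothetical sketch: stripe multiple dysk block devices into one
# logical volume via LVM, raid0-style, for aggregate throughput.
DYSKS="/dev/dyska /dev/dyskb"   # assumed device names
set -- $DYSKS                   # put devices in $1..$n so $# is the stripe count

# Print (dry-run) the commands a flexvol driver could issue:
echo "pvcreate $DYSKS"                        # init each dysk as an LVM PV
echo "vgcreate vg_dysk $DYSKS"                # group PVs into one volume group
echo "lvcreate --type striped -i $# -l 100%FREE -n lv_dysk vg_dysk"
echo "mkfs.ext4 /dev/vg_dysk/lv_dysk"         # filesystem on the striped LV
```

Striping (as opposed to `--type raid1`) adds no redundancy, which is arguably fine here since the page blobs themselves are replicated by Azure Storage.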

@ritazh @andyzhangx

khenidak avatar Jan 09 '18 15:01 khenidak

@khenidak Have you ever run https://github.com/khenidak/dysk/blob/master/kubernetes/dysk as a flexvolume driver? There are lots of bugs; it cannot even be loaded. And please note that the bash environment in the kubelet image is different from your dev env; some bash syntax is not supported by the kubelet's shell.
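As a hypothetical illustration of the portability issue described above: bash-only syntax such as `[[ $out == *Success* ]]` breaks when the image's `/bin/sh` is a minimal POSIX shell like dash or busybox ash. Flexvolume drivers report results as JSON with a `"status"` field, so a driver script often needs a substring check like this; the sample response string here is an assumption.

```shell
#!/bin/sh
# Assumed sample of a flexvol driver JSON response:
out='{"status": "Success"}'

# POSIX-portable substring match using `case` instead of a bashism:
case "$out" in
  *'"status": "Success"'*) result="mount ok" ;;
  *)                       result="mount failed" ;;
esac
echo "$result"
```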

andyzhangx avatar Jan 19 '18 09:01 andyzhangx

Hmm, OK, I will go back and check. In any case, thank you for putting it back on track.

khenidak avatar Jan 19 '18 15:01 khenidak

@khenidak, I have a PR to fix it all.

andyzhangx avatar Jan 19 '18 15:01 andyzhangx

The commit at HEAD was not the latest code I had :-( Thanks again.

khenidak avatar Jan 19 '18 15:01 khenidak

That's bad, you need to merge... Please add me as a reviewer if you want to change the flex vol driver. I am quite familiar with flex vol driver development; you can propose the design and requirements, and I can do the development.

andyzhangx avatar Jan 19 '18 16:01 andyzhangx

No need to, you have put everything back on track (basically took care of a mess I created :-)). Thanks again.

khenidak avatar Jan 19 '18 17:01 khenidak

I'm wondering if dysk is the solution to all my Azure Disk problems with Kubernetes on Azure (wishful thinking, I know). As far as I can tell from this issue, almost everything in the checklist above is done except the raid+lvm config -- is raid+lvm really a blocker for beta? Can this be moved forward? I've seen no commits in this repo for two months.

rocketraman avatar Dec 09 '18 19:12 rocketraman

@rocketraman Your issue should be related to this bug: https://github.com/andyzhangx/demo/blob/master/issues/azuredisk-issues.md#14-azure-disk-attachdetach-failure-mount-issue-io-error. It's due to the dirty VM cache, which would lead to lots of strange disk attach/detach issues. The fix has been verified in more than 150 k8s clusters by one customer, with no disk attach/detach issues any more. I would encourage you to use 1.11.6, 1.12.4, or 1.13.0 when those versions come out.

andyzhangx avatar Dec 10 '18 02:12 andyzhangx