
Spread ZFS Pools across nodes

Open rbicelli opened this issue 3 years ago • 3 comments

Hi, could I create multiple resource groups (group-vol1, group-vol2, group-vol3, group-vol4) and spread them across the cluster nodes?

rbicelli avatar Apr 03 '21 19:04 rbicelli

Yes, you can have different pools assigned to different cluster nodes. A common setup with two hosts is to run one pool on host one and another pool on host two. It's a way of leveraging all the available resources while retaining reasonable failover if a node becomes unavailable.
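As a rough sketch of that layout with Pacemaker's `pcs` tool: two resource groups, each importing its own pool, with a location preference pinning each group to a different node. The pool names (`tank1`, `tank2`), node names, and the use of the `ocf:heartbeat:ZFS` resource agent are assumptions here; substitute whatever agent and names your cluster actually uses.

```shell
# One ZFS pool resource per group (names are illustrative).
pcs resource create vol1-pool ocf:heartbeat:ZFS pool=tank1 --group group-vol1
pcs resource create vol2-pool ocf:heartbeat:ZFS pool=tank2 --group group-vol2

# Prefer a different node for each group; if a node fails, its group
# can still fail over to the surviving node.
pcs constraint location group-vol1 prefers node1=100
pcs constraint location group-vol2 prefers node2=100
```

Any shared resources a pool depends on (virtual IPs, NFS exports, etc.) would go into the same group so they migrate together.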

Edmund White


ewwhite avatar Apr 03 '21 19:04 ewwhite

And could it work using 2 SAS controllers per host, each connecting to a different chain? Like:

  • Controller 1, Host 1 -> Enclosure Chain 1
  • Controller 1, Host 2 -> Enclosure Chain 1
  • Controller 2, Host 1 -> Enclosure Chain 2
  • Controller 2, Host 2 -> Enclosure Chain 2

with each node serving 2 volumes per chain?

rbicelli avatar Apr 15 '21 16:04 rbicelli

In Kubernetes environments, one can also leverage OpenEBS cStor for replicated pools across multiple nodes:

  • https://github.com/mayadata-io/cstor/wiki/Using-uZFS-for-storing-cStor-Volume-Data

    cStor Data engine makes it possible to run ZFS in user space and use a collection of such ZFS instances running on multiple nodes to provide a replicated storage resilient against node failures.
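    As a sketch of what the linked approach looks like in practice, a `StoragePoolClaim` for the (legacy) OpenEBS cStor operator can request one node-local pool per node; the claim name, `maxPools` count, and `poolType` below are placeholders, not values from this thread:

    ```shell
    # Hypothetical StoragePoolClaim; requires the legacy OpenEBS cStor
    # operator to be installed in the cluster.
    kubectl apply -f - <<'EOF'
    apiVersion: openebs.io/v1alpha1
    kind: StoragePoolClaim
    metadata:
      name: cstor-pool-multi-node
    spec:
      name: cstor-pool-multi-node
      type: disk
      maxPools: 3          # one node-local pool on each of three nodes
      poolSpec:
        poolType: striped  # how each node-local pool lays out its disks
    EOF
    ```

    Volume replication across those node-local pools is then handled by the cStor target, which is what makes the storage resilient to a single node failure.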

almereyda avatar Oct 11 '22 00:10 almereyda