zfs-ha
Spread ZFS Pools across nodes
Hi, could I create several resource groups (group-vol1, group-vol2, group-vol3, group-vol4) and spread them across the cluster nodes?
Yes, you can assign different pools to different cluster nodes. A common setup with two hosts is to run one pool on host one and another pool on host two. This leverages all of the available resources while still providing reasonable failover if a node becomes unavailable.
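A minimal sketch of that layout using the pcs shell. The pool names (tank1, tank2) and node names (node1, node2) are hypothetical, and the exact resource agents and group contents will depend on how you followed the guide; the point is the per-group location preference:

```shell
# One resource group per pool (hypothetical names tank1/tank2).
pcs resource create vol1 ocf:heartbeat:ZFS pool=tank1 --group group-vol1
pcs resource create vol2 ocf:heartbeat:ZFS pool=tank2 --group group-vol2

# Location preferences spread the groups across the nodes.
# The scores are preferences, not mandates, so either node can
# still take over both groups if its peer fails.
pcs constraint location group-vol1 prefers node1=100
pcs constraint location group-vol2 prefers node2=100
```

With more groups (group-vol3, group-vol4, ...), repeat the pattern and alternate the preferred node.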
Edmund White
On Apr 3, 2021, at 2:44 PM, Riccardo @.***> wrote:
And could it work using two SAS controllers per host, each connected to a different chain? Like:

- Controller 1, Host 1 -> Enclosure Chain 1
- Controller 1, Host 2 -> Enclosure Chain 1
- Controller 2, Host 1 -> Enclosure Chain 2
- Controller 2, Host 2 -> Enclosure Chain 2

with each node serving two volumes per chain?
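In that topology, each pool would be built only from disks reachable through one chain, so the pool fails over along with the chain's connectivity. A hedged sketch, assuming hypothetical multipath aliases such as chain1-disk0 that you would define yourself in /etc/multipath.conf:

```shell
# Hypothetical aliases: each /dev/mapper name maps a disk to its chain.
# Pools on chain 1 (reachable from both hosts via their Controller 1):
zpool create tank1 mirror /dev/mapper/chain1-disk0 /dev/mapper/chain1-disk1
zpool create tank2 mirror /dev/mapper/chain1-disk2 /dev/mapper/chain1-disk3

# Pools on chain 2 (reachable from both hosts via their Controller 2):
zpool create tank3 mirror /dev/mapper/chain2-disk0 /dev/mapper/chain2-disk1
zpool create tank4 mirror /dev/mapper/chain2-disk2 /dev/mapper/chain2-disk3
```

Since both hosts see both chains, any pool can still be imported on either node; the chain split just determines which controller carries each pool's traffic.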
In Kubernetes environments, one can also leverage OpenEBS cStor for replicated pools across multiple nodes:
- https://github.com/mayadata-io/cstor/wiki/Using-uZFS-for-storing-cStor-Volume-Data
The cStor data engine runs ZFS in user space and combines a collection of such ZFS instances, running on multiple nodes, into replicated storage that is resilient to node failures.