ZFS pools can be migrated between controllers.
In the event of controller failure, the controller's ZFS pools are automatically taken over by another controller.
Migrate ZFS pools between controllers without any client downtime.
Pool hdd is currently running on controller1.
Pool hdd is migrating from controller1 to controller2.
Migration is complete.
If new firmware has been installed on controller1, it can now be rebooted.
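On ZFS systems, a pool handoff like this is typically built on `zpool export` on the source controller and `zpool import` on the target. A dry-run sketch of that sequence, using the pool and controller names from the demo; the helper function and exact flags are illustrative, not this product's actual tooling:

```python
def migration_plan(pool, src, dst):
    """Return the ordered commands a pool migration would run on each host.

    Dry-run illustration only: real HA software also quiesces I/O,
    re-points client paths, and verifies the import before declaring success.
    """
    return [
        (src, ["zpool", "export", pool]),        # release the pool on the source
        (dst, ["zpool", "import", "-f", pool]),  # take it over on the target
        (dst, ["zpool", "status", pool]),        # confirm the pool is healthy
    ]

for host, cmd in migration_plan("hdd", "controller1", "controller2"):
    print(f"{host}: {' '.join(cmd)}")
```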
This creates an “always on” infrastructure.
This video shows a VMware guest running Windows Server 2012 R2.
The Iometer benchmark tool is started, and the storage pool is migrated from controller1 to controller2.
A video is also played while migration takes place.
Each storage node has a continuous heartbeat packet written to it.
The heartbeat is maintained exclusively in persistent storage.
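The heartbeat mechanism can be sketched with a file standing in for the shared persistent storage; the staleness threshold and record format here are illustrative values, not the product's actual tuning:

```python
import json
import os
import tempfile
import time

# Seconds without an update before a peer is presumed dead (illustrative value).
HEARTBEAT_TIMEOUT = 5.0

def write_heartbeat(path, controller, now=None):
    """Write this controller's heartbeat record to persistent storage."""
    record = {"controller": controller,
              "timestamp": now if now is not None else time.time()}
    with open(path, "w") as f:
        json.dump(record, f)
    return record

def is_alive(path, now=None):
    """Check a peer's heartbeat; a stale or missing record means failure."""
    now = now if now is not None else time.time()
    try:
        with open(path) as f:
            record = json.load(f)
    except FileNotFoundError:
        return False
    return now - record["timestamp"] < HEARTBEAT_TIMEOUT

path = os.path.join(tempfile.mkdtemp(), "controller1.hb")
write_heartbeat(path, "controller1", now=100.0)
print(is_alive(path, now=101.0))   # recent heartbeat: peer considered alive
print(is_alive(path, now=110.0))   # stale heartbeat: failover would be triggered
```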
In the event of controller failure, the controller is fenced and the ZFS pools are handled by another controller.
controller1 has failed and controller2 has taken over the management of pool hdd.
This happens automatically for head-based controllers and for storage-based controllers with IPMI support.
For controllers without internal storage, fencing is handled by the controllers using SCSI-3 persistent reservations.
This provides exclusive write access to each storage node by only one controller at any time.
This is known as "front-end" fencing.
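A toy model of how a persistent reservation yields single-writer semantics (this simplifies the real SCSI-3 protocol, which distinguishes registration, reservation types, and preempt service actions):

```python
class StorageNode:
    """Simplified SCSI-3 persistent-reservation semantics: registered
    controllers may compete, but only the reservation holder may write."""

    def __init__(self):
        self.registered = set()
        self.holder = None

    def register(self, controller):
        self.registered.add(controller)

    def reserve(self, controller):
        """Try to take the reservation; returns True if this controller holds it."""
        if controller not in self.registered:
            raise PermissionError("must register before reserving")
        if self.holder is None:
            self.holder = controller
        return self.holder == controller

    def preempt(self, controller, victim):
        """Fencing: a surviving controller evicts the failed holder."""
        if controller not in self.registered:
            raise PermissionError("must register before preempting")
        self.registered.discard(victim)
        self.holder = controller

    def write(self, controller, data):
        if controller != self.holder:
            raise PermissionError(f"{controller} is fenced off this node")
        return len(data)

node = StorageNode()
node.register("controller1")
node.register("controller2")
node.reserve("controller1")          # controller1 becomes the exclusive writer
node.write("controller1", b"ok")
# controller1 fails; controller2 preempts the reservation and takes over.
node.preempt("controller2", victim="controller1")
node.write("controller2", b"ok")
```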
For controllers with internal storage, fencing is handled by IPMI. Dell DRAC, HP iLO, Supermicro ASPEED and most other vendors are supported.
If using dedicated storage nodes, fencing is handled by the storage nodes by removing the fenced controller from the storage node's SAN service.
This is known as "back-end" fencing.
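Back-end fencing can be sketched as each storage node dropping the fenced controller from its SAN access list (an illustrative model, not the actual storage-node software):

```python
class SanService:
    """Per-storage-node SAN access list: only listed controllers may connect."""

    def __init__(self, allowed):
        self.allowed = set(allowed)

    def fence(self, controller):
        """Back-end fencing: remove the failed controller from the service."""
        self.allowed.discard(controller)

    def connect(self, controller):
        if controller not in self.allowed:
            raise ConnectionRefusedError(f"{controller} has been fenced")
        return True

# Every storage node fences the failed controller, so it can no longer
# write anywhere, while the surviving controller keeps full access.
storage_nodes = [SanService({"controller1", "controller2"}) for _ in range(3)]
for node in storage_nodes:
    node.fence("controller1")
assert all(node.connect("controller2") for node in storage_nodes)
```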