Built For Flash

Because Zetavault uses a switch-based architecture, its performance is vastly superior to daisy-chained, JBOD-based systems.

InfiniBand fabrics typically have 10% to 20% of the latency of Ethernet-based networks.

InfiniBand-based storage nodes offer the lowest latency and employ remote direct memory access (RDMA) to further reduce latency and CPU utilization.

We also support Ethernet, though we recommend at least 40 Gigabit Ethernet for flash-based clusters.

NVMe Support

Zetavault has full support for NVMe disks. This includes PCIe-based cards and 2.5" disks.

NVMe disks can be split into multiple volumes. These volumes are exported directly to the controller nodes.

These volumes can be used to cache ZFS pools or to build NVMe-based ZFS pools.
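As an illustration only, the sketch below shows how exported NVMe volumes might be attached to ZFS on a controller node using standard zpool commands. The pool names and /dev paths are assumptions for the example; Zetavault's own management tools may drive these steps for you.

```python
import subprocess

def run(cmd):
    """Run a command and raise if it fails."""
    subprocess.run(cmd, check=True)

# Hypothetical NVMe volumes exported to this controller node.
NVME_CACHE_DEV = "/dev/nvme0n1"                     # assumed device name
NVME_POOL_DEVS = ["/dev/nvme1n1", "/dev/nvme2n1"]   # assumed device names

# Option 1: use an NVMe volume as an L2ARC read cache for an existing pool.
run(["zpool", "add", "tank", "cache", NVME_CACHE_DEV])

# Option 2: build an all-NVMe mirrored pool from two NVMe volumes.
run(["zpool", "create", "flashpool", "mirror"] + NVME_POOL_DEVS)
```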

Intel NVMe
Intel DC P3700 PCIe-based NVMe card.

Ethernet Network

RDMA-based iSCSI is supported on Ethernet adapters and switches that support RDMA over Converged Ethernet (RoCE).

Adapters and switches that do not support RoCE can use TCP/IP-based iSCSI.
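As a rough sketch of the difference, the steps below show how an open-iscsi initiator is typically pointed at a target and, on RoCE-capable hardware, switched to the iSER (iSCSI Extensions for RDMA) transport. The portal address and target IQN are placeholders, and Zetavault may handle this configuration through its own interface.

```python
import subprocess

PORTAL = "192.168.10.1"                             # assumed storage-node address
TARGET = "iqn.2024-01.com.example:storage.target0"  # placeholder IQN

def iscsiadm(*args):
    """Thin wrapper around the open-iscsi CLI."""
    subprocess.run(["iscsiadm", *args], check=True)

# Discover targets exported by the storage node.
iscsiadm("-m", "discovery", "-t", "sendtargets", "-p", PORTAL)

# On RoCE-capable adapters, switch the node record to the iSER (RDMA) transport;
# skip this step to stay on plain TCP/IP iSCSI.
iscsiadm("-m", "node", "-T", TARGET, "-p", PORTAL,
         "-o", "update", "-n", "iface.transport_name", "-v", "iser")

# Log in to the target.
iscsiadm("-m", "node", "-T", TARGET, "-p", PORTAL, "--login")
```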

For flash-based systems we recommend at least 40 Gigabit Ethernet.

An Ethernet switch is only required when going beyond two nodes. For two-node setups, a point-to-point connection can be used for storage replication.
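As a minimal sketch, a point-to-point replication link can be brought up by giving each end a static address on a small private subnet. The interface name and addresses below are assumptions; the actual setup in Zetavault may differ.

```python
import subprocess

# Assumed interface name for the direct-connect port; adjust per node.
IFACE = "ens1f0"
# Assumed /30 point-to-point subnet: use .1 on one node and .2 on the other.
ADDRESS = "10.255.255.1/30"

def ip(*args):
    """Invoke the iproute2 command-line tool."""
    subprocess.run(["ip", *args], check=True)

# Assign a static address to the direct link and bring the interface up.
ip("addr", "add", ADDRESS, "dev", IFACE)
ip("link", "set", IFACE, "up")
```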

InfiniBand Network

InfiniBand is also supported.

56 Gigabit InfiniBand latency is typically around 1.5 microseconds, compared to around 12 microseconds for 10 Gigabit Ethernet, roughly an 8x reduction.

Because InfiniBand is widely deployed in high-performance computing (HPC), its hardware benefits from economies of scale and is relatively inexpensive.

40 Gigabit InfiniBand hardware, for example, is much cheaper than 16 Gigabit Fibre Channel hardware, yet offers higher performance.

An InfiniBand switch is only required when going beyond two nodes. As with Ethernet, a point-to-point connection can be used for two-node setups.

Mellanox switch
8 port Mellanox 40 Gigabit InfiniBand switch.

Mellanox switch
36 port Mellanox 56 Gigabit InfiniBand switch.