dRAID

declustered RAID

From ZFS RAID Level Considerations

In a ZFS dRAID (declustered RAID), the hot spare drives participate in the array: their spare capacity is reserved and distributed across all disks, and it is used for rebuilding when a drive fails. Depending on the configuration, this allows considerably faster rebuilds than a RAIDZ after a drive failure. More information can be found in the official OpenZFS documentation. [1]
Note 	dRAID is intended for setups with more than 10-15 disks. For fewer disks, a RAIDZ setup is usually the better choice.
Note 	The GUI requires one more disk than the minimum (i.e. dRAID1 needs 3), because it expects a spare disk to be added as well.

    dRAID1 or dRAID: requires at least 2 disks, one can fail before data is lost

    dRAID2: requires at least 3 disks, two can fail before data is lost

    dRAID3: requires at least 4 disks, three can fail before data is lost

From dRAID Introduction

dRAID is a variant of raidz that provides integrated distributed hot spares which allows for 
faster resilvering while retaining the benefits of raidz. A dRAID vdev is constructed from multiple 
internal raidz groups, each with D data devices and P parity devices. These groups are distributed 
over all of the children in order to fully utilize the available disk performance. This is known as 
parity declustering, and it has been an active area of research.
Storage newcomers should be especially careful with dRAID: it is a significantly more complex layout than a pool of traditional vdevs. The fast resilvering is a real benefit, but dRAID's necessarily fixed-length stripes reduce compression efficiency and hurt performance in some scenarios.
draid2:2d:14c:1s
     |  |  |  |
     |  |  |  └─> one spare
     |  |  |
     |  |  └─> total number of disks to use
     |  |
     |  └─> number of data disks per disk group
     |
     └─> number of parity disks per disk group
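The layout above can be tried out without 14 real disks by using sparse file-backed vdevs. This is a sketch for experimentation only; the file paths and the pool name test are placeholders, and file-backed pools should not be used for real data:

```shell
# Create 14 sparse 1 GiB files to stand in for real disks (placeholder paths).
for i in $(seq 1 14); do
    truncate -s 1G /tmp/draid-disk$i
done

# 2 parity + 2 data disks per group, 14 children total, 1 distributed spare.
zpool create test draid2:2d:14c:1s /tmp/draid-disk{1..14}

# Inspect the resulting vdev layout, including the distributed spare.
zpool status test
```

Note that the number of children must be at least data + parity + spares, otherwise pool creation fails.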

Usage

Create a dRAID1 with 1 parity and 2 data disks per group (on 3 disks):

zpool create backup draid1:2d <disk1> <disk2> <disk3>
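When a distributed spare is configured, a failed disk can be rebuilt onto it right away. A sketch, assuming a pool created with one spare; the angle-bracket disk names are placeholders, and draid1-0-0 follows OpenZFS's naming scheme for distributed spares (draid<parity>-<vdev id>-<spare id>):

```shell
# dRAID1 with 2 data disks per group and 1 distributed spare
# (4 children minimum: 1 parity + 2 data + 1 spare).
zpool create backup draid1:2d:4c:1s <disk1> <disk2> <disk3> <disk4>

# After <disk1> fails, resilver onto the distributed spare.
zpool replace backup <disk1> draid1-0-0

# Once a replacement disk is installed, replace the failed disk;
# the distributed spare is released again afterwards.
zpool replace backup <disk1> <new-disk>
```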