====== performance ======
PBS needs high IOPS performance. A benefit of ZFS is that you can accelerate it with SSDs by storing the metadata on a special vdev. That won't help that much with verify tasks, but still a bit: the HDDs are hit by less IO, because all the metadata that is read or written on the SSDs no longer has to be read or written on the HDDs.
In general, HDDs shouldn't be used with PBS, at least not if you store a lot of backups. If you do it anyway, it's highly recommended to also use SSDs for storing the metadata.
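A minimal sketch of adding such a special vdev to an existing pool (the pool name ''tank'' and the disk paths are placeholders; the special vdev should always be mirrored, because losing it loses the whole pool):
<code bash>
# Add a mirrored special vdev for metadata; device paths are placeholders
zpool add tank special mirror /dev/disk/by-id/ssd1 /dev/disk/by-id/ssd2
</code>
Note that only newly written metadata lands on the special vdev; metadata that already exists stays on the HDDs until it is rewritten.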
If using an L2ARC, switch it to the MFU algorithm only, to prevent writing lots of data with every backup: [[linux:fs:zfs:tuning#tune_l2arc_for_backups]]
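A sketch of that switch, assuming OpenZFS 2.0 or newer (which added the ''l2arc_mfuonly'' module parameter):
<code bash>
# Feed the L2ARC from the MFU list only (OpenZFS >= 2.0)
echo 1 > /sys/module/zfs/parameters/l2arc_mfuonly
# Make it persistent across reboots
echo "options zfs l2arc_mfuonly=1" >> /etc/modprobe.d/zfs.conf
</code>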
How to calculate the special vdev size: [[https://forum.level1techs.com/t/zfs-metadata-special-device-z/159954|ZFS Metadata Special Device: Z]]
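The linked thread bases the estimate on the pool's current block statistics. If a pool already exists, something like the following dumps them (a sketch; the exact output format varies between OpenZFS versions):
<code bash>
# Dump block statistics for pool "tank" (-L skips the slow leak check);
# the metadata lines in the output show how much space a special vdev
# would need for the data currently in the pool
zdb -Lbbbs tank
</code>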
So:
* RAIDZ on HDDs is very slow for PBS: throughput scales to n x a single HDD, but the IOPS of the whole vdev equal those of a single HDD
* For HDDs, use striped mirrors to multiply IOPS (e.g., for 6 disks: 3 striped vdevs of 2 mirrored HDDs each); see the sketch after this list
* Or use dRAID, where striped redundancy groups are part of the design
* Use at least a separate ZFS dataset for the datastore, then raise the record size: zfs set recordsize=1M YourPoolName/DatasetUsedAsDatastore
* Disable atime: zfs set atime=off backup2
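A minimal sketch of the above, assuming hypothetical disk paths under /dev/disk/by-id/ and a pool named ''tank'':
<code bash>
# Striped mirrors from 6 HDDs: 3 mirror vdevs, so ~3x the IOPS of one disk
# (all device paths and the pool name are placeholders)
zpool create tank \
  mirror /dev/disk/by-id/hdd1 /dev/disk/by-id/hdd2 \
  mirror /dev/disk/by-id/hdd3 /dev/disk/by-id/hdd4 \
  mirror /dev/disk/by-id/hdd5 /dev/disk/by-id/hdd6

# Alternative with dRAID: double parity, 4 data disks per group,
# 1 distributed spare, 11 disks total (draid<parity>:<data>d:<spares>s:<children>c)
zpool create tank draid2:4d:1s:11c /dev/disk/by-id/hdd{1..11}

# Separate dataset for the PBS datastore, with the settings from the list
zfs create tank/pbs
zfs set recordsize=1M tank/pbs
zfs set atime=off tank/pbs
</code>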
Don't use raidz1/2/3, as PBS needs high IOPS performance, and IOPS only scale with the number of striped vdevs, not with the number of disks. So 20 disks in a single raidz vdev wouldn't deliver more IOPS than a single HDD, and resilvering would also take forever.
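A rough back-of-the-envelope comparison, assuming a typical 7.2k RPM HDD delivers on the order of 150 random IOPS:
<code bash>
# 20-disk raidz2            = 1 vdev   ->  1 * ~150 = ~150 IOPS
# 10 striped 2-disk mirrors = 10 vdevs -> 10 * ~150 = ~1500 IOPS
</code>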