Hybrid RAID

Terminology

The term "hybrid RAID" has been coined by the vendor Synology as SHR (Synology Hybrid Raid). It creates a sort of RAID using all available space, even when disks in the RAID are of different Size.

This article describes the functionality of such a system and the means to build one yourself with Linux tools.

Requirements

Depending on your needs, you will need some of the following software:

  • mdadm (required)
  • lvm2 (required if you want multiple volumes, possibly with different filesystems)
  • btrfs (if you want bitrot-detection)
  • any filesystem

The hardware requirements are as follows:

  • two or more disks

Principle

Hybrid RAID builds multiple RAID sets on hard disks of different sizes. These sets are combined into one pool, and volumes are created from this pool. A hybrid RAID can be expanded by adding HDDs or by replacing small ones with bigger ones.
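
Sketched as a stack, the layers involved (as used in this article) are:

  partitions on each disk
    -> mdadm RAID sets (one md device per set)
      -> LVM physical volumes (one per md device)
        -> one LVM volume group (the pool)
          -> LVM logical volumes (the volumes)
            -> filesystems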

Example

The system contains the following HDDs:

  • 500GB
  • 500GB
  • 1TB
  • 2TB
  • 2TB

Traditionally, a single RAID across all five disks could only use 500GB (the size of the smallest disk) on each HDD, resulting in 2TB usable (33.3% of the 6TB raw) with RAID-5 or 1.5TB (25%) with RAID-6. You could create two RAID-1 sets to use the space more efficiently (500GB + 2TB = 2.5TB, 41.7%), but that wastes the 1TB drive entirely.

Hybrid RAID as described below puts partitions into the RAID instead of whole disks. This results in multiple RAID sets:

Disk    500GB   500GB   1TB     2TB     2TB     Space
Set 1   500GB   500GB   500GB   500GB   500GB   RAID-6: 1.5TB
Set 2   -       -       500GB   500GB   500GB   RAID-5: 1TB
Set 3   -       -       -       1TB     1TB     RAID-1: 1TB

This results in 3.5TB usable space of 6TB raw (58.3%).

Building your own

This section describes how the above setup would grow over time. Assumptions:

  • /dev/sda is the system HDD
  • /dev/sdb and /dev/sdc are 500GB
  • /dev/sdd is 1TB
  • /dev/sde and /dev/sdf are 2TB

Initial Setup

This part requires the 2x 500GB HDDs. Btrfs is used as the filesystem because it checksums all blocks, allowing it to detect bitrot and silent corruption; RAID by itself is not capable of detecting flipped bits or silent read errors. Note that with a single copy of the data, btrfs can detect corruption but not repair it.
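
Once the filesystem created below is in use, a periodic scrub reads every block and verifies its checksum, surfacing corruption early. A minimal sketch, assuming the volume is mounted at /mnt/storage (an example mount point, not mandated by the setup):

  # verify all checksums; runs in the background
  btrfs scrub start /mnt/storage
  # check progress and error counters
  btrfs scrub status /mnt/storage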

  1. partition the hard disks so each contains a single partition spanning the whole drive (see the sketch after this list). It doesn't matter whether GPT or DOS labels are used, as long as the partitions are the same size.
  2. create an md-device (RAID-1): mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
  3. create the lvm-physical-volume: pvcreate /dev/md1
  4. create the lvm-volume-group: vgcreate storage /dev/md1
  5. create the lvm-logical-volume: lvcreate -n Volume1 -l 100%FREE storage (the mirror provides slightly less than 500G, so -l 100%FREE is safer than -L 500G)
  6. create the filesystem: mkfs.btrfs -L Volume1 /dev/storage/Volume1
  7. mount it somewhere, e.g. mount /dev/storage/Volume1 /mnt/storage
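
A sketch of step 1 and of making the setup survive reboots. The parted invocations and the mdadm.conf path (/etc/mdadm/mdadm.conf, as on Debian-based systems) are assumptions; adjust them to your distribution:

  # one partition spanning each 500GB drive (GPT label)
  parted -s /dev/sdb mklabel gpt mkpart primary 1MiB 100%
  parted -s /dev/sdc mklabel gpt mkpart primary 1MiB 100%

  # record the array so it is assembled on boot
  mdadm --detail --scan >> /etc/mdadm/mdadm.conf

  # mount on boot (mount point is an example)
  echo '/dev/storage/Volume1 /mnt/storage btrfs defaults 0 0' >> /etc/fstab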

The storage is 500GB in size.

First expansion

Somehow we ended up with a spare 1TB HDD and decided to shove it into the system.

  1. partition the hard disk so the first partition is the same size as the one on the 500GB drives and a second partition takes up the remaining space of the drive (see the sketch after this list).
  2. add the new partition and reshape the md-device to RAID-5: mdadm /dev/md1 --add /dev/sdd1 && mdadm --grow /dev/md1 --level=5 --raid-devices=3
  3. wait for the reshape to finish
  4. grow the lvm-physical-volume: pvresize /dev/md1
  5. grow the logical-volume: lvresize -l +100%FREE /dev/storage/Volume1
  6. grow the filesystem: btrfs filesystem resize max /mnt/storage (btrfs resizing operates on the mount point of the mounted filesystem, not the device)
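
A sketch of the partitioning and of watching the reshape. The 500GB boundary is an assumption; the first partition must be at least as large as the existing RAID members (compare with parted /dev/sdb print):

  # first partition matches the 500GB drives, second takes the rest
  parted -s /dev/sdd mklabel gpt mkpart primary 1MiB 500GB mkpart primary 500GB 100%

  # watch the reshape progress
  watch cat /proc/mdstat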

The storage is 1TB in size. The additional 500GB on the 1TB drive are wasted.

Second expansion

Two additional 2TB drives have been acquired and will be added to the setup.

  1. partition the hard disks to contain both partitions of the 1TB drive plus an additional one taking up the remaining space of the drive (see the sketch after this list).
  2. add the first 2TB drive and reshape the md-device to RAID-6: mdadm /dev/md1 --add /dev/sde1 && mdadm --grow /dev/md1 --level=6 --raid-devices=4 (older mdadm versions may need a --backup-file for the level change)
  3. wait for the reshape to finish
  4. grow the md-device with the remaining drive: mdadm /dev/md1 --add /dev/sdf1 && mdadm --grow /dev/md1 --raid-devices=5
  5. create a new md-device (RAID-5): mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/sdd2 /dev/sde2 /dev/sdf2
  6. create a new md-device (RAID-1): mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sde3 /dev/sdf3
  7. wait for the reshape from step 4 to finish
  8. grow physical-volume: pvresize /dev/md1
  9. create new physical-volume: pvcreate /dev/md2
  10. create new physical-volume: pvcreate /dev/md3
  11. grow volume-group: vgextend storage /dev/md2 /dev/md3
  12. grow the logical-volume: lvresize -l +100%FREE /dev/storage/Volume1
  13. grow the filesystem: btrfs filesystem resize max /mnt/storage
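
A sketch of step 1, cloning the 1TB drive's partition table onto the 2TB drives and appending a third partition. The sfdisk dump/restore form and the 1000GB start offset are assumptions; verify the offsets with parted print first:

  # copy sdd's two partitions to both 2TB drives
  # (note: this clones partition UUIDs as well)
  sfdisk -d /dev/sdd | sfdisk /dev/sde
  sfdisk -d /dev/sdd | sfdisk /dev/sdf

  # add a third partition on each drive from the remaining space
  parted -s /dev/sde mkpart primary 1000GB 100%
  parted -s /dev/sdf mkpart primary 1000GB 100%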

The storage is 3.5TB in size; the 500GB lost in the first expansion has been reclaimed.
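
To confirm the final layout, the usual status commands work; a short sketch (the mount point is again an example):

  cat /proc/mdstat                     # all three md sets and their levels
  mdadm --detail /dev/md1              # per-array detail
  pvs; vgs; lvs                        # LVM pool and volume sizes
  btrfs filesystem usage /mnt/storage  # filesystem-level usage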