Software RAID on Ubuntu Linux


I recently purchased myself a HP ProLiant MicroServer with a stack of Western Digital Red hard drives with the intention of building a media server. After stumbling through a few tutorials that didn’t work very well, a colleague took pity on me and gave me some really good advice on how to get things running – so I figured a post on how to build a software-based RAID using Ubuntu Linux might be a reasonably useful thing. 🙂


What you’ll need..:

  • Server hardware and dedicated hard drives for the RAID array
    (shouldn’t really matter what specific hardware gets used)
  • Ubuntu Server 13.04 or later
  • Patience


First up, install the server edition of Ubuntu Linux. All the default install options are probably fine; there are enough guides on the Internets to cover setting this up if you get lost.

Once Linux is up and running and all of the hard drives have been detected appropriately (use lsblk to confirm device names), use parted to configure the partition table and create a partition on each disk..:

sudo parted -a optimal
(parted) select /dev/sda
(parted) mklabel gpt
(parted) mkpart
Partition name?  []? raid_d1
File system type?  [ext4]? ext4
Start? 1MiB
End? 3TiB

Rinse and repeat for each drive – you’ll need to change the device being selected each time (use the output from lsblk as a guide).

For the mkpart section, the question ‘Partition name?’ is mostly irrelevant (I went with “raid_d1”, “raid_d2” etc) .. and for the question ‘End?’, use the advertised size of the drive (in my case, 3 Terabytes).

In hindsight, because I’m using the full drive for the array (vs. some for the array and some for other partitions), creating partitions via parted > mkpart isn’t strictly necessary: the mdadm command below builds the array from the whole disks (/dev/sda and friends), which clobbers those partition tables anyway. If you’d rather build from partitions, point mdadm at /dev/sda1, /dev/sdb1 and so on instead. It doesn’t take long though, so I guess it doesn’t hurt. 😉
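If you do want the partitions, the interactive session above can also be scripted with parted’s non-interactive mode. A sketch, assuming four 3 TB drives at /dev/sda through /dev/sdd (check lsblk first; the DRY_RUN switch is something I’ve added for safety, not part of parted):

```shell
# Safety switch: with DRY_RUN=echo the loop only prints the commands.
# Set DRY_RUN= (empty) once the device names are confirmed correct.
DRY_RUN=echo
for dev in sda sdb sdc sdd; do
    $DRY_RUN sudo parted -s -a optimal "/dev/$dev" \
        mklabel gpt mkpart "raid_${dev}" ext4 1MiB 3TiB
done
```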

Time to build the array (this is where having patience comes in)..:

sudo mdadm --create /dev/md0 --chunk=256 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd

My colleague suggested a chunk size of 256k (vs. the default of 512k) for better performance. Other than that, the command should make plenty of sense: you’re creating a RAID 5 array at /dev/md0 with 4 devices (as listed at the end of the command).
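For the maths-inclined: RAID 5 spends one drive’s worth of space on parity, so with four 3 TB drives the usable capacity works out like this (plain shell arithmetic, nothing array-specific):

```shell
# RAID 5 usable capacity = (number of drives - 1) x drive size.
drives=4
size_tb=3
echo "$(( (drives - 1) * size_tb )) TB usable"   # prints "9 TB usable"
```

The actual figure you see later will be a little lower once the kernel reports it in TiB and the filesystem takes its share.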

If it complains that mdadm isn’t installed..:

sudo aptitude install mdadm

This will take ages .. ~18 hours for me. Monitor the progress with..:

watch -n5 cat /proc/mdstat

Once the array has completed building, you need to..:

  1. Update /etc/mdadm/mdadm.conf to list the hard drives that form part of the array (DON’T list the array device itself)
  2. Update initrd
  3. Reboot

I customised the following two lines in mdadm.conf (no need to change anything else from defaults)..:

DEVICE /dev/sda /dev/sdb /dev/sdc /dev/sdd
MAILADDR some@email.address


sudo update-initramfs -u
sudo reboot

After the reboot, your RAID array should turn up at /dev/md0. Confirm with the following..:

sudo mdadm --detail /dev/md0

If it’s not there, try..:

sudo mdadm --detail /dev/md127

If the array turns up at /dev/md127, it could mean that you have problems with mdadm.conf – see this Ubuntu Forums thread for further reading.
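One common general fix for the /dev/md127 situation (a standard mdadm approach, not something from the original thread, so treat it as a starting point) is to let mdadm write the ARRAY definition itself and then rebuild the initramfs:

```shell
# Append the canonical ARRAY line for the running array to mdadm.conf,
# then rebuild the initramfs so the array assembles as /dev/md0 at boot.
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
sudo reboot
```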

Now that the array is configured, it’s time to format it. Choice of filesystem is up to you, but I chose XFS (it plays nice with RAID and large files). First, install the XFS userspace tools, as they’re not part of the standard Ubuntu Server image, then create the filesystem..:

sudo aptitude install xfsprogs
sudo mkfs.xfs -L data /dev/md0
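mkfs.xfs normally reads the stripe geometry straight from the md device, so the plain command above is fine. If you want to be explicit, the stripe unit (su) matches the chunk size and the stripe width (sw) is the number of data disks, i.e. raid-devices minus one for RAID 5. A sketch with the values from this build:

```shell
# su = chunk size (256k), sw = data disks = 4 drives - 1 parity = 3.
sudo mkfs.xfs -L data -d su=256k,sw=$(( 4 - 1 )) /dev/md0
```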

Finally, get the UUID of the file system and add it to /etc/fstab. Use blkid to find the UUID that belongs to /dev/md0..:

sudo blkid
/dev/md0: LABEL="data" UUID="3d3cf1c1-6015-4b5d-ac08-e38832fa29d6" TYPE="xfs"

Now, add that to /etc/fstab (I’m using /data as my mount point, but whatever works best for you)..:

# RAID array
UUID=3d3cf1c1-6015-4b5d-ac08-e38832fa29d6 /data xfs defaults 0 0
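Before rebooting, it’s worth checking that the fstab entry mounts cleanly (this assumes the /data mount point from the example above):

```shell
sudo mkdir -p /data   # make sure the mount point exists
sudo mount -a         # any error here means fix fstab before rebooting
df -h /data           # confirm the array is mounted
```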

Reboot once more for good measure. Hopefully you get something like this as your df -h output..:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sde1        28G  1.4G   25G   6% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            929M  8.0K  929M   1% /dev
tmpfs           188M  284K  188M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            939M  4.0K  939M   1% /run/shm
none            100M     0  100M   0% /run/user
/dev/md0        8.2T   15G  8.2T   1% /data