I’ve been running a copy of OpenMediaVault (OMV) on an old HP ProLiant N40 MicroServer with 4 x 2TB Seagate Green drives in a RAID 5 configuration. The boot drive was an old 4GB flash drive plugged directly into the motherboard.
Initial Impressions
Installation and setup were a breeze; I found it all pretty intuitive, and I enjoyed the relative ease of adding new functionality through plugins. I transferred all my data to the OMV box and set up shares for easy access. I then ran into a hiccup when one of the Seagate drives failed. The first time I attempted a fix I got stuck and put it aside for a month or more.
RAID 5 Fix
Then last weekend I sat down, determined to see if I could get the RAID back up and running. I removed the faulty 2TB drive and installed a new 2TB WD Green.
Because the RAID was down to 3 drives, and therefore had no redundancy, OMV would not start it on boot.
You need to be at the CLI (Command Line Interface) with root access to run these commands. Run whatever is inside the quotes, excluding the quotes.
Type "su" into the CLI and enter the root password when prompted; you should now have root access.
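If you'd rather not switch accounts with su, and your user is set up for sudo, a root shell can also be opened like this (just an alternative, not something OMV requires):

sudo -i    # open an interactive root shell instead of using su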
I ran "cat /proc/mdstat", which showed that I had an inactive array named "md0" with 3 drives, plus an unused device.
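For reference, the output for an inactive array looks roughly like the snippet below; the device names and block counts are placeholders for a 3 x 2TB setup, so expect yours to differ:

Personalities : [raid5]
md0 : inactive sdb[0](S) sdc[1](S) sdd[2](S)
      5860531200 blocks

unused devices: <none>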
I then forced the md0 array to start with "mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd", where sdb, sdc & sdd were the 3 original drives. I checked the array status with "cat /proc/mdstat", which showed it was active. I then added the new drive to the array with "mdadm --add /dev/md0 /dev/sda".
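Put together, the recovery boils down to these four commands (the device names sda–sdd and the array name md0 are from my box and will almost certainly differ on yours):

cat /proc/mdstat                                               # check the array name and which member drives are visible
mdadm --assemble --force /dev/md0 /dev/sdb /dev/sdc /dev/sdd   # force-start the degraded array with the 3 surviving drives
mdadm --add /dev/md0 /dev/sda                                  # add the replacement drive; the rebuild starts automatically
cat /proc/mdstat                                               # confirm the array is active and recovering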
Then "cat /proc/mdstat" showed that the array was recovering. A few hours later my array was back up and running; I just had to start/mount it from the WebGUI.
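While it rebuilds, "cat /proc/mdstat" shows a progress line something like the below (the numbers here are purely illustrative):

md0 : active raid5 sda[4] sdb[0] sdc[1] sdd[2]
      5860531200 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
      [==>..................]  recovery = 12.3% (240000000/1953510400) finish=210min speed=135000K/sec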
Other useful commands
mdadm --detail /dev/md0
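A few other commands I keep coming back to; these are standard mdadm/Linux tools, nothing OMV-specific:

watch cat /proc/mdstat        # live view of rebuild progress
mdadm --examine /dev/sdb      # per-disk superblock info (array UUID, level, this disk's role)
mdadm --detail --scan         # one-line array summary, handy for /etc/mdadm/mdadm.conf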
I’ve since added an additional drive and converted the array to RAID 6, following this guide.
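I won't reproduce the guide here, but the heart of the conversion is mdadm's grow mode. Roughly, once the fifth drive is in, it's something along these lines (the device name and backup-file path are just examples, and the reshape runs for many hours):

mdadm --add /dev/md0 /dev/sde                                                         # add the new drive as a spare first
mdadm --grow /dev/md0 --level=6 --raid-devices=5 --backup-file=/root/md0-reshape.bak  # reshape RAID 5 -> RAID 6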
And this to re-install OMV: "apt-get install --reinstall openmediavault".