RAID 0 becomes inactive in RAID 1+0 when one drive is removed

dr0pz
Posts: 3
Joined: 2018/09/19 02:16:41

RAID 0 becomes inactive in RAID 1+0 when one drive is removed

Post by dr0pz » 2018/09/19 02:36:04

Hi guys, I need help with RAID 0 becoming inactive in a RAID 1+0 setup when one drive is removed.
We use CentOS 6.5 booting from a USB drive, and the RAID 1+0 array is mounted as /data, which holds VirtualBox images and files.
We formatted the drives using GParted and created the RAID using mdadm.
This is software RAID.
So the setup is:
/dev/sda and /dev/sdb = /dev/md1 (raid 1)
/dev/sdc and /dev/sdd = /dev/md2 (raid 1)
then we set them up as raid 0
/dev/md1 and /dev/md2 = /dev/md0 (raid 0)
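
For reference, we created the arrays with commands roughly like these (from memory, so the exact metadata and chunk options may have differed):

# Create the two RAID 1 mirrors
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
# Stripe the two mirrors together as RAID 0
mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=512 /dev/md1 /dev/md2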

I remember testing this a few years back: when /dev/sda was removed, the RAID 0 stayed active and simply reported that a drive had failed.
The same held for sdb, sdc and sdd.
But when we tested it again recently, we found that the RAID 0 becomes inactive when any one of the four drives is removed.
So this is basically a change in behavior. I tested it with both the old kernel version (the one we first tested the RAID 1+0 on) and the newer version installed via yum update, and the result is the same.

Does anyone know why this happens?

Thanks!

Whoever
Posts: 1357
Joined: 2013/09/06 03:12:10

Re: RAID 0 becomes inactive in RAID 1+0 when one drive is removed

Post by Whoever » 2018/09/19 05:54:05

Are you absolutely sure that you don't already have a failed drive?
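
Check all four members before you pull anything, e.g. (the smartctl check assumes smartmontools is installed):

cat /proc/mdstat
mdadm --detail /dev/md0 /dev/md1 /dev/md2
# SMART health check on each member disk
smartctl -H /dev/sda    # repeat for sdb, sdc and sdd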

dr0pz
Posts: 3
Joined: 2018/09/19 02:16:41

Re: RAID 0 becomes inactive in RAID 1+0 when one drive is removed

Post by dr0pz » 2018/09/19 08:28:58

Hi, that is exactly what we are simulating: a failed drive. We shut the server down, pull one drive out, then boot the system.
With all drives present, cat /proc/mdstat produces something like this:
Personalities : [raid1] [raid0]
md125 : active raid0 md127[0] md126[1]
1953262592 blocks super 1.2 512k chunks

md126 : active raid1 sdc[2] sdd[1]
976631360 blocks super 1.2 [2/2] [UU]

md127 : active raid1 sdb[1] sda[2]
976631360 blocks super 1.2 [2/2] [UU]

But when we take one drive out (sdc in this test), it becomes:

Personalities : [raid1] [raid0]
md125 : inactive raid0
1953262592 blocks super 1.2 512k chunks

md126 : active raid1 sdd[1]
976631360 blocks super 1.2 [2/1] [U_]

md127 : active raid1 sdb[1] sda[2]
976631360 blocks super 1.2 [2/2] [UU]

In theory the RAID 0 should still work, since the RAID 1 array lost only one of its two drives and is still active (just degraded).
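
In case it helps anyone reproduce this: the inactive stripe can usually be brought back by hand with something like the following (device names as in the mdstat output above):

# Try to start the partially-assembled stripe
mdadm --run /dev/md125
# If that fails, stop it and re-assemble it from the two mirrors
mdadm --stop /dev/md125
mdadm --assemble /dev/md125 /dev/md126 /dev/md127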

Thanks!

TrevorH
Site Admin
Posts: 33202
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: RAID 0 becomes inactive in RAID 1+0 when one drive is removed

Post by TrevorH » 2018/09/19 09:33:38

dr0pz wrote: We use CentOS 6.5
Don't. It's 5 years old and you should be on 6.10 not 6.5.
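
A plain in-place update should take you the rest of the way:

yum clean all
yum update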

dr0pz
Posts: 3
Joined: 2018/09/19 02:16:41

Re: RAID 0 becomes inactive in RAID 1+0 when one drive is removed

Post by dr0pz » 2018/09/20 01:32:49

Hi, I have also tried this on CentOS 7 and CentOS 6.10... the behavior is still the same.
On CentOS 7 the OS itself also breaks when one drive is pulled out, which is weird because the OS disk is outside of the RAID array.
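
If anyone wants to dig further, these are the sort of checks we can run after booting with a drive pulled:

cat /proc/mdstat
mdadm --detail /dev/md125
# Inspect the superblocks of the two mirror devices
mdadm --examine /dev/md126 /dev/md127
# Kernel messages about md assembly at boot
dmesg | grep -i 'md:'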
