Is my fake RAID 1 correct?

Issues related to applications and software problems
Post Reply
yngens
Posts: 29
Joined: 2010/10/24 02:02:35

Is my fake RAID 1 correct?

Post by yngens » 2014/11/15 20:16:18

Hi All,

I'm trying to correctly configure software RAID 1 (two mirrored hard drives) on a basic 1U SuperMicro server. I am not very experienced with this, so please bear with my questions.

I selected RAID in the BIOS settings, rebooted, pressed CTRL+I when prompted, configured the two identical hard drives in the system as a mirrored RAID 1, and proceeded with the CentOS 7 installation. Everything went smoothly and CentOS 7 is up and running fine. The only thing I'm unsure about is whether my RAID is set up correctly, because running

Code: Select all

cat /proc/mdstat
shows md127 as inactive:

Code: Select all

cat /proc/mdstat
Personalities : [raid1] 
md126 : active raid1 sda[1] sdb[0]
      1953511424 blocks super external:/md127/0 [2/2] [UU]
      
md127 : inactive sda[1](S) sdb[0](S)
      6056 blocks super external:imsm

unused devices: <none>
I am not sure if it is OK for the system to show md126 and md127 instead of md0 and md1, but my main concern is not the naming; it is the fact that md127 shows as inactive. Here is some more relevant data:

Code: Select all

mdadm --detail /dev/md126
/dev/md126:
      Container : /dev/md/imsm0, member 0
     Raid Level : raid1
     Array Size : 1953511424 (1863.01 GiB 2000.40 GB)
  Used Dev Size : 1953511556 (1863.01 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2

          State : clean 
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0


           UUID : 8a5f08ea:6ebd66e4:4cdcee75:4b329813
    Number   Major   Minor   RaidDevice State
       1       8        0        0      active sync   /dev/sda
       0       8       16        1      active sync   /dev/sdb

Code: Select all

mdadm --detail /dev/md127
/dev/md127:
        Version : imsm
     Raid Level : container
  Total Devices : 2

Working Devices : 2


           UUID : 61b9ae2f:a9592a85:bf49b2cf:284a01e8
  Member Arrays : /dev/md/Volume0

    Number   Major   Minor   RaidDevice

       0       8       16        -        /dev/sdb
       1       8        0        -        /dev/sda

Code: Select all

fdisk -l

Disk /dev/sdb: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000a4945

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1   *        2048     1026047      512000   83  Linux
/dev/sdb2         1026048  3907022847  1952998400   8e  Linux LVM

Disk /dev/sda: 2000.4 GB, 2000398934016 bytes, 3907029168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000a4945

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     1026047      512000   83  Linux
/dev/sda2         1026048  3907022847  1952998400   8e  Linux LVM

Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes, 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000272e5

   Device Boot      Start         End      Blocks   Id  System
/dev/sdc1            2048  1953520064   976759008+  83  Linux

Disk /dev/md126: 2000.4 GB, 2000395698176 bytes, 3907022848 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk label type: dos
Disk identifier: 0x000a4945

      Device Boot      Start         End      Blocks   Id  System
/dev/md126p1   *        2048     1026047      512000   83  Linux
/dev/md126p2         1026048  3907022847  1952998400   8e  Linux LVM

Disk /dev/mapper/centos-swap: 16.9 GB, 16919822336 bytes, 33046528 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/centos-home: 1929.3 GB, 1929262399488 bytes, 3768090624 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
/dev/sdc is just a storage disk, so it is not relevant to the issue here.

What else? I'm not sure if information about the drives is needed, but here it is just in case:

Code: Select all

mdadm --examine /dev/sda
/dev/sda:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : 8b66d383
         Family : 8b66d383
     Generation : 00bece83
     Attributes : All supported
           UUID : 61b9ae2f:a9592a85:bf49b2cf:284a01e8
       Checksum : 8008d559 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk00 Serial : W240JN13
          State : active
             Id : 00000000
    Usable Size : 3907023112 (1863.01 GiB 2000.40 GB)

[Volume0]:
           UUID : 8a5f08ea:6ebd66e4:4cdcee75:4b329813
     RAID Level : 1
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 0
     Array Size : 3907022848 (1863.01 GiB 2000.40 GB)
   Per Dev Size : 3907023112 (1863.01 GiB 2000.40 GB)
  Sector Offset : 0
    Num Stripes : 15261808
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk01 Serial : Z2407WTD
          State : active
             Id : 00000001
    Usable Size : 3907023112 (1863.01 GiB 2000.40 GB)

Code: Select all

mdadm --examine /dev/sdb
/dev/sdb:
          Magic : Intel Raid ISM Cfg Sig.
        Version : 1.1.00
    Orig Family : 8b66d383
         Family : 8b66d383
     Generation : 00bece81
     Attributes : All supported
           UUID : 61b9ae2f:a9592a85:bf49b2cf:284a01e8
       Checksum : 8008d557 correct
    MPB Sectors : 1
          Disks : 2
   RAID Devices : 1

  Disk01 Serial : Z2407WTD
          State : active
             Id : 00000001
    Usable Size : 3907023112 (1863.01 GiB 2000.40 GB)

[Volume0]:
           UUID : 8a5f08ea:6ebd66e4:4cdcee75:4b329813
     RAID Level : 1
        Members : 2
          Slots : [UU]
    Failed disk : none
      This Slot : 1
     Array Size : 3907022848 (1863.01 GiB 2000.40 GB)
   Per Dev Size : 3907023112 (1863.01 GiB 2000.40 GB)
  Sector Offset : 0
    Num Stripes : 15261808
     Chunk Size : 64 KiB
       Reserved : 0
  Migrate State : idle
      Map State : normal
    Dirty State : clean

  Disk00 Serial : W240JN13
          State : active
             Id : 00000000
    Usable Size : 3907023112 (1863.01 GiB 2000.40 GB)
I tried to stop and reassemble the array as advised at http://www.linuxquestions.org/questions ... ost4689697, but running

Code: Select all

mdadm --stop /dev/md127
gives:

Code: Select all

mdadm --stop /dev/md127
mdadm: Cannot stop container /dev/md127: member md126 still active
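
If I understand correctly, the container can only be stopped after its member array md126 has been stopped first, and since my root filesystem lives on md126 I don't see how to do that on the running system. I assume something like this would only work from rescue media:

Code: Select all

# assumption: booted from rescue media so nothing on md126 is mounted;
# the member array has to be stopped before its IMSM container
mdadm --stop /dev/md126
mdadm --stop /dev/md127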
I would much appreciate it if someone could help me set this right. Thanks!

gerald_clark
Posts: 10642
Joined: 2005/08/05 15:19:54
Location: Northern Illinois, USA

Re: Is my fake RAID 1 correct?

Post by gerald_clark » 2014/11/15 20:22:19

We do not support BIOS fakeraid. You need to turn off the BIOS RAID and remove the RAID metadata from the drives before using software RAID.
You cannot boot from software RAID 0, only RAID 1.
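
If you are reinstalling anyway, the IMSM metadata can be cleared from the member disks along these lines (a sketch only; it is destructive, the arrays must be stopped first, and the device names are taken from the output above, so adjust them to your setup):

Code: Select all

# WARNING: only run on disks that are about to be wiped and reinstalled;
# with the arrays stopped, this clears the Intel IMSM metadata that the
# fakeraid option ROM wrote to each member disk
mdadm --zero-superblock /dev/sda
mdadm --zero-superblock /dev/sdb
# wipefs -a /dev/sdX is an alternative that removes all RAID/filesystem signatures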

yngens
Posts: 29
Joined: 2010/10/24 02:02:35

Re: Is my fake RAID 1 correct?

Post by yngens » 2014/11/16 05:42:42

gerald_clark wrote:We do not support BIOS fakeraid. You need to turn off the BIOS RAID and remove the RAID metadata from the drives before using software RAID.
You cannot boot from software RAID 0, only RAID 1.
Oh, that's not good news for me, as I have several production servers running like this. How bad is it, and does this need to be redone urgently? Is it possible to fix this on production servers without risking data loss?
Last edited by yngens on 2014/11/16 17:49:48, edited 1 time in total.

gerald_clark
Posts: 10642
Joined: 2005/08/05 15:19:54
Location: Northern Illinois, USA

Re: Is my fake RAID 1 correct?

Post by gerald_clark » 2014/11/16 05:53:34

mdadm is showing two arrays using the same partitions.
Your RAID partitions are not set as type 'fd'.

If the BIOS fakeraid were actually working like a hardware RAID, you would not see the component devices.
Stick with JBOD and mdadm.
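
A plain mdadm mirror built from partitions would look roughly like this (a sketch only, not tested here; it destroys existing data, and the partition names are assumptions based on the layout shown above):

Code: Select all

# partition both disks identically and set the RAID partitions to
# type 'fd' (Linux raid autodetect) in fdisk, then mirror them
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
# record the array so it is assembled at boot
mdadm --detail --scan >> /etc/mdadm.conf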

yngens
Posts: 29
Joined: 2010/10/24 02:02:35

Re: Is my fake RAID 1 correct?

Post by yngens » 2014/11/16 17:51:15

gerald_clark wrote:mdadm is showing two arrays using the same partitions.
Your RAID partitions are not set as type 'fd'.

If the BIOS fakeraid were actually working like a hardware RAID, you would not see the component devices.
Stick with JBOD and mdadm.
Thanks, Gerald, for your input, but it didn't answer my question. Is there any way to fix this kind of setup on production servers without risking data loss?

Post Reply