[SOLVED] Not possible with RAID 1 (system) and RAID 6 (data)?

Amatøren
Posts: 40
Joined: 2016/03/10 15:46:19

[SOLVED] Not possible with RAID 1 (system) and RAID 6 (data)?

Post by Amatøren » 2019/08/23 07:28:24

Hi.

I am working with a new server.

From the GUI installer I set up RAID 1 on two SSD disks, and that worked fine.

After that I set up RAID 6 on 8 disks (8 TB each); both cat /proc/mdstat and mdadm -D /dev/md6 showed the array as OK.

After a reboot, mdadm -D /dev/md6 reports only RAID 0 with 4 disks, and the array is inactive!


Can anyone help?
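(For reference: the exact creation command is not shown in this thread, but for an 8-disk RAID 6 it would be along these lines, using the /dev/sd[c-j]1 member partitions that appear in the output further down.)

Code: Select all

# Sketch only -- the original command was not posted.
# Assumes the eight data disks are partitioned as /dev/sd[c-j]1,
# matching the lsblk output later in this thread.
mdadm --create /dev/md6 --level=6 --raid-devices=8 /dev/sd[c-j]1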
Last edited by Amatøren on 2019/08/26 14:30:43, edited 1 time in total.

maikcat
Posts: 7
Joined: 2019/01/11 13:01:58

Re: Not possible with RAID 1 (system) and RAID 6 (data)?

Post by maikcat » 2019/08/23 09:07:36

Can you post the contents of /etc/mdadm.conf, please?
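For context: mdadm.conf normally carries one ARRAY line per array, and an array missing from it may not be assembled correctly at boot. A sketch of what such a line looks like, using the UUID from the mdadm -D output later in this thread:

Code: Select all

# Example only -- the real file was never posted in this thread.
# The UUID is taken from the mdadm -D /dev/md6 output below.
ARRAY /dev/md6 metadata=1.2 name=TV_server:6 UUID=44b2ed95:f56185f2:a3558345:fe5f3bfd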

Michael.

Amatøren
Posts: 40
Joined: 2016/03/10 15:46:19

Re: Not possible with RAID 1 (system) and RAID 6 (data)?

Post by Amatøren » 2019/08/23 10:33:45

Thanks for the reply.

I can't just now. I did some searching on Google, and it might be related to changing the hostname. I tried changing it back, but that didn't fix the problem, so I have started building a new RAID 6 array with the correct hostname. Estimated build time: 1.25 days...

I will check and test on Monday and post an update then.
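(A note on the hostname theory: mdadm records a "homehost" in each member's superblock; it shows up as the Name field in mdadm -D output. Rather than rebuilding the whole array, it should normally be possible to rewrite that record at assembly time. A sketch, assuming the /dev/sd[c-j]1 member partitions shown later in this thread:)

Code: Select all

# Inspect the name/homehost recorded in one member's superblock:
mdadm --examine /dev/sdc1

# Stop the misassembled array, then reassemble it while updating
# the recorded homehost to the machine's current hostname:
mdadm --stop /dev/md6
mdadm --assemble /dev/md6 --update=homehost /dev/sd[c-j]1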

Amatøren
Posts: 40
Joined: 2016/03/10 15:46:19

Re: Not possible with RAID 1 (system) and RAID 6 (data)?

Post by Amatøren » 2019/08/26 10:54:27

I re-ran the creation of the RAID 6 array on 8 drives.

I have not mounted the RAID in fstab yet.

Here is some output:

cat /proc/mdstat

Code: Select all

Personalities : [raid1] [raid6] [raid5] [raid4] 
md6 : active raid6 sdi1[6] sdh1[5] sdj1[7] sdd1[1] sdf1[3] sde1[2] sdc1[0] sdg1[4]
      46883358720 blocks super 1.2 level 6, 512k chunk, algorithm 2 [8/8] [UUUUUUUU]
      bitmap: 0/59 pages [0KB], 65536KB chunk

md126 : active raid1 sdb2[2] sda2[0]
      999424 blocks super 1.2 [2/2] [UU]
      bitmap: 0/1 pages [0KB], 65536KB chunk

md127 : active raid1 sdb3[2] sda3[0]
      467512320 blocks super 1.2 [2/2] [UU]
      bitmap: 0/4 pages [0KB], 65536KB chunk

unused devices: <none>
mdadm -D /dev/md6

Code: Select all

/dev/md6:
           Version : 1.2
     Creation Time : Fri Aug 23 11:05:29 2019
        Raid Level : raid6
        Array Size : 46883358720 (44711.46 GiB 48008.56 GB)
     Used Dev Size : 7813893120 (7451.91 GiB 8001.43 GB)
      Raid Devices : 8
     Total Devices : 8
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon Aug 26 12:30:37 2019
             State : clean 
    Active Devices : 8
   Working Devices : 8
    Failed Devices : 0
     Spare Devices : 0

            Layout : left-symmetric
        Chunk Size : 512K

Consistency Policy : bitmap

              Name : TV_server:6  (local to host TV_server)
              UUID : 44b2ed95:f56185f2:a3558345:fe5f3bfd
            Events : 15146

    Number   Major   Minor   RaidDevice State
       0       8       33        0      active sync   /dev/sdc1
       1       8       49        1      active sync   /dev/sdd1
       2       8       65        2      active sync   /dev/sde1
       3       8       81        3      active sync   /dev/sdf1
       4       8       97        4      active sync   /dev/sdg1
       5       8      113        5      active sync   /dev/sdh1
       6       8      129        6      active sync   /dev/sdi1
       7       8      145        7      active sync   /dev/sdj1
lsblk

Code: Select all

NAME              MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                 8:0    0 447,1G  0 disk  
├─sda1              8:1    0   200M  0 part  
├─sda2              8:2    0   977M  0 part  
│ └─md126           9:126  0   976M  0 raid1 /boot
└─sda3              8:3    0   446G  0 part  
  └─md127           9:127  0 445,9G  0 raid1 
    ├─centos-root 253:0    0 430,1G  0 lvm   /
    └─centos-swap 253:1    0  15,7G  0 lvm   [SWAP]
sdb                 8:16   0 447,1G  0 disk  
├─sdb1              8:17   0   200M  0 part  /boot/efi
├─sdb2              8:18   0   977M  0 part  
│ └─md126           9:126  0   976M  0 raid1 /boot
└─sdb3              8:19   0   446G  0 part  
  └─md127           9:127  0 445,9G  0 raid1 
    ├─centos-root 253:0    0 430,1G  0 lvm   /
    └─centos-swap 253:1    0  15,7G  0 lvm   [SWAP]
sdc                 8:32   0   7,3T  0 disk  
└─sdc1              8:33   0   7,3T  0 part  
  └─md6             9:6    0  43,7T  0 raid6 
sdd                 8:48   0   7,3T  0 disk  
└─sdd1              8:49   0   7,3T  0 part  
  └─md6             9:6    0  43,7T  0 raid6 
sde                 8:64   0   7,3T  0 disk  
└─sde1              8:65   0   7,3T  0 part  
  └─md6             9:6    0  43,7T  0 raid6 
sdf                 8:80   0   7,3T  0 disk  
└─sdf1              8:81   0   7,3T  0 part  
  └─md6             9:6    0  43,7T  0 raid6 
sdg                 8:96   0   7,3T  0 disk  
└─sdg1              8:97   0   7,3T  0 part  
  └─md6             9:6    0  43,7T  0 raid6 
sdh                 8:112  0   7,3T  0 disk  
└─sdh1              8:113  0   7,3T  0 part  
  └─md6             9:6    0  43,7T  0 raid6 
sdi                 8:128  0   7,3T  0 disk  
└─sdi1              8:129  0   7,3T  0 part  
  └─md6             9:6    0  43,7T  0 raid6 
sdj                 8:144  0   7,3T  0 disk  
└─sdj1              8:145  0   7,3T  0 part  
  └─md6             9:6    0  43,7T  0 raid6 

And lastly, why is df only showing 16 GB?

df -h /dev/md6

Code: Select all

Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         16G     0   16G   0% /dev

I need some help getting the big RAID 6 array mounted.

The information is the same after a reboot, so the RAID now keeps its status (RAID 6 with 8 disks) across reboots.
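(An aside on the 16 GB figure: when df is pointed at a block device that is not mounted, it reports the filesystem holding the device node itself, i.e. devtmpfs on /dev, hence the 16 GB. A minimal sketch of the remaining steps, assuming an ext4 filesystem made directly on the array and the /mnt/raid6 mount point used later in this thread:)

Code: Select all

mkfs.ext4 /dev/md6      # create a filesystem on the array (destroys existing data!)
blkid /dev/md6          # note the filesystem UUID for /etc/fstab
mkdir -p /mnt/raid6
mount /dev/md6 /mnt/raid6
df -h /mnt/raid6        # now reports the array's real size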

aks
Posts: 3073
Joined: 2014/09/20 11:22:14

Re: Not possible with RAID 1 (system) and RAID 6 (data)?

Post by aks » 2019/08/26 12:22:08

Have a look in dmesg and journalctl for error messages about activating the array.
You could also use the detail and (possibly) scan arguments to mdadm to get further clues (see https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm).
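A sketch of the commands meant here, with device names following the output earlier in the thread:

Code: Select all

# Boot-time messages about array assembly:
dmesg | grep -i raid
journalctl -b | grep -i mdadm

# Details of one array, and a one-line summary of every array found:
mdadm --detail /dev/md6
mdadm --detail --scan

# Examine the superblock on an individual member partition:
mdadm --examine /dev/sdc1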

Amatøren
Posts: 40
Joined: 2016/03/10 15:46:19

Re: Not possible with RAID 1 (system) and RAID 6 (data)?

Post by Amatøren » 2019/08/26 12:46:27

Thanks.

I ran parted again, formatted with mkfs.ext4, got the UUID, updated fstab, and after a reboot everything was OK.
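For reference, the fstab entry would be along these lines; <filesystem-uuid> is a placeholder, not the real value, and must be what blkid reports for /dev/md6:

Code: Select all

# Example /etc/fstab line -- <filesystem-uuid> is a placeholder.
UUID=<filesystem-uuid>  /mnt/raid6  ext4  defaults  0 0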

I have mounted the RAID on /mnt/raid6, and df shows 44T size, 20K used, 42T available, 1% use. All OK. :D

I don't know, but maybe I should have done a reboot earlier... :?

Amatøren
Posts: 40
Joined: 2016/03/10 15:46:19

Re: Not possible with RAID 1 (system) and RAID 6 (data)?

Post by Amatøren » 2019/08/26 12:52:06

df -h

Code: Select all

Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root  430G  3,8G  427G   1% /
devtmpfs                  16G     0   16G   0% /dev
tmpfs                     16G     0   16G   0% /dev/shm
tmpfs                     16G   11M   16G   1% /run
tmpfs                     16G     0   16G   0% /sys/fs/cgroup
/dev/md126               973M  222M  752M  23% /boot
/dev/sdb1                200M   12M  189M   6% /boot/efi
/dev/md6                  44T   20K   42T   1% /mnt/raid6
tmpfs                    3,2G  4,0K  3,2G   1% /run/user/42
tmpfs                    3,2G   20K  3,2G   1% /run/user/1000
:D
