Adding the new drives to the RAID array

This section covers the steps required to add the new drives to the RAID array.

Ensure that the set of eight (8) drives is properly installed according to the Installing the drives section.

Procedure performed by: Customer or field service

To add the newly installed drives to the RAID array, follow these steps:

  1. Create the RAID5 device for the eight drives:

    # /usr/share/tacp/lenovo/add_disks_lenovo.py

    Following is an example output:

    all disks are attached correctly to this controller.
    Raid device created /dev/md123
    Congratulations. You have successfully created the raid device
    Please run the following command to continue: mdadm --grow /dev/md/md50 --raid-device=2 --add /dev/md123
    Success

    The script provides a -verbose option that displays details of how the RAID5 device is created.
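
    If you want to confirm the new RAID5 device before continuing, you can inspect it with mdadm. This is only a quick sanity check; the device name (/dev/md123) comes from the example output above and may differ on your system:

    mdadm --detail /dev/md123

    Check that all eight drives appear as members, one of them as a spare, and that none is reported as faulty.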

  2. Grow the RAID50 device:

    mdadm --grow /dev/md/md50 --raid-device=2 --add /dev/md123

    Following is an example output:

    mdadm: level of /dev/md/md50 changed to raid4
    mdadm: added /dev/md123
    Note:

    This step is not scripted because it can take up to a few hours to complete, depending on how much data is stored in the RAID array.
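
    Because the reshape runs in the background, you can optionally keep a periodic view of its progress open in another terminal, for example (the 60-second interval is arbitrary):

    watch -n 60 cat /proc/mdstat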

  3. Check the RAID rebuild status:

    cat /proc/mdstat

    Following is an example output:

    Personalities : [raid1] [raid6] [raid5] [raid4] [raid0]
    
    md123 : active raid5 dm-22[7](S) dm-21[6] dm-20[5] dm-19[4] dm-18[3] dm-17[2] dm-16[1] dm-15[0]
      4657379328 blocks super 1.2 level 5, 16k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 1/6 pages [4KB], 65536KB chunk
    
    md124 : active raid4 md123[2] md125[0]
      4657247232 blocks super 1.2 level 4, 512k chunk, algorithm 5 [3/2] [U__]
      [>....................] reshape = 1.3% (62142976/4657247232) finish=424.9min speed=180210K/sec
    
    md125 : active raid5 dm-7[7](S) dm-6[6] dm-5[5] dm-4[4] dm-3[3] dm-2[2] dm-1[1] dm-0[0]
      4657379328 blocks super 1.2 level 5, 16k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 1/6 pages [4KB], 65536KB chunk
    
    md126 : active raid1 sda[1] sdb[0]
      118778880 blocks super external:/md127/0 [2/2] [UU]
    
    md127 : inactive sdb[1](S) sda[0](S)
      6306 blocks super external:imsm
    
    unused devices: <none>
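
    If you are only interested in the progress line, you can filter the output, for example:

    grep reshape /proc/mdstat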
  4. Check and calculate the completion time of the process.

    As shown in the example output from the previous step, the reshape process has an estimated completion time of 424.9 minutes (finish=424.9min), which is a little over seven (7) hours (424.9 / 60 = 7.08 hours).
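
    As an illustration, the same conversion can be done directly from /proc/mdstat with a one-liner such as the following (it simply extracts the finish= value and divides it by 60):

    awk -F'finish=' '/reshape/ {split($2, a, "min"); printf "about %.1f hours\n", a[1]/60}' /proc/mdstat

    For this example, the command prints about 7.1 hours.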

  5. Wait until the completion time is up (in this example, more than seven (7) hours), and then check that the process has indeed finished:

    cat /proc/mdstat

    Following is an example output:

    Personalities : [raid1] [raid6] [raid5] [raid4] [raid0]
    
    md123 : active raid5 dm-22[7](S) dm-21[6] dm-20[5] dm-19[4] dm-18[3] dm-17[2] dm-16[1] dm-15[0]
      4657379328 blocks super 1.2 level 5, 16k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 0/6 pages [0KB], 65536KB chunk
    
    md124 : active raid0 md123[2] md125[0]
      9314494464 blocks super 1.2 512k chunks
    
    md125 : active raid5 dm-7[7](S) dm-6[6] dm-5[5] dm-4[4] dm-3[3] dm-2[2] dm-1[1] dm-0[0]
      4657379328 blocks super 1.2 level 5, 16k chunk, algorithm 2 [7/7] [UUUUUUU]
      bitmap: 0/6 pages [0KB], 65536KB chunk
    
    md126 : active raid1 sda[1] sdb[0]
      118778880 blocks super external:/md127/0 [2/2] [UU]
    
    md127 : inactive sdb[1](S) sda[0](S)
      6306 blocks super external:imsm
    
    unused devices: <none>
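
    Instead of waiting for the estimated time and checking manually, one option is to let mdadm block until all resync and reshape activity on the device has finished:

    mdadm --wait /dev/md/md50

    The command returns when no rebuild, resync, or reshape is in progress on the array.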
  6. Calculate the new size of the RAID50 device:

    nvme list

    Following is an example output:

    Node             SN                     Namespace Usage                     Format         FW Rev
    ---------------- ------------------ ... --------- ------------------------- -------------- --------
    /dev/nvme0n1     S3HCNX0JC02194         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme10n1    S3HCNX0K100055         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme11n1    S3HCNX0K100061         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme12n1    S3HCNX0K100046         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme13n1    S3HCNX0K100092         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme14n1    S3HCNX0K100076         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme15n1    S3HCNX0K100015         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme1n1     S3HCNX0JC02193         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme2n1     S3HCNX0JC02233         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme3n1     S3HCNX0JC02191         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme4n1     S3HCNX0JC02189         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme5n1     S3HCNX0JC02198         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme6n1     S3HCNX0JC02188         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme7n1     S3HCNX0JC02195         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme8n1     S3HCNX0K100085         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    /dev/nvme9n1     S3HCNX0K100072         1         795.00 GB / 795.00 GB     512 B + 0 B    GPNA9B3Q
    

    In the example output, 16 drives are shown as part of the RAID50 device. Each drive has a capacity of 795 GB.

    The RAID setup groups every eight (8) drives into a RAID5 device; the RAID5 devices are then combined into a RAID0 device.

    Note:

    For each RAID5 device there is a spare drive.

    The total capacity of the RAID50 device is given by the following formula:

    total capacity = (size of a single drive) x (number of drives in the RAID5 configuration - 2) x (number of RAID5 devices)

    Replacing the terms in the formula with the values from our example, the total capacity of the RAID50 device is:

    total capacity = 795 GB x (8 - 2) x 2 = 9540 GB
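
    If you prefer to let the shell do the arithmetic, the same calculation for this example looks as follows (the values are taken from the example output above):

    echo $(( 795 * (8 - 2) * 2 ))   # total capacity in GB; prints 9540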

  7. Check the calculated size of the RAID50 device against the size shown by the following command:

    mdadm --detail /dev/md/md50

    Following is an example output:

    /dev/md/md50:
      Version : 1.2
      Creation Time : Thu Aug 16 19:35:39 2018
      Raid Level : raid0
      Array Size : 9314494464 (8882.99 GiB 9538.04 GB)
      Raid Devices : 2
      Total Devices : 2
      Persistence : Superblock is persistent
    
      Update Time : Fri Aug 17 04:23:27 2018
      State : clean
      Active Devices : 2
      Working Devices : 2
      Failed Devices : 0
      Spare Devices : 0
    
      Chunk Size : 512K
    
    Consistency Policy : none
    
      Name : any:md50
      UUID : d35a4763:b4b9b490:b85614db:6aeb696a
      Events : 2986
    
    Number Major Minor RaidDevice State
    0      9     125   0          active sync /dev/md/md5
    2      9     123   1          active sync /dev/md/md5_2

    The calculated size (9540 GB) closely matches the Array Size displayed in the example output: 9538.04 GB.
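
    The Array Size reported by mdadm is expressed in 1 KiB blocks; as an illustration, the 9538.04 GB figure can be reproduced from it with a quick calculation (bc is assumed to be available):

    echo "scale=2; 9314494464 * 1024 / 1000000000" | bc   # prints 9538.04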

  8. Check the RAID level:

    mdadm --detail /dev/md/md50

    In the example output from the previous step, the RAID level is 0.

    Note:

    If the RAID level is raid4, then change it to raid0:

    mdadm --grow --level 0 --raid-devices=2 /dev/md/md50

    mdadm: level of /dev/md/md50 changed to raid0
    Attention:
    • The new RAID0 array needs to be reshaped a second time. This can take up to several hours to complete, depending on the size of the array. For example, expanding from 16 TB to 32 TB takes 11-12 hours.

    • Ensure the Array Size has increased to the correct amount and that all the RAID5 devices are included. If not, stop and do not continue! You may need to reshape to RAID4 and then back to RAID0.

    • Each set of eight (8) drives has one (1) parity drive and one (1) spare drive.

    • The total Array Size for eight (8) drives is:

      total array size = 6 x (size of a single drive)

    • The total Array Size for 16 drives is (a quick arithmetic check using the example drive size follows this list):

      total array size = (6 + 6) x (size of a single drive)

    • When the reshape process has completed, the output looks similar to the following example:

      mdadm --detail /dev/md/md50

      /dev/md/md50:
        Version : 1.2
        Creation Time : Thu Aug 16 19:35:39 2018
        Raid Level : raid0
        Array Size : 9314494464 (8882.99 GiB 9538.04 GB)
        Raid Devices : 2
        Total Devices : 2
        Persistence : Superblock is persistent
      
        Update Time : Fri Aug 17 04:23:27 2018
        State : clean
        Active Devices : 2
        Working Devices : 2
        Failed Devices : 0
        Spare Devices : 0
      
        Chunk Size : 512K
      
      Consistency Policy : none
      
        Name : any:md50
        UUID : d35a4763:b4b9b490:b85614db:6aeb696a
        Events : 2986
      
      Number Major Minor RaidDevice State
      0      9     125   0          active sync /dev/md/md5
      2      9     123   1          active sync /dev/md/md5_2
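
    As a quick arithmetic check of the Array Size formulas above, using the 795 GB drives and the 16-drive configuration from this example:

    echo $(( (6 + 6) * 795 ))   # expected array size in GB; prints 9540

    The result, 9540 GB, should be close to the Array Size reported by mdadm (9538.04 GB in this example).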
  9. Check the size of the RAID50 device in 512-byte sectors:

    blockdev --getsz /dev/md/md50

    18628988928

    The first 2048 512-byte sectors are used to store the internal ThinkAgile CP VDO-specific metadata.
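
    To relate this sector count to the capacity calculated earlier, you can convert it to decimal gigabytes, for example (bc is assumed to be available):

    echo "scale=2; 18628988928 * 512 / 1000000000" | bc   # prints 9538.04

    This matches the Array Size reported by mdadm in the previous steps.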

The RAID50 device now includes the new storage drives.

After including the new drives in the RAID array, increase the size of the VDO data. See Increasing the size of the VDO.