Poor write performance of software RAID10 array of 8 SSD drives

by Evgeny Terekhov   Last Updated September 14, 2018 16:00

I have a server with a Supermicro X10DRW-i motherboard and a RAID10 array of 8 KINGSTON SKC400S SSDs; the OS is CentOS 6:

  # cat /proc/mdstat 
Personalities : [raid10] [raid1] 

md2 : active raid10 sdj3[9](S) sde3[4] sdi3[8] sdd3[3] sdg3[6] sdf3[5] sdh3[7] sdb3[1] sda3[0]
      3978989568 blocks super 1.1 512K chunks 2 near-copies [8/8] [UUUUUUUU]
      bitmap: 9/30 pages [36KB], 65536KB chunk

  # mdadm --detail /dev/md2                
    /dev/md2:
            Version : 1.1
      Creation Time : Wed Feb  8 18:35:14 2017
         Raid Level : raid10
         Array Size : 3978989568 (3794.66 GiB 4074.49 GB)
      Used Dev Size : 994747392 (948.67 GiB 1018.62 GB)
       Raid Devices : 8
      Total Devices : 9
        Persistence : Superblock is persistent

      Intent Bitmap : Internal

        Update Time : Fri Sep 14 15:19:51 2018
              State : active 
     Active Devices : 8
    Working Devices : 9
     Failed Devices : 0
      Spare Devices : 1

             Layout : near=2
         Chunk Size : 512K

               Name : ---------:2  (local to host -------)
               UUID : 8a945a7a:1d43dfb2:cdcf8665:ff607a1b
             Events : 601432

        Number   Major   Minor   RaidDevice State
           0       8        3        0      active sync set-A   /dev/sda3
           1       8       19        1      active sync set-B   /dev/sdb3
           8       8      131        2      active sync set-A   /dev/sdi3
           3       8       51        3      active sync set-B   /dev/sdd3
           4       8       67        4      active sync set-A   /dev/sde3
           5       8       83        5      active sync set-B   /dev/sdf3
           6       8       99        6      active sync set-A   /dev/sdg3
           7       8      115        7      active sync set-B   /dev/sdh3

           9       8      147        -      spare   /dev/sdj3

I've noticed that the write speed is just terrible, not even close to SSD performance:

# dd if=/dev/zero of=/tmp/testfile bs=1G count=1 oflag=dsync      
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 16.511 s, 65.0 MB/s
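
For comparison, I was also planning to repeat the test with direct I/O and to read from a single member disk directly. A rough sketch (the read is against the raw device, so it does not modify anything; the disk name is taken from the mdstat output above):

    # Sequential read from one member SSD, bypassing the page cache
    dd if=/dev/sda of=/dev/null bs=1M count=1024 iflag=direct

    # Same 1 GiB write to the same file, but using O_DIRECT plus a final fsync
    dd if=/dev/zero of=/tmp/testfile bs=1M count=1024 oflag=direct conv=fsync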

Read speed is fine, though:

# hdparm -tT /dev/md2

/dev/md2:
 Timing cached reads:   20240 MB in  1.99 seconds = 10154.24 MB/sec
 Timing buffered disk reads: 3478 MB in  3.00 seconds = 1158.61 MB/sec

After doing some troubleshooting, I found out that I probably messed up the storage configuration in the first place: the X10DRW-i uses the Intel C610 chipset, which has two separate SATA controllers, a 6-port SATA controller and a 4-port sSATA controller. So the disks in the array are split between different controllers, and I believe this is the root cause of the poor performance. The only fix I can think of is installing a PCIe SAS controller (probably an AOC-S3008L-L8E) and connecting the SSD drives to it.
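
For anyone who wants to double-check me, this is roughly how the member disks can be mapped to their controllers (a sketch; the sd* names are taken from the mdstat output above, and the PCI address is simply pulled out of each disk's sysfs path):

    # Print the PCI address of the controller behind each member disk
    for d in sda sdb sdd sde sdf sdg sdh sdi sdj; do
        printf '%-4s -> ' "$d"
        readlink -f "/sys/block/$d" | grep -oE '[0-9a-f]{4}:[0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f]' | tail -1
    done

    # Then match those addresses against the SATA/sSATA controllers
    lspci | grep -i sata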

So I would like to confirm the following:

Am I right about the root cause, or should I double-check something?

Will my solution work?

If I reconnect the drives to the new controller, will my RAID array and data survive? My research suggests they will, since the UUIDs of the partitions will remain the same, but I just want to be sure.
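
Before moving anything, I plan to record the relevant UUIDs so I can verify that mdadm assembles the same array afterwards. A rough sketch of the checks I have in mind:

    # Array UUID as mdadm reports it (what mdadm.conf refers to)
    mdadm --detail --scan

    # Per-member superblock: the Array UUID should be identical on every member
    mdadm --examine /dev/sda3 | grep -E 'Array UUID|Device UUID'

    # Filesystem UUID on top of the array, for checking fstab afterwards
    blkid /dev/md2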

Thanks to everyone in advance.


