SSD and speed

Issues related to hardware problems
linsoft
Posts: 3
Joined: 2019/04/29 09:56:46

SSD and speed

Post by linsoft » 2019/04/29 10:17:48

Hi everyone, I am posting this to see if anyone has had the same issue and overcome it.

First of all, I am not an expert with CentOS. I used this OS to accomplish a task, and it has become the best thing we have ever done as a company.

We are using HP DL380 Gen9 and Gen10 servers with Silver-specification CPUs, a fully loaded 40 TB SAS RAID 1+0 array, and 2 x SSD drives, as we were under the impression that the SSD drives could process our data more quickly.

To cut a long story short, it appears the SAS drives and the SSD drives are running at identical speeds in both real-world tests and hdparm tests. We also bought NVMe drives and tried those; again, the speeds were identical in CentOS 7.
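As a rough cross-check alongside hdparm (which mainly measures reads), a dd write test gives a quick like-for-like number per drive. A minimal sketch, assuming the file path sits on the mount being tested:

```shell
# Write 64 MiB and flush it to disk before dd reports, so the figure
# reflects the drive rather than the page cache.
TESTFILE=/tmp/ddtest.bin            # point this at the mount under test
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TESTFILE"
```

Run it once per filesystem and compare the MB/s figures dd prints on its last line.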

At this point we thought it was the RAID card, however we bought a very meaty gaming machine (lucky me) and that also showed that the speeds of the standard SATA SSD and NVMe drives were identical, although the faster CPU produced a 60% gain across the board.

So the question is: does CentOS 7 bottleneck the speed of SSD drives? We have tried all the usual suggestions for optimising the mount and have seen a few percent increase, but nothing really significant.
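For context, the mount-level optimisations usually suggested amount to something like the following hypothetical /etc/fstab entry (the UUID and mount point are placeholders):

```
# Hypothetical fstab entry for an SSD data volume; UUID and mount
# point are placeholders. noatime avoids a metadata write on every
# read; TRIM is better done periodically via the fstrim.timer unit
# than with the discard mount option.
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /data  ext4  defaults,noatime  0 2
```

As the poster found, these tend to be worth a few percent at most, not an order of magnitude.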

For background, I used a standard set of four partitions for the SAS drives and mounted the SSD as ext4. Changing the formatting or mounting method of the SSD makes no difference in our real-world tests.

/boot (ext4)
/boot/efi
swap
/ (XFS)

Our system uses a lot of disk I/O, probably like no other system that uses CentOS, so suggestions on how we can increase the speed would be really welcome. At the moment we will probably upgrade the CPUs as an easy upgrade route.

User avatar
TrevorH
Forum Moderator
Posts: 25847
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: SSD and speed

Post by TrevorH » 2019/04/29 12:04:19

No.

It would appear that you have some sort of setup problem, though since you post no numbers it's tricky to tell that you have a problem at all.

There are some provisos on that. A decent RAID controller will have cache, sometimes lots of it. That cache, if it is battery-backed or NVRAM-backed, will perform at least as well as an SSD itself (since that's pretty much what it is).
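One quick way to see whether a write cache is in the picture is the kernel's sysfs view (a sketch; the controller's own battery and cache status needs the vendor tool, e.g. HPE's ssacli, which is an assumption about this setup):

```shell
# Print the kernel's view of each block device's write cache.
# "write back" means writes are being absorbed by a cache, which can
# mask the difference between SAS and SSD in short benchmarks.
for f in /sys/block/*/queue/write_cache; do
    [ -e "$f" ] || continue
    dev=$(basename "$(dirname "$(dirname "$f")")")
    printf '%s: %s\n' "$dev" "$(cat "$f")"
done
```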

So what methods are you using to measure the speed and what results are you getting?

On a decent machine with PCIe 3.0 and an NVMe drive installed you should be seeing read speeds approaching 3.5GB/s (depending on the exact model of SSD in use) and write speeds, which are more likely to vary from model to model, of maybe 2.5GB/s.
CentOS 5 died in March 2017 - migrate NOW!
Full time Geek, part time moderator. Use the FAQ Luke

linsoft
Posts: 3
Joined: 2019/04/29 09:56:46

Re: SSD and speed

Post by linsoft » 2019/04/29 12:13:25

Thanks for the reply.

I will get some figures up tomorrow as I am not at my desk today. Really grateful for any help. We strongly suspected the RAID card until we started out with the NVMe drives, which go in a PCIe slot, not in the RAID.

It's a really weird one. Saying this, the system absolutely flies; it's lightning fast. It's just one of those things you look at to improve.

linsoft
Posts: 3
Joined: 2019/04/29 09:56:46

Re: SSD and speed

Post by linsoft » 2019/05/03 12:13:00

Just a quick update.

I believe we are not looking at slow SSD drives. They are consumer ones, not enterprise, but they are working alongside enterprise SAS drives in RAID 1+0.

I think what we have here is that the drives are actually running as fast as they possibly can: we are not looking at slow SSDs but at really, really fast SAS.

We have considered using enterprise SSDs, however NVMe drives should work faster and simply don't. We will probably buy enterprise SSDs just to see, but my gut feeling is that upgrading the CPUs in the servers (they are bottom-range Silver Xeon CPUs) to something faster would be better.

A bit of background: we have over 20 servers at different sites, and one lost its RAID battery some time back. With RAID 1+0 on these enterprise SAS drives, the server that lost its battery appeared to run as fast as its counterparts that still had one.

We monitor speed on our servers with real-world tests, i.e. how fast our servers are processing the work they should be doing in a simulated environment. We need to get some proper figures up here so others can see our issue and the potential answer.

User avatar
TrevorH
Forum Moderator
Posts: 25847
Joined: 2009/09/24 10:40:56
Location: Brighton, UK

Re: SSD and speed

Post by TrevorH » 2019/05/03 16:17:00

You really need to quantify your numbers as currently we cannot tell if things are working normally and it's your expectations that are wrong (or vice versa).
CentOS 5 died in March 2017 - migrate NOW!
Full time Geek, part time moderator. Use the FAQ Luke

NedSlider
Forum Moderator
Posts: 2889
Joined: 2005/10/28 13:11:50
Location: UK

Re: SSD and speed

Post by NedSlider » 2019/05/11 09:12:09

I agree with Trevor, it's impossible to add meaningful input without some numbers.

fio will give you some pretty reliable raw numbers as to what your device(s) are capable of and will help identify any underlying issues.

Sequential read speeds at a queue depth of 32 will saturate the device/bus and give a good reliable reading of max throughput:

Code:

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=read --size=500m --io_size=10g --blocksize=1024k --ioengine=libaio --fsync=10000 --iodepth=32 --direct=1 --numjobs=1 --runtime=60 --group_reporting
A SATA bus is limited to 600MB/s, so max speeds of 500-550MB/s are to be expected. NVMe is limited to just under 1GB/s per PCIe lane in use (less overheads), so realistically ~1700MB/s over two PCIe lanes and ~3500MB/s over four are achievable with the latest NVMe drives.
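Those NVMe ceilings follow from the PCIe 3.0 link rate (8 GT/s per lane with 128b/130b encoding); a quick sketch of the arithmetic in shell:

```shell
# PCIe 3.0 payload bandwidth per lane, in MB/s (integer maths, so
# rounded down slightly; protocol overhead reduces it further).
LANE=$((8000 * 128 / 130 / 8))
echo "x1: ${LANE} MB/s  x2: $((LANE * 2)) MB/s  x4: $((LANE * 4)) MB/s"
```

That gives roughly 984 MB/s per lane and ~3.9GB/s for x4, which protocol overhead brings down to the ~3.5GB/s quoted above.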

A random 4K read test will give a good indication of a device's speed for random access of small files (e.g. database read speeds):

Code:

fio --name TEST --eta-newline=5s --filename=fio-tempfile.dat --rw=randread --size=500m --io_size=10g --blocksize=4k --ioengine=libaio --fsync=1 --iodepth=1 --direct=1 --numjobs=1 --runtime=60 --group_reporting
Older SATA-based drives will typically see performance of ~20MB/s, while the latest NVMe drives will hit 50-60MB/s.
