Corsair Community

Why does my 5 SSD raid-0 array suck?


barza


I have 5 X64 SSDs in a RAID-0 array, using the ICH10R controller on an X58 motherboard. I have followed the advice in the threads here (128k stripe size on the array, letting Win 7 set up the partitions on install, then a quick format with 64k allocation).

 

But after doing a secure erase on each of the disks and restoring an Acronis backup of my system, I am getting pretty poor benchmark results and the system does not feel as fast as I hoped it would. My ATTO benchmark is below (running Win 7 64-bit). I have also done the "Tony Trim" procedure from another forum (I hope I am allowed to mention that), with no change.

 

I'd appreciate any advice on how to get this array working properly (it should be maxing out what the ICH10R-to-PCI link can do, around 750MB/s, I think).

 

My array was initially 3 drives; then I used Intel Storage Manager in Windows 7 to add another two drives to the array. I wonder if that has anything to do with it? I really do not want to have to do a fresh Windows install if I can avoid it.

[ATTO benchmark screenshot]

Link to comment
Share on other sites

Looks pretty good to me. I suspect you're pushing all sorts of boundaries here, shifting the bottlenecks around like mad - memory, CPU, ICH10R etc.

 

As far as I know, the stripe size determines the potential overlap between disk drives - so if you write a file of less than 128K in your case, it can only possibly hit one of your 5 disks, whereas with a stripe size of (say) 32k, a 127k file would hit 4 of your disks. Your ATTO results tend to support this theory, because your speed only really starts to crank up above the 128k block-write size.
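
To make that concrete, here's a rough back-of-the-envelope sketch (my own illustration, not anything ATTO computes) of how many RAID-0 members a single transfer can touch for a given stripe size:

```python
def disks_touched(io_bytes, stripe_kib, n_disks):
    """In RAID-0, a sequential transfer spans one stripe per member disk,
    so the number of disks involved is the number of stripes it covers,
    capped at the number of members."""
    stripe_bytes = stripe_kib * 1024
    stripes = -(-io_bytes // stripe_bytes)   # ceiling division
    return min(stripes, n_disks)

print(disks_touched(127 * 1024, 128, 5))  # 1 disk  with a 128K stripe
print(disks_touched(127 * 1024, 32, 5))   # 4 disks with a 32K stripe
```

That lines up with the ATTO curve only really taking off above the 128K transfer size.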

 

I don't think the allocation will show up in an ATTO benchmark, because, again as far as I know, ATTO starts by allocating a big file, then writing into and reading from it in successively larger chunks. But the allocation gets done before the work starts, I think. Still, increasing the allocation size is really only useful for the avoidance of fragmentation, which is less of an issue with SSDs.
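
Something like the loop below (a crude sketch of that idea, not ATTO's actual code; it goes through the OS page cache rather than using Direct I/O) shows the pattern of one pre-allocated file being rewritten at successively larger block sizes:

```python
import os
import time

def atto_style_sweep(path, file_size=256 * 1024**2,
                     block_sizes=(4096, 65536, 1024**2)):
    """Pre-allocate one test file, then time writes into it at
    successively larger block sizes and report MB/s for each."""
    with open(path, "wb") as f:
        f.truncate(file_size)                 # allocation happens once, up front
    buf = os.urandom(max(block_sizes))
    results = {}
    for bs in block_sizes:
        start = time.perf_counter()
        with open(path, "r+b") as f:
            for _ in range(file_size // bs):
                f.write(buf[:bs])
            f.flush()
            os.fsync(f.fileno())              # force the data out of the OS cache
        results[bs] = file_size / (time.perf_counter() - start) / 1024**2
    return results

print(atto_style_sweep("atto_sketch.tmp"))
```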

Link to comment
Share on other sites

You're not going to get anything much better than that. The Intel ICH is physically limited to around 625-650MB/s because of the number of PCIe lanes it is connected through.

 

In general terms, 3 SSDs will virtually max out the read capacity of the ICH, and 4 should just about max out its write capability. Adding more drives than this will result in zero gain apart from capacity.
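
The arithmetic behind that, using assumed per-drive figures for X64-class SSDs of the time (my guesses, not measurements):

```python
ICH_CEILING = 650        # MB/s, the practical ICH10R limit quoted above
READ_PER_DRIVE = 220     # assumed sequential read per drive
WRITE_PER_DRIVE = 170    # assumed sequential write per drive

for n in range(1, 6):
    read = min(n * READ_PER_DRIVE, ICH_CEILING)
    write = min(n * WRITE_PER_DRIVE, ICH_CEILING)
    print(f"{n} drive(s): ~{read} MB/s read, ~{write} MB/s write")
# Reads hit the ceiling at 3 drives (3 x 220 = 660), writes at 4 (4 x 170 = 680),
# so the 4th and 5th drives only add capacity.
```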

 

The reason the transfers are so low before 32K is that you haven't enabled the Intel Write-Back Cache in the Intel Matrix Storage Console. Make sure you have the latest Intel Matrix Storage drivers (version 1023) and restart before enabling the write-back cache in the Matrix console. Your stripe size is fine.

 

ATTO has nothing whatever to do with allocation/cluster size. The figures ATTO shows refer to the transfer sizes being shifted around, i.e. the 16K test moves a given amount of data in 16KB reads/writes.

 

You should not have to do a full install if you have Acronis. One thing to take into account: was the image you are using originally created from the RAID array or from a mechanical drive? If it was from a mechanical drive, your SSDs' partition alignment will be off. If it was an image taken from SSDs, then it's fine.
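
If you want to check that yourself, the partition starting offset tells the story. On Windows you can read it with the stock WMI command line (Win 7 aligns new partitions to 1MB; the old XP-style layout starts at 31.5KB, which is misaligned for SSDs):

```
wmic partition get Name, StartingOffset
```

An offset that divides evenly by 4096 (Win 7's default is 1048576) is aligned; 32256 means the partition came from an XP-style layout.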

 

Before you use HDDErase, first boot from your Windows installation CD and use diskpart to select each drive individually and run the "clean" command on it. Then use HDDErase to secure erase each drive in turn, using option 1, Standard SE. I don't know exactly why the diskpart step is needed, but when I have used HDDErase without the diskpart clean, performance did not recover.
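
For reference, the diskpart part of that looks like this from the installation CD's command prompt (the disk numbers below are examples only - check the sizes shown by "list disk" carefully, because "clean" wipes the partition table of whichever disk is selected):

```
diskpart
list disk
select disk 1
clean
select disk 2
clean
exit
```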

 

For your array of 5 disks, the only way to get the maximum possible performance is to use a hardware RAID card. One of these will set you back about £100-150 and should ideally sit in at least a 4x PCIe 2.0 slot. Make sure the card has a decent amount of cache (128-256MB) and use a balanced stripe size, between 128K and 1MB depending on usage.

Link to comment
Share on other sites

Not sure about this, but I thought that having the "Direct I/O" checkbox ticked in ATTO made it bypass ALL cache mechanisms - Windows, driver and drive - thus testing raw disk speed. If you untick it, you get some insane results that make you feel really good if you believe the numbers!
Link to comment
Share on other sites

ATTO is not capable of bypassing a software-created cache, which is what the Intel Matrix write-back cache is. I don't even think it can bypass hardware cache.

 

Since Windows XP, programs have been virtually "banned" from accessing hardware directly.

 

I attach two pictures, one with Write-Back Cache off and one with it on. Note how the small-transfer ATTO tests leap up more than tenfold with the cache enabled: the smallest transfer has shot up from single digits to over 50MB/s.

 

http://img269.imageshack.us/img269/9475/capturerpg.th.jpg

 

The OP almost certainly does not have the write-back cache enabled, and possibly doesn't even have Intel Matrix Storage installed.

 

Unfortunately this will not change the fact that you will only see increased read speed up to 3 SSDs and increased write speed up to 4 SSDs; the controller is still physically limited to ~650MB/s.

Link to comment
Share on other sites

Well, well, well. Thank you for that, Psycho101. It must just bypass the Windows cache. This amount of cache would certainly gobble up most of the slowdown caused by lack of TRIM in RAID. Now if only my computer didn't have a crappy Nvidia controller! I suppose I could use the other one, but that's JMicron, ffs. I feel more expense coming on.
Link to comment
Share on other sites

Psycho - since you seem to have a good understanding of the WB cache - can you explain when it is good to have it turned on and when it would be a bad idea? OS / storage? SSD / spinning platter?

 

Thanks in advance if you have the time.

Link to comment
Share on other sites

I would have the Windows cache (accessed via Device Manager) on all the time for all drives, unless you wish to use quick removal. With certain cases and caddies you can interchange drives without powering off. Disable the cache and, as long as it's not a system drive, you can remove it any time you want.

 

As for the Intel Matrix Storage write-back cache, I would also enable it regardless of SSD or HDD. This cache is only available if you have a RAID array, and then it is only applied to the array(s), not to any other disks that are JBOD/non-member RAID.

 

The thing to remember with drive caching is that the purpose of the cache is to store reads/writes in RAM and write them to disk at a later time (maybe milliseconds later). This allows Windows and programs to basically "lie" to the drive and report an operation as completed even though it has not been. If the drive is pulled unexpectedly, data loss could result, as the data in the cache never makes it to disk. The internal caches of HDDs and SSDs, including the X and P series, are safe though; SSDs usually have capacitors with enough oomph to flush the cache to disk when power is interrupted. If using caches, always use the "Safe removal" feature.
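
The same principle shows up at the OS level with a trivial test (a sketch of my own, not the Intel Matrix cache itself): a plain write() returns as soon as the data is sitting in a cache, and only an explicit flush/fsync guarantees it has actually reached the drive.

```python
import os
import time

def timed_write(path, data, force_to_disk):
    """A buffered write 'completes' almost instantly; forcing it out with
    fsync shows the real cost - and the window in which the data could be
    lost if power is cut."""
    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
        if force_to_disk:
            f.flush()
            os.fsync(f.fileno())
    return time.perf_counter() - start

data = os.urandom(64 * 1024**2)
print("cached  :", timed_write("cache_test_a.bin", data, False))
print("flushed :", timed_write("cache_test_b.bin", data, True))
```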

 

@ Cadencia: Yep, the NV controller isn't brilliant but is much better than the JMicron one. I have found the best performance using AHCI and the Windows Vista/7 default drivers. Rather than dump the board, consider a cheap HW RAID card. You can get one with a small amount of cache on it for about the same price as a new motherboard, and the performance is well worth the price premium.

Link to comment
Share on other sites

Psycho101 and Cadencia - thanks for taking the time to reply.

 

Psycho101 - The Acronis image has come from a Win 7 install that has always been on an SSD RAID. This was initially three X64 drives, then I added another two, and your explanation ties in with why performance hasn't gone up. Could you recommend a couple of examples of hardware RAID cards in the price range you suggested? Most of the ones I have taken a look at are a bit more expensive than £100-150, so I must be missing something. Finally, for a system which is basically a gaming rig, what stripe size and allocation unit on format would you recommend?

Link to comment
Share on other sites

As long as the image was created from an SSD install and it is Windows 7, your alignment will be perfect.

 

It seems I underestimated the price of the RAID card you will need. I was looking at 4-port cards myself; you will need the next step up, which is an 8-port controller. These can run anywhere between £200 and £600 depending on brand and cache size. My personal recommendation would be anything that suits your needs from Adaptec, Highpoint (RocketRAID), Areca, LSI, or well-known OEMs like IBM and Hewlett Packard. The IBM solutions are excellent but expensive, plus only a few support SATA; most are SAS only (even though the connectors are very similar). It is crucial to get a true hardware card rather than a software-emulation add-in card, as the latter will be similar in performance to the ICH10R - not that simulated RAID is bad, of course.

 

That looks like the EXACT same degradation pattern I get on my array too. Between 8K and 64K I get a performance drop of ~30-40%. This is why I use Diskeeper. The HyperFast plugin works wonders for eliminating this performance drop. Most people think that an SSD doesn't get fragmented, but in a way it does. The HyperFast utility organises data so that more files are contiguous, meaning that the drive has to do fewer multiple "read > write > erase > write" operations.

 

Give the free trial of Diskeeper a shot, making sure to disable all the auto-defrag and "Fraguard" stuff (on-the-fly, always-on defrag), and check the program has detected the array as an SSD (it will show as a single drive). Select the SSD and hit optimize, and in ~4-5 minutes it will have re-jigged the data and the performance should be better.

 

Also the "Tony TRIM technique may have taken its toll on the drive along with the benchmark runs, For me, Tony TRIM is the quickest way to put my drives into a fully "degraded" state (~10%-15% slower than fresh). If Hyperfast doesn't sort out the read speeds, I recommend taking a fresh image with Acronis of the drives as they are now, then taking the drives out of RAID. Boot from the Win 7 DVD and go to "Repair" and then "Command Prompt". From here type "diskpart" > "list disk" > Then select each SSD in turn and type "clean" then hit enter before moving to the next drive. When they are all done, restart and boot directly from USB stick to run HDDErase (I use Version 3.3 because I used to have an Intel drive too). Perform a standard Secure Erase (option 1) on each drive, then re-configure your array with the 128 stripe size etc before restoring the image.

 

Note that for HDDErase to work, the SATA ports must be set to IDE mode with Native mode disabled, or the drives won't be found by the program and it will stall after the warning messages.

 

I don't know exactly why using diskpart first seems to give me better results, but there is no way I can deny that it helps. Without doing that first, I intermittently get zero performance improvement and usually only a slight boost.

 

 

For a gaming rig I would probably go for a stripe size between the common 128K and 512K. Having the stripe size too high would cause performance issues because the data would not be split across enough drives. Luckily, game textures etc. are usually pretty large, and things like World of Warcraft, which I play regularly, will have no issues with a stripe of 1MB or so. Some careful testing would be needed; luckily you can restore the image after each stripe-size change for a quick and easy benchmark run at each setting.
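
One quick way to sanity-check a candidate stripe size is to look at the file sizes in your actual game folders (a throwaway sketch; the path is just an example): any file bigger than one stripe will be split across at least two members, so a high percentage here suggests a large stripe won't hurt sequential reads much.

```python
import os

def spanning_fraction(root, stripe_bytes):
    """Fraction of files under 'root' that are larger than one stripe,
    i.e. that would be spread over two or more RAID-0 members."""
    sizes = [os.path.getsize(os.path.join(d, name))
             for d, _, names in os.walk(root) for name in names]
    return sum(s > stripe_bytes for s in sizes) / len(sizes) if sizes else 0.0

for kib in (128, 512, 1024):
    frac = spanning_fraction(r"C:\Games\World of Warcraft", kib * 1024)
    print(f"{kib}K stripe: {frac:.0%} of files span 2+ drives")
```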

 

If money is an issue you may actually want to eBay one of your drives. Unless you really need the capacity you shouldn't miss the 5th drive too much, and you will then be able to use a cheaper 4-channel card.

Link to comment
Share on other sites

Psycho101 - thanks for your reply, much appreciated.

 

I have got hold of a Highpoint 4320 card for £200 which is on its way to me now.

 

I have been using PerfectDisk 10 with instructions as per the "Tony Trim" procedure. I'll give Diskeeper a try like you suggest and report back (probably won't get to this until the weekend).

Link to comment
Share on other sites

Archived

This topic is now archived and is closed to further replies.
