Corsair Community

Why not FW-RAID?



Quotes from Wikipedia:

 

Hardware-based RAID:

 

Hardware RAID controllers use proprietary data layouts, so it is not usually possible to span controllers from different manufacturers. They do not require processor resources, the BIOS can boot from them, and tighter integration with the device driver may offer better error handling.

On a desktop system, a hardware RAID controller may be an expansion card connected to a bus (e.g. PCI or PCIe) or a component integrated into the motherboard; there are controllers supporting most types of drive technology, such as IDE/ATA, SATA, SCSI, SSA, Fibre Channel, and sometimes even a combination. The controller and drives may be in a stand-alone enclosure rather than inside a computer, and the enclosure may be directly attached to a computer or connected via a SAN.

 

Firmware/driver-based RAID:

 

A RAID implemented at the level of an operating system is not always compatible with the system's boot process, and it is generally impractical for desktop versions of Windows (as described above). However, hardware RAID controllers are expensive and proprietary. To fill this gap, cheap "RAID controllers" were introduced that do not contain a dedicated RAID controller chip, but simply a standard drive controller chip with special firmware and drivers; during early stage bootup, the RAID is implemented by the firmware, and once the operating system has been more completely loaded, then the drivers take over control. Consequently, such controllers may not work when driver support is not available for the host operating system.

 

99% of consumer motherboards contain firmware/driver-based RAID, so the first quote is a bit confusing; that's why I highlighted "may be"... it is very unlikely except on very expensive boards. :!:

 

A real hardware RAID controller has its own processing unit and cache, not just a single chip containing an "instruction set" for the system's CPU and memory.

 

You can easily tell what a real hardware controller is by simply looking at one:

 

example:

 

LSI 3ware Escalade 9650SE-4LPML bulk, low profile

4x SATA II

RAID 0/1/5/10/JBOD

256MB PC2-533 ECC DDR2

Cheapest price found: €234.80

 


 

So no, even the most expensive high-end consumer motherboard does NOT contain a hardware RAID controller.

 

Using FW-RAID eats into free CPU cycles and memory-subsystem bandwidth, much more so with multiple SSDs than it did back in the HDD days.

 

Now let's just guess: from a 5400 rpm HDD, Crysis 2 takes 300 seconds to load a level; one SSD could do it in 30 seconds; a RAID0 of 2 SSDs in 20 seconds, and of 4 SSDs in 10 seconds. Is that worth it? Probably yes. :D: But now assume you have started playing and the game needs to load something in between the action from your RAID; that could drastically drop the framerate and at some point produce heavy micro-stutter :eek:, say when entering a new map area or a building, where it would not happen with a single non-RAID drive on a less powerful CPU. :cool: In the worst case it can produce permanent micro-stutter when the game streams data continuously, like Rage. :mad:
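To put rough numbers on the best case, here is a small sketch in Python. The 30-second single-SSD figure is the hypothetical one from the paragraph above, and perfect striping is an assumption real RAID0 never quite reaches:

```python
# Idealized RAID0 level-load scaling, using the hypothetical figure from
# this post (30 s to load a level from one SSD). Real RAID0 scales worse
# than this, which is why the post assumes 20 s / 10 s rather than 15 / 7.5.

def load_time(single_drive_seconds: float, drives: int) -> float:
    """Best-case load time when reads stripe perfectly across all drives."""
    return single_drive_seconds / drives

SSD_SINGLE = 30.0  # seconds for one SSD (hypothetical number from the post)

for n in (1, 2, 4):
    print(f"{n} SSD(s) in RAID0: {load_time(SSD_SINGLE, n):.1f} s")
```

Even in this ideal model, going from 2 to 4 drives only saves a few seconds per load, which is the trade-off against the stutter risk described above.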

 

Not gaming, but instead just want to write lots of processed data at high speed? Then the same FACT still applies to you: writing the data through a FW-RAID reduces the speed at which your processor can process that data. ;)

 

For example, video encoding. Let's assume your high-end CPU/encoder creates a big low-quality file at a rate of 100 MB/s using 100% CPU load. One HDD is likely not enough, so the CPU would idle; one SSD is enough, but any FW-RAID will cause it to write somewhat slower than 100 MB/s: maybe 90 MB/s, maybe 80 MB/s, maybe 95 MB/s... or maybe just 20 MB/s. Nobody can predict how a particular SSD with a particular RAID firmware behaves under high CPU load.

 

Well, somebody might stand up and tell us that he never experienced any CPU stutter when RAIDing with the motherboard... or that RAID0 SSD setups "feel snappier". :roll: I don't really care. FW-RAIDs are window dressing; you're really better off without them. ;) I'm sorry to destroy some dreams by saying that you first need an appropriate real hardware controller, not just two drives, to make use of a RAID without harming the system experience elsewhere. Probably those people will read this and still ignore me. No problem, you can have your opinion.



Another reason not to use RAID0 with SATA is the link speed, which is limited to 600 megaBYTES per second (resulting from 8b/10b encoding on a 6 gigaBIT per second link; SATA III).
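Where that 600 MB/s figure comes from can be checked with a couple of lines of arithmetic; the constants are the published SATA III line rate and the 8b/10b coding overhead:

```python
# SATA III signals at 6 Gbit/s on the wire, but 8b/10b line coding
# transmits 10 bits for every 8 payload bits, so only 80% is data.

LINE_RATE_MBIT = 6000                      # SATA III raw line rate, megabits/s

payload_mbit = LINE_RATE_MBIT * 8 // 10    # strip 8b/10b overhead: 4800 Mbit/s
payload_mbyte = payload_mbit // 8          # bits to bytes: 600 MB/s per link

print(f"Usable SATA III payload per link: {payload_mbyte} MB/s")
```

Note this is a per-link figure; it says nothing by itself about what a multi-port controller can move in aggregate, which is exactly the point argued over later in this thread.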

 

Current SSDs will soon hit that bus limit; they are not far from it today.

 

If you need more speed you HAVE TO use a PCI-Express-mounted SSD, so even a hardware RAID0 of 4x Samsung 840 Pro would be nothing more than risky eyewash. :!:


Other than the fact this seems to be an obvious troll, I will nibble the bait a little.

 

You are right that it is always better to go with a hardware RAID controller over the standard software RAID controller you get on most mainstream motherboards. Some of the high-end server motherboards do in fact have LSI SAS controllers on them, and that's part of why they cost thousands of dollars.

 

I'm not sure what point you're trying to make with your post, other than to inform people that their software RAID controllers are inferior to the hardware versions, but hardly anyone is going to go out and drop $600 on an LSI RAID controller and then have to buy the SAS-to-SATA cables. It's just not in most people's budgets.

 

Now, if it's speed you're looking for, there are much better solutions than spending tons of money on RAID cards. You can in fact get very high read and write performance with a standard software RAID controller, a couple of SSDs, and a RAM cache. It is far less expensive and much faster.

Example here:

 

I have used LSI and HighPoint hardware RAID controllers, several of the Intel ICH line of software RAID controllers, as well as Marvell's, and I can honestly say that in my opinion there is nothing an everyday user/gamer needs a hardware controller for that you can't accomplish with the RAM-cache-plus-SSD combo. Not to mention you'll have an extra $600 to buy games with, or an extra graphics card, or more RAM, or more SSDs, or more monitors, all of which will improve your gaming/everyday experience far more than a hardware RAID card.


Who here with an SSD RAID0 bought a SAS controller???

 

And I was ONLY referring to FIRMWARE RAID, so that thousands-of-bucks LSI-chipped motherboard doesn't count.

 

All I said was: in my opinion, using that (FW-RAID on SATA) is pointless, because A) it consumes CPU/memory bandwidth, B) the controller can only deliver up to 600 MB/s, and C) it DECREASES write speed.

 

If you have any reason against that, please tell me.

 

These are facts; it's simply more or less pointless to run a RAID0 with SSDs when not spending lots of money.

 

A Force 3 is capable of 550 MB/s... so what is the point of RAID0, when you can get even faster drives than the Force 3 today???

So RAID0 on SATA is more risk than benefit.

 

People who use it anyway will most likely not feel a difference (except in benchmarks), but quite likely run into trouble some day.

I started this thread to help people avoid that trouble.

 

Are you sure it makes no difference when the CPU load is 99%? ;)

 

I already said that you would need PCIe cards ;)


Who here with an SSD RAID0 bought a SAS controller???

Any good hardware RAID controller will have SAS connections on the controller, and you will need the cables to hook up the SATA drives.

And I was ONLY referring to FIRMWARE RAID, so that thousands-of-bucks LSI-chipped motherboard doesn't count.

You referred to hardware and firmware RAID controllers. What you call a firmware controller most people refer to as a software controller, since firmware is a piece of software code.

All I said was: in my opinion, using that (FW-RAID on SATA) is pointless, because A) it consumes CPU/memory bandwidth, B) the controller can only deliver up to 600 MB/s, and C) it DECREASES write speed.

 

If you have any reason against that, please tell me.

 

These are facts; it's simply more or less pointless to run a RAID0 with SSDs when not spending lots of money.

So is it your opinion or fact? Why does my software controller hit over 1 GB/s with only 2 Performance Pro SSDs, then, if 600 MB/s is the max? Maybe if you run a Pentium 3 you have issues with the CPU; I would imagine if you're able to drop enough cash on 2 SSDs to RAID them, you could afford a decent i7 CPU at 200 bucks.

A Force 3 is capable of 550 MB/s... so what is the point of RAID0, when you can get even faster drives than the Force 3 today???

So RAID0 on SATA is more risk than benefit.

 

People who use it anyway will most likely not feel a difference (except in benchmarks), but quite likely run into trouble some day.

I started this thread to help people avoid that trouble.

 

Are you sure it makes no difference when the CPU load is 99%? ;)

 

I already said that you would need PCIe cards ;)

 

The advantage of RAID0 on SSDs isn't just the speed, it's the storage and the speed. It's much cheaper to buy 4 smaller SSDs and RAID0 them than to buy one big one. Maybe you have infinite money so it's no concern to you, but most normal gamers are penny pinchers. I have been running RAID0 on all my PCs since the 75 GB WD Raptor drives came out and have never had reliability issues, except with the Corsair Nova SSDs, and I think that was the controller's fault, since there were no issues after a firmware update or with platter drives. If you're having issues with RAID0 the next day, the problem is likely in your product, or it is being set up wrong.

 

In the end, there's nothing wrong with any way someone decides to accomplish their goal. They can run 1 disk, 2, 3, 4, 9 if they want, and in any combination of RAID configurations they choose. That's the great thing about it: they can pick what works best for them. If you want to help people decide, make an objective post of all the configurations and the benefits and negatives of each. Do some testing and document it to back up what you post; otherwise it's not fact, just an opinion.

 

I have been running 2 Corsair Performance Pro SSDs in RAID0 for over a year and have never experienced an issue. I typically play a PC game, encode my XSplit stream at the highest quality, and stream it to Twitch, all live in real time, and have never had a stutter issue, ever. Guess I'm just lucky according to your facts, though.


Any good hardware RAID controller will have SAS connections on the controller, and you will need the cables to hook up the SATA drives.

 

SAS is NOT SATA! Even SAS still operates at 6 Gbps = 600 MB/s, with future support for 12 Gbps = 1.2 GB/s. BTW: this is about FW-RAID, don't you get it? Private end consumers with normal motherboards DON'T have ANY hardware controllers at all.

 

You referred to hardware and firmware RAID controllers. What you call a firmware controller most people refer to as a software controller, since firmware is a piece of software code.

There is a difference between software RAID and firmware RAID. Firmware being a kind of software is NOT the criterion. Learn it! Software RAID is set up purely in the OS; you don't need ANY chip. FW-RAID runs ONLY on a FW chip. I only refer to FW-RAID; this topic is about WHY NOT FW-RAID, and about always choosing a hardware controller when experimenting with RAID. Again, private end consumers DON'T have hardware controllers... this thread isn't meant for server administrators operating ultra-high-cost systems who know what they are doing.

 

So is it your opinion or fact? Why does my software controller hit over 1 GB/s with only 2 Performance Pro SSDs, then, if 600 MB/s is the max? Maybe if you run a Pentium 3 you have issues with the CPU; I would imagine if you're able to drop enough cash on 2 SSDs to RAID them, you could afford a decent i7 CPU at 200 bucks.

Because in your benchmark program (ATTO), the CPU, via the instruction set of your FW-RAID, reads compressed data at 600 MB/s from the SATA III controller and writes it into memory at 1 GB/s. NO, it is not an opinion, it is mathematical fact! If you believe that is achievable in the real world, then you are a fool. If you run a RAID0 with your SSDs on a FW-RAID chip, then by the facts you are, in my opinion, a fool. ;)

 

The advantage of RAID0 on SSDs isn't just the speed, it's the storage and the speed. It's much cheaper to buy 4 smaller SSDs and RAID0 them than to buy one big one. Maybe you have infinite money so it's no concern to you, but most normal gamers are penny pinchers. I have been running RAID0 on all my PCs since the 75 GB WD Raptor drives came out and have never had reliability issues, except with the Corsair Nova SSDs, and I think that was the controller's fault, since there were no issues after a firmware update or with platter drives. If you're having issues with RAID0 the next day, the problem is likely in your product, or it is being set up wrong.
That's not true: 2 drives of half the size are MORE expensive than 1 big one. You want examples? I refer only to SSDs; you are welcome to run your HDDs in RAID0 with FW-RAID, but an SSD does not benefit.

 

e.g.

4x Samsung 840 Pro 128 GB, around €110 each = €440 for 512 GB

each reads 530 MB/s, each writes 390 MB/s

 

2x Samsung 840 Pro 256 GB, €200 each = €400 for 512 GB

each reads 540 MB/s, writes 520 MB/s

 

1x Samsung 840 Pro 512 GB = €390

reads 540 MB/s, writes 520 MB/s

 

The 256/512 GB drive has considerably better specs than the 128 GB one. And for an end consumer on FW-RAID there is no way to exceed 600 MB/s on the SATA III bus. If you don't know the difference between gigabits and gigabytes and resist the math, that's not my "opinion", it is fact.

 

You might think the €10 difference is worth it... probably yes if you have a hardware RAID controller, but NO if you have a FW chip.
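A quick price-per-gigabyte check of the three setups above, using the euro street prices quoted in this post:

```python
# Cost comparison of the three Samsung 840 Pro configurations quoted
# above (prices in euro as given in the post, total capacity 512 GB each).

setups = [
    ("4x 128 GB in RAID0", 4 * 110, 512),  # name, total price EUR, capacity GB
    ("2x 256 GB in RAID0", 2 * 200, 512),
    ("1x 512 GB single",   390,     512),
]

for name, price_eur, capacity_gb in setups:
    print(f"{name}: {price_eur} EUR total, {price_eur / capacity_gb:.3f} EUR/GB")
```

At these prices, the single big drive is the cheapest per gigabyte, which is the point being made: the smaller striped drives cost more AND each writes slower.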

 

I have been running 2 Corsair Performance Pro SSDs in RAID0 for over a year and have never experienced an issue. I typically play a PC game, encode my XSplit stream at the highest quality, and stream it to Twitch, all live in real time, and have never had a stutter issue, ever. Guess I'm just lucky according to your facts, though.

 

On a "real" hardware controller, or on your "cheap" motherboard's FW chip? If the latter: mwhahaha, that's foolish. You never had to reinstall from scratch?

I already said you can have your opinion not based on fact and claim "all is nice" as much as you want... while you secretly bug your GPU vendor to stop the micro-stuttering. xD You have no idea what you are talking about!

 

 

Here is a benchmark of SAS/SATA RAID0:

http://www.tomshardware.de/SAS-RAID-Controller-PCIe,testberichte-239792-11.html

And these are PCIe already!

 

See: with uncompressible data transfers, 600 MB/s is the limit; whatever peaks higher is compressed. My Force 3 nearly beats that alone! Don't you get that????

 

Sorry, you had no argument!


So this will be my last reply to your obvious troll, but the fact that you refer to Tom's Hardware tells everyone here all they need to know about your facts. Also, your hardware controller has firmware on it; software is software regardless of what you want it to be, and if you get a crappy one like a HighPoint RocketRAID, they don't release new firmware for their cards; they make you buy a new card to get support for new SSD drives. I speak from real-life experience, not from what I read on a biased website that pushes whatever the donor of the product told them to. No, I haven't reinstalled since I got the Performance Pro SSDs on my Intel X79 southbridge software RAID controller.

 

I'm not arguing that hardware controllers aren't better; I said that's the truth and you are right. I simply stated there's no point in spending $600 on a RAID controller when that cash can be used in another, more useful way to make your computing experience better. I guess I will keep my foolish money and keep rocking great benches on my busted PC. Have fun wasting money.


Sorry, but a hardware controller having internal firmware doesn't make it a FW controller, because the OS doesn't need to support that hardware controller's firmware.

 

Again these are the differences:

 

Hardware-RAID-Controller:

Reports raid-drive to the BIOS.

BIOS can boot that DIRECTLY.

CPU does NOT read/write data.

Single drives are NOT available through BIOS.

 

FW-RAID-Controller:

Reports nothing to BIOS.

OS needs drivers, then it can boot that.

CPU does read/write data.

Single drives are available through BIOS. (e.g. to *try* a firmware update)

 

Software-RAID-Controller:

OS boots MBR from a single-drive.

It does not need drivers.

CPU does read/write data.

Single drives are available through BIOS.

 

The point is: when not spending $600 on a hardware RAID controller, don't spend it on multiple SSDs either. Get the biggest drive available, or buy a PCIe card! That's my recommendation.

 

Maybe your high-end Intel CPU can absorb the SATA read/write bursts while executing your application without you even noticing what's going on.

 

That doesn't mean it doesn't happen.

 

Good for you if you haven't had to reinstall from scratch so far. Anyhow, when a driver update is faulty or one of your SSDs fails, your data is gone. When you start from a boot CD, your RAID0 isn't recognised. If you feel fine that way, good for you. I would not.


You already said that and I proved you wrong; now you try it again here, still without proof of those claims.

 

http://forum.corsair.com/forums/showthread.php?t=118795&page=2&highlight=600MB%2Fs

 

Again, that is wrong: the bus max speed is 600 MB/s overall... everything beyond that is software/firmware-based reading/writing by the CPU. We can repeat this over and over if you like.

 

The CPU uses caches and hit rates; these probably deceive your internally re-run benchmark.

 

One port is capable of 600 MB/s, that is right... but the whole bus isn't able to do more. That means any single drive on a port can reach 600 MB/s, but the controller cannot deliver, say, 2.4 GB/s if you want to copy from 4 drives to 4 others.

 

That is not inside the spec, sorry!

 

To be SATA3 certified, a controller must be able to transfer 600 MB/s minimum.

 

ATTO showing more is compressed data that the CPU writes to memory at higher rates, leaving the SATA3-certified controller at a max of 600 MB/s.

That this works and shows good rates on Intel's Sandy/Ivy is "Intel" doing more on their own; the SATA3 spec does not require them to do that.

 

"Intel" is one of the chip corporations on the market able to wrap a whole specification inside its own, considerably more upmarket, proprietary solution. You might be right if you said the Intel Z68/Z77 is the best FW-RAID you can get, but no, you claim the SATA3 spec demands that any controller with 8 ports be able to deliver 4.8 GB/s. I claim even the best ones aren't able to deliver 1.2 GB/s real-life sequential write, especially not with a regular SandForce mix.

 

What Intel is capable of is not the general case on the market!

 

Bus is a technical word with a clear meaning.

Limit is a technical word with a clear meaning.

 

Did you buy the SATA spec documents?

 

If you have proof, then present it in detail, not just a link to a huge Wikipedia article, because:

1. Anyone can post to Wikipedia; maybe you manipulated it with your own claims.

2. The page is dozens of pages long... if you found a relevant point (other than one drive per channel), tell us where it is, in detail.

 

Again, one drive being able to read 600 MB/s per port does not mean the controller has a bus limit of multiple times that across its ports. If you believe that, then you are a fool.

 

Ludicrous claims that have nothing to do with the SATA spec!

Proof? I already showed you the "original" specs that clearly state the 600 MB/s bus throughput limit.


  • 1 month later...

What you keep repeating is the SATA bus from disk to controller; in other words, a point-to-point transfer protocol, as stated in the SATA 3 specification.

 

What you fail to see is that on all those controllers, even motherboards, you have multiple SATA 3 connections, each connection capable of full SATA 3 speed, as officially specified.

 

So, what is holding you (or us) back, in theory?

 

1) The speed of the controller's PCIe connection (more lanes = more bandwidth; higher PCIe generation = higher bandwidth)

2) The raw speed of your hard disks/SSDs

 

Other factors can be compatibility and quality of the firmware/drivers.
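As a rough sketch of point 1, assuming PCIe 2.0 and its commonly cited ~500 MB/s of usable bandwidth per lane per direction (after 8b/10b overhead):

```python
# Compare a RAID card's PCIe uplink against single SATA III ports.
# Assumes PCIe 2.0: ~500 MB/s usable payload per lane, per direction.

PCIE2_MBS_PER_LANE = 500   # approx. usable MB/s per PCIe 2.0 lane
SATA3_MBS = 600            # usable MB/s of one SATA III link

for lanes in (1, 4, 8):
    uplink = lanes * PCIE2_MBS_PER_LANE
    print(f"PCIe 2.0 x{lanes}: ~{uplink} MB/s "
          f"(~{uplink / SATA3_MBS:.1f} SATA III ports' worth)")
```

An x1 uplink cannot even keep up with one saturated SATA III SSD, which is why fast multi-port cards use x4 or x8 connections.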

 

 

Read this:

The Adaptec RAID Series 5 family utilizes an industry-leading dual-core RAID-on-Chip (ROC), 512 MB of DDR2 write cache (the ASR-5405 has 256 MB of DDR2 memory), and connectivity with the latest x8 PCI Express to deliver over 250,000 I/Os at 1.2 GB/s.

 

Can you read? Nope, it is not 6 Gbit total for the controller to access the motherboard; it is 1.2 gigaBYTES per second!

 

That is the bus that matters most, because of course, with more hard disks/SSDs attached to the controller via SATA 3 cables ("bus"), the controller needs faster access, and thus bus speed, to the main system. "They" do that by widening the PCIe connection.

PCIe x1 is too slow for high-end controllers or even FW controllers, so they use more lanes to give the controller the bandwidth it needs.

 

Who is stupid and in denial?

 

PS: not much more than a simple firmware RAID controller: http://thessdreview.com/our-reviews/highpoint-2720sgl-rocketraid-controller-review-amazing-3gbs-recorded-with-8-Crucial-c400-ssds/

and see what this thing delivers. Not so special, huh? Nice; you immediately see the impact of a PCIe x1 vs PCIe x8 connection ;)

 

You know what I think? We pay way too much for a tech product that was developed many years ago. Pity there are no attractive consumer products with cache at a nice price, because nowadays it is mostly the cache that really makes the difference, even more so with hard disks and their bad small-file performance.


I have 2 840 Pros in stripe,

and when I game, I can tell apart those who are on a 5400 rpm drive, a 7200 rpm drive,

those with an SSD, and those who have striped SSDs; it is a visible difference in performance.

 

If you think there is no difference or it's unnecessary,

good for you; I am happy you can justify whatever you want to yourself.

 

As for which type of RAID (FW, SW, whatever), it all makes a difference; get whatever you can afford.

If it comes with your mobo, hey... bonus.

If you need an add-in... go get one.

If you are a beginner and are happier with a GUI... start with that one and be more comfortable with your system.


Archived

This topic is now archived and is closed to further replies.
