
Nvme ssd raid0 worth it? (Threadripper)

#1 ·
Looks like AMD is going to release a free driver update in a few weeks which will enable bootable NVMe RAID 0 arrays.

I already have a 1TB Samsung 960 Pro as my main SSD for OS/games.

Will I notice any appreciable difference upgrading to 2x 960 Pros outside of benchmarks?

I really want to make use of all these PCIe lanes Threadripper provides, but I don't really see the need for a second GPU given that SLI seems to be falling out of favor.
 
#3 ·
I don't think so. Striping on SSDs doesn't make much sense to me, as we're already at the point of diminishing returns for most folks. SSDs are so noticeably faster than HDDs because of the improvement in access times/IOPS, not sustained transfer speeds (which is where RAID 0 really helps). For example, the move from SATA to NVMe wasn't a big noticeable jump for me. Sure, the benchmarks are great (roughly double or triple the scores), but I wouldn't be able to tell the difference between this NVMe drive and a regular SATA one in everyday desktop usage if I was asked to.

If you really hit the disks hard (with several VMs or something) and have a backup strategy in place (RAID-0 means one drive gone = all your data on the array is gone), then go for it. Otherwise I wouldn't bother.
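
Just to put some rough numbers on that, here's a toy model of a load that's mostly small random reads, comparing one drive against an idealized two-drive stripe. Every figure in it (file sizes, chunk size, latency, throughput) is an assumption for illustration, not a measurement.

Code:
# Toy model: 2 GB load, mostly small random reads plus one big sequential read.
small_files_gb = 1.5      # portion read as ~64 KB random chunks (assumed)
big_file_gb = 0.5         # portion read sequentially (assumed)
chunk_kb = 64
latency_us = 100          # assumed per-request NVMe latency at low queue depth
single_drive_gbps = 3.5   # 960 Pro class sequential read, GB/s
raid0_gbps = 7.0          # idealized 2-drive stripe

def load_time(seq_gbps):
    n_chunks = small_files_gb * 1024 * 1024 / chunk_kb
    random_part = n_chunks * latency_us / 1e6 + small_files_gb / seq_gbps
    sequential_part = big_file_gb / seq_gbps
    return random_part + sequential_part

print(f"single drive: {load_time(single_drive_gbps):.2f} s")  # ~3.0 s
print(f"raid 0:       {load_time(raid0_gbps):.2f} s")         # ~2.7 s

The latency-bound part doesn't shrink at all when you stripe, so doubling sequential throughput only trims the tail, which is why it never feels faster.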
 
#4 ·
There is no real point.

If anything, overhead will increase and access times might be slightly higher than without RAID.
At the very least you get longer boot times, since the array needs to be initialized.

Like tehmaggot said:

Real-world difference = 0.

The jump from HDD to SSD was the most notable because access times dropped enormously.
NVMe cuts the overhead of SATA and brings them down even further, though one would be hard pressed to feel a difference there.

Hate to say it,
but I never understood why gamers and normal users need all those PCIe lanes from HEDT platforms.

It's hard to saturate the lanes even with two cards in SLI and an NVMe drive going at the same time on a normal desktop system.

Using all the lanes on HEDT without any professional workloads running might be impossible.

Now, tuning the RAM speed (and maybe timings) would be worth it if you haven't done so.

If you're populating four or more memory banks, the achievable RAM speed drops by a lot,
and AMD's Infinity Fabric depends oh so much on RAM speed (the fabric clock is half the DDR4 data rate, a 2:1 ratio).

RAM speed also depends on the maturity of the BIOS as a whole:
a newer BIOS may mean faster RAM (= better performance).
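
For reference, here's what that ratio works out to at the usual DDR4 speed grades (a minimal sketch; on first-gen Ryzen/Threadripper the fabric clock tracks the memory clock, i.e. half the DDR4 transfer rate):

Code:
# Infinity Fabric clock vs. DDR4 speed grade (Zen 1: fabric clock = DDR rate / 2)
for ddr_rate in (2133, 2400, 2666, 2933, 3200):
    print(f"DDR4-{ddr_rate} -> fabric clock ~{ddr_rate // 2} MHz")

So going from DDR4-2133 to DDR4-3200 also takes the fabric from roughly 1066 MHz to 1600 MHz, which is where a lot of the "free" performance comes from.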
 
#7 ·
I just set up my new Ryzen build with my first NVMe SSD. I honestly don't see any difference from my old Intel SATA III SSD. I'd say just get a big 1TB SSD and call it a day.
 
#8 ·
Quote:
Originally Posted by gopackersjt View Post

I just set up my new Ryzen build with my first NVMe SSD. I honestly don't see any difference from my old Intel SATA III SSD. I'd say just get a big 1TB SSD and call it a day.
That is interesting, because Ryzen (AMD) has traditionally had slower I/O compared to Intel, and plenty of people on B350 and X370 have experienced poor write speeds on the Ryzen platform. From my further testing, ATTO seems to be the benchmark that comes closest to showing the advertised numbers. AS SSD and other benchmarks show slower-than-advertised speeds; the only exception I found has been Samsung drives, where the Samsung driver doesn't seem to affect the benchmarks.

I would like to see a screenshot of AS SSD or another benchmark next to the advertised speeds from your SSD manufacturer's website. I bet they are slower: close, but not quite what is advertised. I bet your writes in particular are much slower than you expect.
 
#9 ·
Quote:
Originally Posted by Jedson3614 View Post

That is interesting, because Ryzen (AMD) has traditionally had slower I/O compared to Intel, and plenty of people on B350 and X370 have experienced poor write speeds on the Ryzen platform. From my further testing, ATTO seems to be the benchmark that comes closest to showing the advertised numbers. AS SSD and other benchmarks show slower-than-advertised speeds; the only exception I found has been Samsung drives, where the Samsung driver doesn't seem to affect the benchmarks.

I would like to see a screenshot of AS SSD or another benchmark next to the advertised speeds from your SSD manufacturer's website. I bet they are slower: close, but not quite what is advertised. I bet your writes in particular are much slower than you expect.
I literally just got the system up and running late last night, and have only been in Windows for about an hour (spent most of the time dialing in my overclock). That being said, I'll run a few benchmarks tonight for you guys showing my SSD speeds. I bought an ASRock Killer X370 SLI/ac because I really liked the M.2 placement, which helps avoid thermal throttling.

I bought this model:
PNY CS2030 240GB
 
#10 ·
Damn, I was kind of looking forward to RAIDing another Samsung 960 Pro, but if the benefits are negligible...

So what should I do to make use of the extra PCIe lanes?
 
#11 ·
Here are my scores. This is my old Intel SSD vs my current NVMe drive. The benchmarks show a huge difference, but I'm not noticing any crazy difference in use. It's nowhere near the jump from a regular HDD to a SATA III SSD. Speed won't be the upgrade, but it was nice adding storage without needing to manage any more cables. I'd almost consider multiple NVMe drives just for that reason. This is with an R7 1700 @ 3.8GHz and memory @ 3066MHz on the latest BIOS for this board.

NVMe: (benchmark screenshot)

SATA III: (benchmark screenshot)
 
#15 ·
If we have to fill up those extra lanes, I suppose it comes down to other add-on cards, for example sound and network. I don't see anything specified on your rig for either of these, so perhaps upgrades there may be in order? Sound cards don't need much bandwidth, but they'll occupy a slot and can be a noticeable improvement over onboard audio if you have decent output gear.

Depending on what you do on your system, upgrades for these could be worth looking into. I personally would like to set up network bonding on my system (sig rig is way out of date, I haven't bothered to update it in a while) once I move into my new place, and a couple of 10Gbit NICs would make a very nice addition and actually make some quantifiable use of the lanes. That would require the appropriate gear across the rest of the network, though, and is a decent undertaking.

Edit:
Also, maybe more GPU(s) for mining, if it's still profitable and your power/noise envelope allows? Make a little extra scratch off your system
 
#16 ·
Well, I am constantly dealing with very large files, around 100-120GB each. If I'm idle for a few minutes, the Windows 10 pagefile often offloads the 100-120GB files I'm working on to my Samsung 960 SSD. But I didn't want to wear out my SSD prematurely, so I ended up buying some HDDs, setting them up in RAID, and redirecting the pagefile to the HDDs instead. I also use the HDDs for long-term storage of the files.

But it takes a very long time to work on the files now. Loading them up from the HDDs takes ages, and whenever Windows 10 starts paging them out to the HDDs it makes a ton of noise and also takes forever.

I was thinking that if I purchased 2 more Samsung 960 Pros, I could RAID all 3 of them and just store my large files on the RAID 0 SSD array, as well as leaving the pagefile on the SSDs. Since the load would be spread between 3 SSDs, they should last at least a few years. Then I could return my HDDs and do away with hard drives forever.
 
#17 ·
You would only really benefit from a RAID 0 NVMe array if you were writing/working with uncompressed 4K+ video files. Of course, you would blow through the endurance in no time... Massive databases may also benefit. Basically, niche uses for a niche product.
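
For a sense of scale, here's the kind of write rate that implies. The format and frame rate below are assumptions for illustration, not a specific workflow:

Code:
# Rough data rate for uncompressed 4K video (assumed 16-bit RGB 4:4:4 @ 60 fps)
width, height = 3840, 2160
bytes_per_pixel = 3 * 2          # three 16-bit channels
fps = 60

frame_mb = width * height * bytes_per_pixel / 1e6
stream_gbs = frame_mb * fps / 1e3
print(f"~{frame_mb:.0f} MB per frame, ~{stream_gbs:.1f} GB/s sustained")  # ~50 MB, ~3.0 GB/s

That's already past the ~2.1 GB/s sequential write a single 960 Pro is rated for, which is about the only place striping two of them starts to earn its keep.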

Quote:
Originally Posted by WannaBeOCer View Post

X399 does not support NVMe FakeRAID. You will be spending $250 for an NVMe RAID card.
RAID through the motherboard is still RAID. There is nothing "fake" about it. At least with Intel it is; I don't think I've ever seen the AMD equivalent...
 
#19 ·
Quote:
Originally Posted by DVLux View Post

RAID through the motherboard is still RAID. There is nothing "fake" about it. At least with Intel it is; I don't think I've ever seen the AMD equivalent...
It's firmware RAID; 90% of consumer motherboards don't have a separate RAID controller.

FakeRAID (firmware RAID) on X399 does not support NVMe RAID. He will need a controller or will have to use software RAID.
 
#20 ·
Quote:
Originally Posted by happyluckbox View Post

I could RAID all 3 of them and just store my large files on the RAID 0 SSD array, as well as leaving the pagefile on the SSDs. Since the load would be spread between 3 SSDs, they should last at least a few years.
Well,

in theory, yes.
The expected lifetime of Samsung SSDs is already very high,
even under enterprise conditions.

However,
RAID 0 is a riskier proposition.

You could in theory RAID many drives together in mode 0, but if one drive fails it all goes belly up.

If you're thinking of putting several drives (more than 2) in a RAID, wouldn't RAID 5 be better?

If one drive fails, the array would still work instead of just failing.
In a RAID 5 array you're getting most of the read benefit of RAID 0 (writes pay a parity penalty) without taking the same risk,
and you still get what you are actually looking for:

distributing the write cycles among all drives.

The only drawback, if your board actually offers the option, would be a loss in capacity,
as the parity data needed to rebuild the array after a drive failure has to be spread across all drives.
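
A quick sketch of that capacity trade-off, assuming three hypothetical 1TB drives:

Code:
# Usable capacity: 3-drive RAID 0 vs RAID 5 (illustrative)
def raid0_capacity(n_drives, drive_tb):
    return n_drives * drive_tb          # all space usable, no redundancy

def raid5_capacity(n_drives, drive_tb):
    return (n_drives - 1) * drive_tb    # one drive's worth of space goes to parity

print(raid0_capacity(3, 1.0))  # 3.0 TB, but one failure loses everything
print(raid5_capacity(3, 1.0))  # 2.0 TB, survives a single drive failure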

All that being said,

you could also just point the temp folder of the apps you're using at one drive
and the page file at another.

It may not be perfect, but it would distribute the load across your drives better than leaving everything on the default C: drive.
 
#21 ·
How much life expectancy can be expected out of these drives? Some sources seem to say hundreds of TB of writes, while others say less than 100 TB.

With my current system only running a 240GB SSD, I am constantly downloading, uninstalling, and re-downloading games. Thanks to my gigabit internet I don't give it a second thought, and I think running only 8GB of RAM is also going to cause increased wear on my SSD.

I know this is a bit off topic, but I was also considering RAID 0 for my next build.
 
#23 ·
If you want RAID, my advice would be to buy a hardware RAID card. There are now M.2 RAID solutions; HighPoint has some as well. Intel also offers a RAID 0 PCIe x8 SSD, but it's very expensive.

Even better (IMO, because it gets rid of the need for RAID entirely), consider buying a PCIe x8 SSD. The Samsung PM1725 is an example. They are not cheap, but they do have the advantage of being native PCIe x8 rather than a RAID solution.

Quote:
Originally Posted by happyluckbox View Post

Well, I am constantly dealing with very large files, around 100-120GB each. If I'm idle for a few minutes, the Windows 10 pagefile often offloads the 100-120GB files I'm working on to my Samsung 960 SSD. But I didn't want to wear out my SSD prematurely, so I ended up buying some HDDs, setting them up in RAID, and redirecting the pagefile to the HDDs instead. I also use the HDDs for long-term storage of the files.

But it takes a very long time to work on the files now. Loading them up from the HDDs takes ages, and whenever Windows 10 starts paging them out to the HDDs it makes a ton of noise and also takes forever.

I was thinking that if I purchased 2 more Samsung 960 Pros, I could RAID all 3 of them and just store my large files on the RAID 0 SSD array, as well as leaving the pagefile on the SSDs. Since the load would be spread between 3 SSDs, they should last at least a few years. Then I could return my HDDs and do away with hard drives forever.
The large files might be able to take advantage of the performance of RAID 0. RAID 0's main advantage is in sequential performance; video editing is a good example.

Games mostly benefit from whatever has the highest random performance, and you won't see gains in Windows boot times either. For those, the only real step up is either 3D XPoint or a RAM disk, and both cost a lot more than NAND.

There's always the added risk of RAID, though: the array is more likely to fail than a single drive, and the RAID control mechanisms themselves sometimes cause problems.

See my response below on endurance.

Quote:
Originally Posted by jaredismee View Post

How much life expectancy can be expected out of these drives? Some sources seem to say hundreds of TB of writes, while others say less than 100 TB.

With my current system only running a 240GB SSD, I am constantly downloading, uninstalling, and re-downloading games. Thanks to my gigabit internet I don't give it a second thought, and I think running only 8GB of RAM is also going to cause increased wear on my SSD.

I know this is a bit off topic, but I was also considering RAID 0 for my next build.
Usually the SSDs will last for hundreds of TB of writes. The official warranty, however, covers only a small fraction of that: often less than 5% of the SSD's full life.

The Samsung SSD 840 Pro 256GB lasted for 2.4 PB (about 2,400 TB), for example:
http://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead

I'd expect an SSD 850 to last even longer. It uses the larger 40nm-class process in a 3D NAND configuration, and larger NAND has more P/E cycles, although there may be a penalty for V-NAND.
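
To put those endurance numbers in perspective, here's a back-of-envelope estimate. The warranty rating and daily write volume below are assumptions for illustration, not any specific drive's spec:

Code:
# Back-of-envelope SSD lifetime estimate (illustrative numbers)
rated_tbw = 400           # assumed warranty rating in TB written
observed_tbw = 2400       # the 840 Pro 256GB wrote ~2.4 PB in the TechReport test
daily_writes_gb = 100     # assumed fairly heavy desktop usage

years_to_rating = rated_tbw * 1000 / daily_writes_gb / 365
years_to_wearout = observed_tbw * 1000 / daily_writes_gb / 365
print(f"years to warranty rating: {years_to_rating:.0f}")    # ~11 years
print(f"years to observed wear-out: {years_to_wearout:.0f}")  # ~66 years

Even the warranty figure outlives the useful life of most builds at that write rate.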

If you are doing something that needs sequential performance, RAID might be worth it. If not, forget it.
 
#24 ·
Quick question: I've heard a lot about the DMI being the bottleneck once you go up to 2x Samsung 960 Pros. If I was interested in going with 3x 960 Pros, would I simply be bottlenecked by the DMI? I've heard that X399 motherboards for Threadripper don't utilize the DMI, so would this DMI bottleneck even be an issue if I were to go nuts with 3x 960 Pros?
 
#25 ·
Quote:
Originally Posted by happyluckbox View Post

Quick question: I've heard a lot about the DMI being the bottleneck once you go up to 2x Samsung 960 Pros. If I was interested in going with 3x 960 Pros, would I simply be bottlenecked by the DMI? I've heard that X399 motherboards for Threadripper don't utilize the DMI, so would this DMI bottleneck even be an issue if I were to go nuts with 3x 960 Pros?
Didn't you just answer your own question...?

If two M.2 drives nearly saturate the DMI uplink, why would you think three would be any different? Only Intel routes them that way, though.

Each of the M.2 slots on X399 has its own x4 uplink to the CPU.
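
To put rough numbers on it (approximate figures: a 960 Pro is rated around 3.5 GB/s sequential read, and DMI 3.0 is effectively a PCIe 3.0 x4 link at about 3.9 GB/s):

Code:
# Chipset (DMI 3.0) uplink vs. aggregate NVMe demand -- approximate figures
dmi3_gbs = 3.9            # DMI 3.0 ~ PCIe 3.0 x4 usable bandwidth, GB/s
ssd_read_gbs = 3.5        # 960 Pro rated sequential read, GB/s

for n in (1, 2, 3):
    demand = n * ssd_read_gbs
    verdict = "bottlenecked" if demand > dmi3_gbs else "fits"
    print(f"{n} drive(s): {demand:.1f} GB/s demand vs {dmi3_gbs} GB/s uplink -> {verdict}")

# On X399 each M.2 slot gets its own PCIe 3.0 x4 link straight to the CPU,
# so nothing is funneled through a single chipset link in the first place.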
 
#26 ·
Quote:
Originally Posted by WannaBeOCer View Post

It's firmware RAID; 90% of consumer motherboards don't have a separate RAID controller.

FakeRAID (firmware RAID) on X399 does not support NVMe RAID. He will need a controller or will have to use software RAID.
NVMe Raid will be supported after September 25th. Whether or not it makes a difference is a different story, but it will be officially supported.
 