
Samsung 950 pro 512 GB vs Intel 750 400 GB - Page 5

post #41 of 71
Quote:
Originally Posted by epic1337 View Post

a corporation with high-end equipment won't be relying on SSDs for its workload. SSDs are fragile to an extent; a server doing continuous writes would burn through their endurance within a few short years.
such high-end setups would instead use multiple arrays of 32-slot FB-DIMM compatible boards, with huge power backups to sustain their racks for reliability and sustained work even without power from the grid.

I wish you were right in all cases because I'd be using it. Unfortunately sometimes corporate IT takes a penny-wise pound-foolish approach, or standardizes one "workstation" spec and expects everyone to live with it, regardless of the strengths of a given software approach. (In case work is reading: I get it - they can't give everyone every toy they want, and we engineers can be greedy, but it is disheartening sometimes.) We've only recently gotten a lot more stream processors online for time-domain tools that can really take advantage of them.

It'll happen, but outside of a few rabidly technocentric companies, IT departments can be kind of slow to react. I'd be happy using the SSD for the *live* storage space while simulating - then would move it off to safer, backed-up HDD storage once done. I'm after solution speed, not long-term storage on SSD. Right now I have RAM...and SATA III HDDs.
post #42 of 71
in that case, a penny-pinching corporation would resort to using reliable "budget" SSDs in a redundant RAID array.
it works reasonably well: very wasteful (lots of SSDs destroyed) but cost effective in the sense that the up-front investment is minimal, though the cost of operation is a bit higher.
look at Backblaze, they're one nasty company that runs bulk loads of budget hard disks in poor environments, they murder hard disks every day into huge piles of corpses.

in that regard, why bother with an expensive SSD when you can stack a bunch of cheaper SSDs?
or even invest a bit more in a much faster RAM array that has near-infinite write endurance.
Edited by epic1337 - 10/23/15 at 12:24pm
post #43 of 71
Sigh. For someone with a signature that's basically a warning against trolling...you're sure good at telling people "truths" that aren't true.

RAID isn't about speed, at least not in the sense of large-scale EDA simulation speed; it's about data availability. It'll saturate a bus better...but won't leapfrog an array of SATA or SAS drives up to PCIe x4 speeds.

And several 'bargain' SSDs aren't cheaper than one high-end drive, nor is either option cheaper than "just pick a standard Z420 workstation configuration off HP's website and negotiate purchasing of it exactly as is for all engineers" because of "uniformity of support".

Ditto RAM arrays. I'm talking about the workstations distributed to thousands of users. Not some central server farm. (We've tried that approach too by the way - doesn't work when some codes leverage stream processing, some don't, some like hyperthreading, some issue so many instructions the actual and virtual CPU instruction paths get in each other's way, and all of them have to be accessed by the user via a corporate network with network storage assets that throw their own latencies into the simulations...Citrix is the bane of my existence). In rare cases specific programs in specific compute labs can buy better systems but even then, the same "why isn't 'X' good enough for you?" inertia exists.

Please stop telling me what IT decisions all corporations make when I spend 50+ hours a week staring at evidence that you don't know what you're talking about, at least not for all corporations.

But regardless...we've let this tangent hijack the thread long enough. I'm looking forward to more info on the Sammy 950 Pro in the long run, but my next personal build already has an Intel 750 awaiting it.
post #44 of 71
Quote:
Originally Posted by rtrski View Post

RAID isn't about speed, at least not in the sense of large-scale EDA simulation speed; it's about data availability. It'll saturate a bus better...but won't leapfrog an array of SATA or SAS drives up to PCIe x4 speeds.
the RAID isn't meant for speed, it's there to provide fail-safe redundancy, with some additional speed as a byproduct. no sane company would build a RAID array just for its speed.
RAID 50 or RAID 60 gives the most cost-effective array while still reaching the peak possible speed for SATA III SSDs, yet it can tolerate one (RAID 50) or two (RAID 60) SSD failures per sub-array of the nested RAID.
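A quick back-of-the-envelope sketch of how those two nested levels compare; the drive count, sub-array split, and drive size below are assumed example values, not figures from this thread:

```python
# Rough sketch comparing nested RAID layouts over the same SATA SSDs.
# Drive count, sub-array count, and drive size are assumed example values.

def nested_raid(drives, subarrays, parity_per_subarray, drive_gb):
    """Return (usable_gb, guaranteed_failures_per_subarray) for RAID 50/60."""
    per_group = drives // subarrays                      # SSDs per RAID 5/6 group
    usable = (per_group - parity_per_subarray) * subarrays * drive_gb
    # Guaranteed survivable failures = parity drives in any single group;
    # more failures are survivable if they land in different groups.
    return usable, parity_per_subarray

for name, parity in (("RAID 50", 1), ("RAID 60", 2)):
    usable, survives = nested_raid(drives=8, subarrays=2,
                                   parity_per_subarray=parity, drive_gb=500)
    print(f"{name}: {usable} GB usable, survives {survives} failure(s) per sub-array")
```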
Quote:
Originally Posted by rtrski View Post

Ditto RAM arrays. I'm talking about the workstations distributed to thousands of users. Not some central server farm. (We've tried that approach too by the way - doesn't work when some codes leverage stream processing, some don't, some like hyperthreading, some issue so many instructions the actual and virtual CPU instruction paths get in each other's way, and all of them have to be accessed by the user via a corporate network with network storage assets that throw their own latencies into the simulations...Citrix is the bane of my existence). In rare cases specific programs in specific compute labs can buy better systems but even then, the same "why isn't 'X' good enough for you?" inertia exists.
that makes this sort of SSD more expensive, plus a stand-alone PCIe x4 SSD has no redundancy by itself.

high speed devices like this are more cost effective when pooled. you said "thousands of users", and that would mean "thousands of PCIe x4 SSDs" to fit each and every one of them.
on the other hand, a centralized SAN server on a 10gbps or faster backbone would work better to offload peak data throughput from the decentralized workstations.
furthermore, each decentralized workstation can be fitted with a cheaper SSD that has acceptable performance, which at least reduces latency issues for latency-sensitive tasks.
this balances peak/off-peak situations while saving huge amounts of cash, and still provides more than enough throughput to each user.
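For a rough feel of that shared-backbone trade-off, here's a small sketch; the 10gbps link is from the post above, but the concurrency figure and the usable-line-rate factor are assumptions:

```python
# Rough feel for the shared-backbone trade-off. The 10gbps link is quoted
# above; the concurrency figure and ~90% usable-line-rate factor are assumed.

backbone_gbps = 10
active_users  = 20                               # workstations hitting the SAN at once
usable_mb_s   = backbone_gbps * 1000 / 8 * 0.9   # rough usable MB/s after overhead

per_user_mb_s = usable_mb_s / active_users
print(f"Backbone usable: {usable_mb_s:.0f} MB/s")
print(f"Per user with {active_users} concurrent: {per_user_mb_s:.0f} MB/s "
      "(hence a cheap local SSD for the latency-sensitive bits)")
```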

for example, these M.2 SSDs cost 50 cents per GB at their cheapest, that's $500 for a 1TB drive.
compared to a 1TB budget drive (BX100?) which costs $300, you save $200 per drive.
sure it's 1/5th the speed, but you still save $200,000 by doing it on 1,000 machines, except for the most critical ones.
now use that $200,000 to build the most ridiculous SAN network you could think of that fits in that budget.
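The same savings math written out, if it helps; the $0.50/GB, $300 budget drive, and 1,000-machine figures are the ones quoted above, the rest is plain arithmetic:

```python
# The savings math above, written out. The $0.50/GB, $300 budget drive,
# and 1,000-machine figures are the ones quoted in this post.

machines     = 1000
nvme_per_gb  = 0.50            # $/GB for the M.2 NVMe option
budget_drive = 300             # $ for a 1TB budget SATA SSD (BX100-class)
drive_gb     = 1000

nvme_drive   = nvme_per_gb * drive_gb        # $500 per M.2 drive
saving_each  = nvme_drive - budget_drive     # $200 per machine
fleet_saving = saving_each * machines        # $200,000 across the fleet

print(f"Per-drive saving: ${saving_each:,.0f}")
print(f"Fleet saving across {machines} machines: ${fleet_saving:,.0f}")
```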

tl;dr = the whole point of what i've been saying is that you can compromise for cheaper yet still acceptable performance.
Edited by epic1337 - 10/23/15 at 1:13pm
post #45 of 71
Quote:
Originally Posted by rtrski View Post

RAID isn't about speed, at least not in the sense of large-scale EDA simulation speed; it's about data availability. It'll saturate a bus better...but won't leapfrog an array of SATA or SAS drives up to PCIe x4 speeds.
I think you have the wrong idea about RAID. Sure, 1/5/6 offer redundancy, but RAID 0 offers fully scaled random and sequential performance. Even RAID 1 offers this scaling in read performance. Dividing random IO across multiple devices lets you effectively double the IOPS capability, and also effectively halve the latency seen at QD=2. A typical RST RAID won't beat a 950 PRO because of bottlenecks at the chipset / DMI level (edit: it still scales IOPS very well up to the throughput limit, though), but a high-end RAID card full of SATA SSDs could beat one on raw throughput. That setup is a far cry from a nice tiny M.2 form factor device, though. :)
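Here's a toy model of that scaling using Little's Law (average latency ≈ queue depth / IOPS); the per-drive IOPS number is an assumed round figure, not a benchmark result:

```python
# Toy model of the IOPS/latency scaling using Little's Law
# (average latency ~= queue depth / IOPS). The per-drive IOPS number
# is an assumed round figure, not a benchmark result.

def effective_latency_ms(queue_depth, drives, iops_per_drive):
    """Average latency when random IO at `queue_depth` is spread evenly
    across `drives` identical devices striped together."""
    total_iops = drives * iops_per_drive        # RAID 0 scales random IOPS
    return queue_depth / total_iops * 1000.0    # Little's Law, in milliseconds

iops_per_drive = 90_000                         # assumed per-SSD 4K random read IOPS
for drives in (1, 2):
    lat = effective_latency_ms(queue_depth=2, drives=drives,
                               iops_per_drive=iops_per_drive)
    print(f"{drives} drive(s) at QD=2: {drives * iops_per_drive:,} IOPS, "
          f"{lat:.4f} ms average latency")
```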
post #46 of 71
Quote:
Originally Posted by malventano View Post

I think you have the wrong idea about RAID. Sure, 1/5/6 offer redundancy, but RAID 0 offers fully scaled random and sequential performance. Even RAID 1 offers this scaling in read performance. Dividing random IO across multiple devices lets you effectively double the IOPS capability, and also effectively halve the latency seen at QD=2. A typical RST RAID won't beat a 950 PRO because of bottlenecks at the chipset / DMI level (edit: it still scales IOPS very well up to the throughput limit, though), but a high-end RAID card full of SATA SSDs could beat one on raw throughput. That setup is a far cry from a nice tiny M.2 form factor device, though. :)

those are behemoths. i saw one x16 Gen 2.0 SAS/SATA RAID controller that's capable of up to 16 drives in whatever RAID you choose, i wonder how ridiculously fast a 16 x SSD RAID 0 would perform.
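For a quick estimate of where such a setup would top out, here's a sketch; the per-SSD throughput and the ~500 MB/s usable per Gen 2.0 lane are assumed ballpark figures:

```python
# Quick estimate of where a 16 x SATA SSD RAID 0 behind an x16 Gen 2.0
# controller would top out. Per-SSD throughput and ~500 MB/s usable per
# Gen 2.0 lane are assumed ballpark figures.

drives          = 16
ssd_mb_s        = 520                    # assumed sequential read per SATA III SSD
pcie2_lane_mb_s = 500                    # rough usable bandwidth per Gen 2.0 lane
lanes           = 16

raw_array = drives * ssd_mb_s            # what the SSDs could deliver together
host_link = lanes * pcie2_lane_mb_s      # what the x16 Gen 2.0 slot can carry

print(f"Array raw:  {raw_array:,} MB/s")
print(f"Host link:  {host_link:,} MB/s")
print(f"Ceiling:    {min(raw_array, host_link):,} MB/s (before controller overhead)")
```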
post #47 of 71
Quote:
Originally Posted by epic1337 View Post

those are behemoths. i saw one x16 Gen 2.0 SAS/SATA RAID controller that's capable of up to 16 drives in whatever RAID you choose, i wonder how ridiculously fast a 16 x SSD RAID 0 would perform.

It really does vary by RAID controller. Older ones were optimized for spinning disks and could barely beat a single X25-M on random IOPS (even with four SSDs connected to it). Newer cards work their firmware differently, but any of them that have an active DRAM cache tend to be slower than VCA-type controllers that simply pass requests along.
post #48 of 71
i figured as much, the one i saw was back in 200?-something, it had a DDR2 DRAM buffer which is very slow in itself.
it could probably pull off 2GB/s on a 16 x HDD array, but i doubt it could go much faster with SSDs instead.
post #49 of 71
Quote:
Originally Posted by malventano View Post

It really does vary by RAID controller. Older ones were optimized for spinning disks and could barely beat a single X25-M on random IOPS (even with four SSDs connected to it). Newer cards work their firmware differently, but any of them that have an active DRAM cache tend to be slower than VCA-type controllers that simply pass requests along.

The fastest and best RAID controller for SSDs right now is the Areca 1883ix. It has low latency and a fast DDR3 cache. Too bad SATA SSDs are so slow compared to PCIe NVMe SSDs. :)
post #50 of 71
Quote:
Originally Posted by Nizzen View Post

The fastest and best RAID controller for SSDs right now is the Areca 1883ix. It has low latency and a fast DDR3 cache. Too bad SATA SSDs are so slow compared to PCIe NVMe SSDs. :)

wait for NVMe to get more common, budget ones should pop up sooner or later.
the chances of M.2 RAID cards popping up as well are high, which means you could build a cost-effective high-speed RAID array for your own use.