Overclock.net › Forums › Components › Hard Drives & Storage › SSD › Samsung 950 pro 512 GB vs Intel 750 400 GB

Samsung 950 pro 512 GB vs Intel 750 400 GB - Page 4

post #31 of 71
The benchmark comparisons have to be taken with a grain of salt, more so than usual. It's apparently very hard to get clean head-to-head comparisons because of the quirks of NVMe drivers.
post #32 of 71
Quote:
Originally Posted by quipers View Post

The benchmark comparisons have to be taken with a grain of salt, more so than usual. It's apparently very hard to get clean head-to-head comparisons because of the quirks of NVMe drivers.

The driver is no different in principle from a GPU driver for a video card, except NVMe storage drivers generally stay the same from launch. I've seen no real 'quirks' with them. They are just more optimized for that particular SSD than the default Microsoft driver, and therefore enable higher speeds.
post #33 of 71
Quote:
Originally Posted by rtrski View Post

Nothing against Samsung (actually just picked up an 850 Pro 1TB for my general storage needs), but I picked the Intel 750 over the SM951 - or waiting for the 950 Pro - because of the PCIe slot implementation with a nice heat spreader vs. the M.2 slot.

Review sites are working these SSDs significantly harder than typical or even power user usage. We are hitting them with synthetic workloads that are easily generated, as compared to real use where you are moving (far less) real data around. Power users wouldn't normally have enough 'real work' for the SSD to cause it to actually heat up enough to throttle. I had to write over 100GB sequentially to a 950 Pro 512GB to get it to throttle with 0 airflow across it. No power user is going to have 100GB worth of data to be written at >1GB/s constantly to even put on this SSD in the first place, and even if they did, it was still writing at >1GB/s even when throttled.
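For anyone who wants to try reproducing that kind of throttle test themselves, here's a rough sketch - not malventano's actual methodology; the path, chunk size, and use of fsync are all assumptions:

```python
import os
import time

def sequential_write_throughput(path, total_bytes, chunk_bytes=64 * 1024 * 1024):
    """Write `total_bytes` of sequential data and return per-chunk MB/s.

    A sustained drop in the returned rates partway through suggests thermal
    throttling (assuming the drive, not the data source, is the bottleneck).
    """
    buf = os.urandom(chunk_bytes)  # incompressible data defeats any compression tricks
    rates = []
    with open(path, "wb", buffering=0) as f:
        written = 0
        while written < total_bytes:
            t0 = time.perf_counter()
            f.write(buf)
            os.fsync(f.fileno())  # push each chunk to the device before timing stops
            dt = time.perf_counter() - t0
            rates.append(chunk_bytes / dt / 1e6)  # MB/s
            written += chunk_bytes
    return rates

# Example (hypothetical mount point): write ~100 GiB and watch for a rate drop.
# rates = sequential_write_throughput("/mnt/nvme/testfile", 100 * 2**30)
```

Plotting the returned rates over time makes the throttle point, if any, obvious.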
post #34 of 71
Quote:
Originally Posted by malventano View Post

...Power users wouldn't normally have enough 'real work' for the SSD to cause it to actually heat up enough to throttle. I had to write over 100GB sequentially to a 950 Pro 512GB to get it to throttle with 0 airflow across it. No power user is going to have 100GB worth of data to be written at >1GB/s constantly to even put on this SSD in the first place, and even if they did, it was still writing at >1GB/s even when throttled.

Heh. Read up on finite element modeling - especially in the electromagnetic realm with products like Ansys HFSS. I regularly do need to write that much data, not just in generating final output files but in interim operations for temporary scratch files as well. Maximizing RAM is obviously the biggest knob in solution speed relative to # of unknowns (so is # of processing paths - core count), but after that, disk I/O is the next significant bottleneck. (And beyond that, networking, if you're trying to run a distributed solution on 8 workstations which each have 24 cores and 256GB RAM... Ethernet bites your solution speed in the tail hard. You need InfiniBand.)

Granted I'm building for home and gaming - not for work - so your point is valid for me; I just feel a bit more comfortable with the larger form factor. Same reason I'm overprovisioning on my CPU cooler relative to my actual desire to overclock.

I do intend to see if I can benchmark the speed difference between having design files and temp scratch space defined on a PCIEx4 vs. SAS drive...hope to use the data to convince work to make some more optimal purchase decisions for the workstation configurations we use there.
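A minimal sketch of that kind of scratch-space comparison - the mount points and workload shape here are hypothetical stand-ins, not an actual HFSS I/O trace:

```python
import os
import time

def scratch_benchmark(workdir, file_mb=256, passes=3):
    """Simulate solver scratch traffic: write a temp file, read it back,
    delete it, `passes` times. Returns total seconds elapsed.

    Run once per candidate drive (e.g. an NVMe mount vs. a SAS mount)
    and compare the timings.
    """
    buf = os.urandom(1024 * 1024)
    path = os.path.join(workdir, "scratch.tmp")
    t0 = time.perf_counter()
    for _ in range(passes):
        with open(path, "wb") as f:
            for _ in range(file_mb):
                f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # make sure the write actually hit the drive
        # Caveat: this read-back may be served from the page cache; use
        # files larger than RAM (or drop caches) for a fair read comparison.
        with open(path, "rb") as f:
            while f.read(1024 * 1024):
                pass
        os.remove(path)
    return time.perf_counter() - t0

# Example (hypothetical mounts):
# print(scratch_benchmark("/mnt/nvme"), scratch_benchmark("/mnt/sas"))
```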

The downside is that writing 200-300 GB of temporary files, several times, followed by a similar amount of final field and mesh data per simulation, may eat up the wear-leveling allocation in a hurry. For work I'm actually recommending these not as the OS/program drive for that reason, but as replaceable computation space. Of course, by the time I convince our IT department of the benefits, Intel's new thingie (3D XPoint) will be out, which - assuming it has speed approaching that of RAM, is nonvolatile, and doesn't have a wear date on it like NAND - will be a godsend.
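A quick back-of-the-envelope on that wear concern, using assumed figures: roughly three scratch passes of ~250 GB plus ~250 GB of final data per simulation (~1,000 GB written), two simulations per workday, against the 950 Pro 512GB's rated 400 TBW endurance:

```python
def drive_life_years(tbw_rating, gb_per_sim, sims_per_day, workdays_per_year=250):
    """Years until the drive's rated endurance (TBW) is consumed."""
    tb_per_year = gb_per_sim * sims_per_day * workdays_per_year / 1000
    return tbw_rating / tb_per_year

# ~1,000 GB/simulation, 2 simulations/workday, 400 TBW rating:
print(drive_life_years(400, 1000, 2))  # → 0.8 years
```

Under those (assumed) numbers the rating is gone in under a year, which is exactly why treating the drive as replaceable computation space rather than the OS drive makes sense.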
Edited by rtrski - 10/23/15 at 10:46am
post #36 of 71
Quote:
Originally Posted by rtrski View Post

false
Quote:
Originally Posted by malventano View Post

...Power users wouldn't normally have enough 'real work' for the SSD to cause it to actually heat up enough to throttle. I had to write over 100GB sequentially to a 950 Pro 512GB to get it to throttle with 0 airflow across it. No power user is going to have 100GB worth of data to be written at >1GB/s constantly to even put on this SSD in the first place, and even if they did, it was still writing at >1GB/s even when throttled.

Heh. Read up on finite element modeling - especially in the electromagnetic realm with products like Ansys HFSS. I regularly do need to write that much data, not just in generating final output files but in interim operations for temporary scratch files as well. Maximizing RAM is obviously the biggest knob in solution speed relative to # of unknowns (so is # of processing paths - core count), but after that, disk I/O is the next significant bottleneck. (And beyond that, networking, if you're trying to run a distributed solution on 8 workstations which each have 24 cores and 256GB RAM... Ethernet bites your solution speed in the tail hard. You need InfiniBand.)

Granted I'm building for home and gaming - not for work - so your point is valid for me; I just feel a bit more comfortable with the larger form factor. Same reason I'm overprovisioning on my CPU cooler relative to my actual desire to overclock.

I do intend to see if I can benchmark the speed difference between having design files and temp scratch space defined on a PCIEx4 vs. SAS drive...hope to use the data to convince work to make some more optimal purchase decisions for the workstation configurations we use there.

The downside is that writing 200-300 GB of temporary files, several times, followed by a similar amount of final field and mesh data per simulation, may eat up the wear-leveling allocation in a hurry. For work I'm actually recommending these not as the OS/program drive for that reason, but as replaceable computation space. Of course, by the time I convince our IT department of the benefits, Intel's new thingie (3D XPoint) will be out, which - assuming it has speed approaching that of RAM, is nonvolatile, and doesn't have a wear date on it like NAND - will be a godsend.

Price/GB just needs to get well below DDR4 for XPoint to be competitive. If those hexa-channel Skylake-E rumors are true, stuffing a 2P board with 24 sticks of DDR4 might be more practical.
post #37 of 71
For the typical home user, sure. For the corporate user doing high-end analysis, nonvolatile storage an order of magnitude faster than a PCIe x4 NAND drive can command a price premium, I bet - at least for a little while until adoption increases and supply stabilizes.

I don't recall PCIe-based RAM cards going much of anywhere, as another comparison.

p.s. Somehow I committed a cut/paste error trimming the quote from malventano in an earlier post, so it looks like I was snottily saying "false" at the top. I didn't notice until you quoted me. Sorry about that - sincerely not intended!
Edited by rtrski - 10/23/15 at 10:48am
post #38 of 71
Quote:
Originally Posted by danielhowk View Post

Does anyone know if the M.2 slot is faster, or whether it's the same with an M.2 Hyper Kit adapter in a PCIe slot, like graphics cards use?
Example https://www.google.com/search?q=asus+hyper+kit+for+samsung+950pro&source=lnms&tbm=isch&sa=X&ved=0CAkQ_AUoA2oVChMI_eqgzo3ZyAIVAgmOCh3PcgKB&biw=1920&bih=955#imgrc=IO2kb1glI_7-WM%3A

It's the same PCIe x4 link either way.
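A quick sanity check on why either route tops out the same - PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, so the usable ceiling for four lanes works out identically whether the drive sits in the M.2 socket or on a Hyper Kit adapter:

```python
def pcie_bandwidth_gbps(lanes, gt_per_s=8.0, encoding=128 / 130):
    """Usable one-direction bandwidth in GB/s for a PCIe 3.0 link."""
    # GT/s per lane, minus 128b/130b encoding overhead, divided by 8 bits/byte
    return lanes * gt_per_s * encoding / 8

print(round(pcie_bandwidth_gbps(4), 2))  # → 3.94 GB/s either way
```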
post #39 of 71
Quote:
Originally Posted by danielhowk View Post

Does anyone know if the M.2 slot is faster, or whether it's the same with an M.2 Hyper Kit adapter in a PCIe slot, like graphics cards use?
Example https://www.google.com/search?q=asus+hyper+kit+for+samsung+950pro&source=lnms&tbm=isch&sa=X&ved=0CAkQ_AUoA2oVChMI_eqgzo3ZyAIVAgmOCh3PcgKB&biw=1920&bih=955#imgrc=IO2kb1glI_7-WM%3A

Daniel:

I don't think I've seen anyone specifically indicate that using the M.2 slot vs. the PCIe slot provides more speed, aside from *if* thermal throttling happens (which malventano correctly points out is pretty unlikely), or if it's a motherboard issue related to latency - depending on whether CPU or chipset lanes are being allocated to the given interface, whether there's an extra layer of PCIe switching involved, etc.

Most of the motherboards I've been looking at to spec out my upcoming system (X99) indicate that one slot or another gets disabled when the M.2 slot is in use, which implies they share the same lanes and would perform the same with a device in either socket.
post #40 of 71
Quote:
Originally Posted by rtrski View Post

For the typical home user, sure. For the corporate user doing high end analysis - nonvolatility at an order of magnitude faster storage than PCIEx4 can command a price premium, I bet, at least for a little while until adoption increases and supply stabilizes.

I don't recall PCIE-based RAM cards going much of anywhere, as another comparison.

Corporate users with high-end equipment won't be relying on SSDs for that kind of workload. SSDs are fragile to an extent; a server doing continuous writes would burn through its rated life within a few short years.
That high-end equipment would instead use multiple arrays of 32-slot FB-DIMM-compatible boards, with huge power backups to sustain the racks for reliability and sustained work even without power from the grid.