Originally Posted by Particle
As a technology, NAND flash is never going to eliminate that bottleneck. There is too much latency for it to seriously compete with DRAM or SRAM. It's about a thousand times slower than DRAM to start returning data no matter its ultimate data rate.
Honestly, I think people in the ultra-enthusiast segment are just gonna get to the point where, instead of spending $500 on a ~512GB, ~3,000MB/s NAND-based SSD, they'll buy a kit of the cheapest 128GB DDR4 they can find, set up 16GB or 32GB of it as regular RAM (more than enough unless you're a Premiere Pro freak; for gaming and non-professional editing 32GB is PLENTY, really), then take the remaining 96GB to 112GB (depending on whether you kept 16GB or 32GB) and make a RAMDISK out of it.
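If you want to see what that actually buys you, here's a minimal sketch, assuming a Linux box with a tmpfs already mounted at /mnt/ramdisk (e.g. mount -t tmpfs -o size=96G tmpfs /mnt/ramdisk); the paths and sizes are just placeholders, not a recommendation:

    import os, time

    # Write then read size_mb of data at 'path' and report (write MB/s, read MB/s).
    def throughput_mb_s(path, size_mb=1024, block=4 * 1024 * 1024):
        data = os.urandom(block)
        blocks = (size_mb * 1024 * 1024) // block
        start = time.perf_counter()
        with open(path, "wb") as f:
            for _ in range(blocks):
                f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force the write out of the page cache
        write_s = time.perf_counter() - start
        start = time.perf_counter()
        with open(path, "rb") as f:
            while f.read(block):
                pass
        read_s = time.perf_counter() - start
        os.remove(path)
        return size_mb / write_s, size_mb / read_s

    # Hypothetical paths: one on the tmpfs mount, one on the SSD.
    print("ramdisk:", throughput_mb_s("/mnt/ramdisk/test.bin"))
    print("ssd:    ", throughput_mb_s("/home/you/test.bin"))

Note the SSD read pass can still be served from the page cache, so drop caches (or use a file much bigger than RAM) if you want an honest comparison; either way the latency gap, not just the throughput gap, is the point.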
I mean hell, if you really wanted performance and could pay the cost, you could always just get something like the Xeon E5-1660 v3 or E5-1680 v3, which are fully unlocked 8-core/16-thread chips, literally the same as the 5960X except with Xeon capability. Then get a more server-type board like the ASUS Z10PE-D8 WS that can handle ECC RAM and load up with a couple hundred GB of ECC (I know some Xeons can handle up to 768GB now, but I think the Haswell-EP and Broadwell-EP E5s are limited to around 256GB; IIRC only the E7 "EX" Xeons get that full 768GB, and only on dual-socket boards) and use most of that as a RAMDISK (that part is down to the OS, not the board).
Honestly, combining a ~100GB DDR4-3000 RAMDISK or something with a board that has M.2 + U.2, so you could run TWO of these new SM961 or 960 Pro drives with ~3,200MB/s sequential read and ~1,800MB/s write each, then RAID 0 them and end up with more like ~4,000MB/s read + ~2,500MB/s write, would still be insanely fast even though the true latency is nowhere near that of DRAM.
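Quick back-of-envelope on that RAID 0 estimate; the per-drive numbers are the ones above, and the striping efficiency factor is purely an assumption, since two NVMe drives in RAID 0 rarely scale to a clean 2x:

    # Per-drive numbers from above; the efficiency factor is just an assumption.
    def raid0_estimate(per_drive_mb_s, drives=2, efficiency=0.7):
        return per_drive_mb_s * drives * efficiency

    read_est = raid0_estimate(3200)   # ~4480 MB/s, in the ballpark of the ~4000 above
    write_est = raid0_estimate(1800)  # ~2520 MB/s, close to the ~2500 above
    print(f"RAID 0 estimate: ~{read_est:.0f} MB/s read, ~{write_est:.0f} MB/s write")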
And if you combine these kinds of storage strategies with the new technologies coming out in the next chipsets, like PCIe 4.0, NVLink, etc., you have a TON of potential. Imagine a system like this:
CPU: The new LGA 3647-ish Xeon Phi 72-core processor/co-processor (it can do both) with built-in MCDRAM. MCDRAM is like HBM in a way: a super-high-bandwidth memory (4x more bandwidth than DDR4 DRAM!!) that can be used as a sort of L4 cache on Knights Landing processors and co-processors like this. So not only do you have a monster 72-core CPU, you have insanely high-bandwidth MCDRAM as L4 cache.
GPU: Two-way SLI of GTX 1180 Ti with 8GB or 16GB of HBM2 at 1,024GB/s (1TB/s) bandwidth, running on x16 PCIe 4.0 lanes (double the bandwidth of x16 3.0 lanes; quick math on that after this list) with the new double-bandwidth HB SLI bridge, which runs at 650MHz versus the 450MHz of the fastest older SLI bridges, plus NVLink on top of that providing up to 10x the bandwidth of the PCIe bus.
RAM: Between 128GB and 256GB of hexa-channel (6-channel) memory [possibly even something like "DDR5-5400 C21"]
Storage: RAID 0 of Samsung 960 Pro 1TB PCIe NVMe SSDs (~4,000MB/s read + ~2,500MB/s write)
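On the PCIe point in the GPU line above, here's the quick math on what doubling the transfer rate does for an x16 slot (the formula is just lanes x GT/s x 128b/130b encoding efficiency, per direction):

    # Per-direction PCIe bandwidth: lanes x GT/s x 128b/130b encoding / 8 bits per byte.
    def pcie_gb_s(lanes, gt_s, encoding=128 / 130):
        return lanes * gt_s * encoding / 8

    print(f"PCIe 3.0 x16: ~{pcie_gb_s(16, 8):.1f} GB/s")   # ~15.8 GB/s
    print(f"PCIe 4.0 x16: ~{pcie_gb_s(16, 16):.1f} GB/s")  # ~31.5 GB/s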
Can you fathom the memory throughput possible on something like that? A 72-core CPU with monstrously high-bandwidth L4 cache, plus DDR5 5,000MHz+ RAM in hexa-channel providing roughly 5-6 times more bandwidth than today's traditional dual-channel DDR4 at ~2,800MHz, with ~100GB of that insane-bandwidth RAM running as a RAMDISK for storage. A multi-GPU setup with the double-bandwidth SLI bridge running at a 44% higher operating frequency on top of that, 1TB/s of VRAM bandwidth thanks to HBM2, up to 10x more overall throughput via NVLink, etc., and then a RAID 0 of the NVMe 1TB 960 Pro / SM961 SSDs. You're EASILY looking at 10-20 times higher bandwidth over current maximum capabilities! If not MORE!
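Quick sanity check on that "5-6 times" figure; theoretical peak bandwidth is just channels x transfer rate x 8 bytes per 64-bit channel, and hexa-channel DDR5-5400 is of course the hypothetical config from above, not a real part:

    # Theoretical peak = channels x transfer rate (MT/s) x 8 bytes per 64-bit channel.
    def mem_bw_gb_s(channels, mt_s):
        return channels * mt_s * 8 / 1000  # GB/s

    ddr4 = mem_bw_gb_s(2, 2800)  # dual-channel DDR4-2800   -> ~44.8 GB/s
    ddr5 = mem_bw_gb_s(6, 5400)  # hexa-channel "DDR5-5400" -> ~259.2 GB/s
    print(f"DDR4: {ddr4:.0f} GB/s, hypothetical DDR5: {ddr5:.0f} GB/s, ratio ~{ddr5 / ddr4:.1f}x")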
The future has potential....