Anyone reading this, be sure to check out the original thread: http://hardforum.com/showthread.php?p=1040106039&highlight=#post1040106039
@rui-no-onna: "1TB (32GB x 32) DDR3 1600 Registered ECC" From what I've seen on the overclock forums, it isn't necessary to have more than 64GB for a RAM disk system, since the typical user can run one on a 1P configuration. A clean install of Win8 Pro takes less than 30GB, at most 16GB is needed for system memory, and benchmark programs take up a negligible amount of space (rough budget below). The 1TB capacity build also comes with higher latency and lower clock speeds. How well do you think a capacity system would perform against a system with overclocked RAM?
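As a rough sanity check on the 64GB figure, here's the capacity budget those numbers imply; the 5GB benchmark allowance is my own placeholder, not a number from the thread:

    # Rough RAM disk capacity budget, using the figures quoted above.
    total_ram_gb = 64
    system_ram_gb = 16      # working memory reserved for the OS itself
    os_install_gb = 30      # clean Win8 Pro install, upper bound
    benchmark_gb = 5        # placeholder allowance for benchmark programs

    ram_disk_gb = total_ram_gb - system_ram_gb
    free_gb = ram_disk_gb - os_install_gb - benchmark_gb
    print(f"RAM disk size: {ram_disk_gb} GB")
    print(f"Headroom after OS + benchmarks: {free_gb} GB")

Even with a generous allowance for test files, 64GB leaves double-digit headroom, which is the argument for trading the 1TB registered ECC kit for less, faster memory.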
"infinite performance scaling on all applications" If you're referring to page file hits or read/writes from RAM to the HDD, yes, it would be zero. The IO performance isn't infinite in a RAM disk system, however--the fastest sequential read speeds in iometer I've seen were around 7GB/s. http://www.storagereview.com/patriot_memory_viper_xtreme_division_4_ddr3_ram_disk_review
Note that they didn't overclock the RAM to get those speeds, and the test used only 16GB. Do you have any links to the fastest RAM disk record to date?
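For perspective on why that 7GB/s result isn't higher, here's a back-of-the-envelope comparison against the raw bandwidth of the memory itself; the quad-channel count is an assumption for a typical LGA2011 setup:

    # DDR3-1600 theoretical bandwidth vs. the ~7 GB/s Iometer result above.
    # Quad-channel is assumed (typical LGA2011); adjust for other platforms.
    mt_per_s = 1600            # DDR3-1600: 1600 megatransfers/s
    bytes_per_transfer = 8     # 64-bit channel
    channels = 4
    theoretical_gb_s = mt_per_s * bytes_per_transfer * channels / 1000
    observed_gb_s = 7.0        # fastest sequential read from the storagereview test
    print(f"theoretical: {theoretical_gb_s:.1f} GB/s, observed: {observed_gb_s} GB/s")
    print(f"RAM disk software delivers ~{observed_gb_s / theoretical_gb_s:.0%} of raw bandwidth")

The RAM disk driver and filesystem stack eat most of the theoretical bandwidth, which is exactly why "infinite" is the wrong word.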
"Of course, you never turn off power either" RAM disk systems have the option of saving data to secondary media once it's booted, which still means that if the user did want to use it for more than confirming theories, the 71605 with sixteen drives would keep the rig faster in that aspect. In between boot and shutdown, the system would have no use for any kind of PCIe lanes from an internal IO standpoint. If the user wanted to get the data to the network, "100GbE Fiber" wouldn't be as good as Infiniband FDR's 13.64Gb/s rating. I haven't been able to find a single card anywhere that includes Infiniband EDR 12x; the best one I could find was the MCB194A-FCAT, which has two FDR ports, a total of 3.41GB/s. Two of the MCB194A-FCAT's would be able to output the write capacity of the RAM system, assuming that the write speeds would be in the 4GB/s range. Do you have any links to an Infiniband card conforming to the higher specs?
"4x Xeon E5-4600 series Sandy Bridge-EP 8C/16T in 4P board" You didn't mention which E5-4600 series CPU would be used, but a 4P board would outperform a 2P board anyway. In the original thread, AndyE: "I have seen many situations, where the move of the same IO config from a single LGA2011 CPU to a 2P to 4P system leads to a decline in the absolute performance of the solution." There are also no 4P boards that can overclock, to my knowledge. I've only seen one board from Supermicro, and one from Asus (supposedly) that can run higher RAM speeds than 1600MHz. Can you think of any 2P boards with higher RAM speed support?
"5.12TB ioDrive Octal + SATA III/SAS2 SSD (small payload for initial boot, basically only to transfer data from the non-bootable ioDrive to the RAM)" You didn't mention which SSD, how many SSDs, or controller would be used in this configuration. What do you mean by the small payload? Is this a command line utility mounted on the SSD that would initiate the transfer of the OS files from the Octal to the RAM? Was there a specific program you had in mind? On the Octal versus SSD array, from the original thread on hardforum: "you can't improve the speed of 4k qd1 or 512b qd1 with raid. the file is too small to split into chunks at that point, you can only improve the 4k at higher queue depths with raid. 4k qd1 can never be increased due to raid." -gjs278
I agree with gjs278 regarding the 4k capabilities of PCIe 2.0 cards, but I also agree with AndyE that the bottleneck would hurt performance, perhaps if it were saturated with 4k files or with the less frequent larger files. I haven't gotten any feedback on the last post, which is why I posted the link to the thread on the overclock forums. The link you posted earlier includes benchmarks showing a multi-drive SSD array's performance versus a single drive, and there are several articles on storagereview.com that cover RAID cards with at least sixteen drives. I mention in the original hardforum thread that fifteen drives on a RAID card is the maximum before the bandwidth limit of the x8 PCIe 3.0 slot is reached (worked out below), and that sixteen would only be needed if there is some RAID technicality that requires it. What is your input on Octal versus SSD-spam systems?
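Here's the lane-budget math behind that fifteen-drive figure; the ~500MB/s per-SSD number is an assumed real-world SATA III sequential rate:

    # PCIe 3.0 x8 bandwidth vs. a stack of SATA III SSDs.
    lanes = 8
    gts_per_lane = 8.0           # PCIe 3.0: 8 GT/s per lane
    encoding = 128 / 130         # 128b/130b line encoding
    slot_gb_s = lanes * gts_per_lane * encoding / 8
    ssd_gb_s = 0.5               # assumed real-world SATA III sequential rate

    max_drives = int(slot_gb_s // ssd_gb_s)
    print(f"x8 PCIe 3.0 budget: {slot_gb_s:.2f} GB/s -> {max_drives} SSDs before saturation")

The slot delivers roughly 7.9GB/s after encoding overhead, so the fifteenth drive is the last one that adds sequential throughput; a sixteenth only makes sense for a RAID-geometry reason, not a bandwidth one.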
"Real-world, even a 2-way RAID-0 of Samsung 840 PROs would net you very little performance improvement over a single SSD set-up when it comes to gaming and most consumer tasks." This was mentioned in the hardforum thread also--due to the way most consumer tasks are programed, they do not take full advantage of multi-processor or multi-core systems. This thread assumes that any program on the maximized IO system would be written to take advantage of the latest technology, instead of backwards-compatibility with 1P, single-thread systems. Does program optimization extend to arrays as well? Writing a program to be broken down so that it would perform better in a RAID would result in a faster program, and that any programs that do would be written more for a database environment. Do you have any links to benchmarks that tested a game on a multi-SSD RAID versus a single SSD?
"dedicated 16-port RAID card with BBU and 16x 256-512GB SSD RAID-0" Which controller? Which SSD? The battery backup (BBU) is more of a practical common-sense thing than something that would be chosen over a card with greater performance. Do you know if it's possible to mod any RAID controller to support a BBU?
"With that scenario, really, you just throw money at the problem" There's hundreds of different products and configurations to choose from, some of which are made for specific scenarios (i.e. complete lack of PCIe 3.0 in 8P configurations). DDN's solutions have higher IOPS than a single system, but is due more to the software than the hardware used; yet their systems cost millions of dollars, and lack the superior hardware of a single-unit configuration. DDN isn't scaling their systems as much as they theoretically could, and since it's all proprietary, the upper limit of their software's scaling is unknown. It's compounded by a lack of comparison on the performance of a single DDN unit versus the theoretical build. It could be that were the theoretical build scaled to the number of units in a DDN server array, it would perform better than the DDN multi-system, but the kind of software this would require is unknown, as is whether that software would perform better than what DDN hires people to optimize. There is not a direct relation between cost and performance (otherwise the Apple I system that went for $671k would be superior to most of the builds in the overclock forums).