Originally Posted by the_beast;12098827
I'd be interested to know where you get the 73% of performance stat from - the latency alone will kill off performance much more than that, not to mention the bandwidth caps due to the PCIe buses.
That depends on the boards' capabilities.
I've seen that kind of virtual memory interface before, in a resort network our company managed.
It had 4 HP ProLiant G6 servers (each with 2x quad-core Conroe-based Xeons and 4GB of registered & FB DDR2-667), running RAID 0+1 arrays of 74GB Seagate Cheetah SCSI drives. The servers were wired together across about 1 km² of land with 8.4Gb/s optical links on PCI Express LAN cards.
It performed incredibly well. I don't know up to what level, but it was more than 10x faster than virtual memory using only the system drives.
All of it was wired up to a kind of datacenter holding databases of millions of customers and thousands of records of different parameters from all over the resort. The processing was done on the servers, then sent to the datacenter, which buffered all the data and spooled it to its internal RAID drives.
So yes, it works. Not at physical RAM level, but it sure works WAY better than regular virtual memory on a single system disk. And if you swap the HDDs for SSDs, performance will be better still.
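On the latency point: a quick back-of-envelope calculation shows why network-backed swap can beat a single system disk even with the round-trip overhead. The figures below (HDD seek time, link RTT, bandwidths) are illustrative assumptions I'm plugging in, not measurements from that setup:

```python
# Rough model: servicing one 4 KiB page fault = one round-trip latency
# plus the serial transfer time of the page over the backend's link.
# All latency/bandwidth numbers are assumed ballpark figures.

PAGE_BITS = 4096 * 8  # one 4 KiB page, in bits

def pages_per_second(latency_s, bandwidth_bps):
    """Worst-case serial page-fault service rate for a swap backend."""
    transfer_s = PAGE_BITS / bandwidth_bps
    return 1.0 / (latency_s + transfer_s)

# Assumptions: SCSI HDD random access ~8 ms; optical link RTT ~0.1 ms
# at 8.4 Gb/s; SATA-era SSD random read ~0.1 ms at ~2.4 Gb/s.
hdd = pages_per_second(8e-3, 1.2e9)   # swap on a single system disk
net = pages_per_second(1e-4, 8.4e9)   # swap on remote RAM over the link
ssd = pages_per_second(1e-4, 2.4e9)   # swap on a local SSD

print(f"HDD swap : {hdd:8.0f} pages/s")
print(f"Net swap : {net:8.0f} pages/s")
print(f"SSD swap : {ssd:8.0f} pages/s")
print(f"net vs hdd: ~{net / hdd:.0f}x")
```

With these (hypothetical) numbers, the network backend comes out dozens of times faster than a seeking HDD, because a sub-millisecond link round trip is tiny next to an 8 ms seek. That matches the ballpark of the >10x improvement we saw: disk seek latency, not link latency, is what kills single-disk swap.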