Well, that was quite easy to test. A single-threaded application will never exceed the peak transfer rate of a single DIMM. Multi-threaded ones, though, should be able to use as much bandwidth as the CPU can handle.
Unfortunately, my CPU isn't fast enough for memory to become the bottleneck. Otherwise, I would have liked to test how close each program can get to the theoretical peak memory bandwidth, and how they perform when they have to use multiple channels of memory (under a multi-threaded load).
I have also added a section covering the different bottlenecks a RAMDisk can encounter. For example, the PC ASUS tested their RAMDisk on probably had 1600MHz memory, which is why they got around 12GB/s as the top read speed with CrystalDiskMark.
You would think they would know better and use higher frequency memory.
Bottlenecks

CPU: In most cases, memory bandwidth will be limited by the speed of the CPU. There is no easy way to tell how much bandwidth the processor can handle; the only way to know for sure is to test it with a benchmark. That is because, even at identical clocks, this value varies with the processor architecture and memory controller, but as a general rule of thumb, higher frequencies result in more memory bandwidth.
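If you want a rough number for your own machine without installing a dedicated benchmark, something as simple as timing a large buffer copy gets you in the ballpark. A minimal Python sketch (the function name and buffer size are my own choices, and an interpreted language will understate what a tuned native benchmark reports):

```python
import time

def copy_bandwidth_gb_s(size_mb=256, repeats=5):
    # Rough estimate: time copying a large buffer. The copy reads
    # size_mb and writes size_mb, so 2 * size_mb MB are touched.
    buf = bytearray(size_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        copy = bytes(buf)  # forces a full read + write of the buffer
        best = min(best, time.perf_counter() - start)
        del copy
    return (2 * size_mb / 1024) / best  # GB touched per second

print(f"approx. copy bandwidth: {copy_bandwidth_gb_s():.1f} GB/s")
```

Taking the best of several repeats filters out one-off stalls from the OS or other processes, which is why the sketch reports the minimum time rather than the average.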
RAM: Unlike with CPUs, it's very easy to determine the maximum memory bandwidth of DDR2/3. Simply multiply the memory frequency by 8 (since DDR2/3 transfers data over a 64-bit-wide bus, i.e. 8 bytes per transfer) to get the peak transfer rate in MB/s. For single-threaded applications this will be your peak bandwidth. To determine the peak bandwidth for multi-threaded applications, multiply the peak transfer rate by the number of memory channels your CPU supports (either 2, 3 or 4).
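That arithmetic can be written down in a couple of lines (the function name is mine; the numbers follow directly from the rule above):

```python
def ddr_peak_mb_s(effective_mhz, channels=1):
    # DDR2/3 moves 8 bytes per transfer (64-bit bus), and the
    # "frequency" quoted for DDR memory is already the effective
    # transfer rate, so peak MB/s = rate * 8 bytes * channels.
    return effective_mhz * 8 * channels

print(ddr_peak_mb_s(1600))     # 1600MHz DDR3, single channel -> 12800 MB/s
print(ddr_peak_mb_s(1600, 2))  # same memory, dual channel -> 25600 MB/s
```

Note that 12800 MB/s is the same figure quoted below as roughly 12.5GB/s (dividing by 1024 to convert MB to GB).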
Now compare that value with the one you got for your CPU. If they are very close then your memory is likely limiting your memory bandwidth, otherwise your CPU is the bottleneck. This is useful information to have when trying to determine which product to buy to optimize your memory bandwidth. It can also help to get a better understanding of why Anvil's, ATTO and CrystalDiskMark report different results in certain cases.
ATTO and CrystalDiskMark use a single thread to determine a drive's performance. So, for example, with 1600MHz DDR3 those tools will never report speeds higher than 12.5GB/s for a RAMDisk. On the other hand, Anvil's uses 4 threads for tests with a queue depth of 4, and 16 threads for those with a queue depth of 16, and therefore could report higher speeds in those tests.
I have read your thread with some interest and I have two primary questions. The first: would there be a performance increase running a benchmark program such as 3D Mark 11 from a RAMDisk? The second comes from your most recent post concerning single-threaded versus multi-threaded applications: does that mean that the AMD processors would make more efficient use of a RAMDisk because of their better multi-threading abilities?
Originally Posted by Roulette Run
would there be a performance increase running a benchmark program such as 3D Mark 11 from a RAMDisk?
That depends on how the program runs. If the benchmark loads itself into memory before starting, there shouldn't be any gains, but if it reads heavily from the drive, there might be some.
You can run Resource Monitor; under Memory there's a Hard Faults/sec column, which will tell you how often the program reads from disk.
Originally Posted by Roulette Run
does that mean that the AMD processors would make more efficient use of RAMDisk because of their better multi-thread abilities?
Unlikely. It would require programs to be optimized for it and use more than one thread to read data from disk. As far as I know, most programs do not, and I don't see that changing in the near future.
Thank you for an informative post with tremendous potential to illuminate the future complexity of storage management. Already Google, Facebook and others have established file systems that span the internet, creating a vast virtual hard disk. Clearly, caching algorithms play a significant role, as speedy local storage has to be invisible to the great and vast internet repository serving their web apps. RAMDisks and local permanent storage are an interesting model for the problems that are intrinsic to this 'multi-lithic' approach to storage.