A lot of you guys seem to be looking at this wrong. These don't need to be faster than DRAM, they just need to be fast enough, and more importantly high capacity and cheap enough, to change the way we use DRAM.
The traditional paradigm was to read whatever data was needed off the disk and copy it into DRAM. It was then worked on in DRAM and kicked back out to make room for the next data coming off the disk. If the previous data was needed again, it had to be loaded from the disk all over again. This system works, but it isn't terribly efficient.
In the last few years, we have finally gotten enough DRAM that it is feasible to preload the majority of data that might be used for a particular program into DRAM and leave it there until that space is needed for something else or the program is exited.
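The payoff of keeping data resident can be shown with a toy simulation. This is a minimal sketch, not a model of any real OS page cache: it treats DRAM as an LRU cache of disk blocks, with a made-up workload that cycles over 100 blocks, and counts how often the disk gets hit.

```python
from collections import OrderedDict

def disk_reads(cache_size, accesses):
    """Count how many accesses miss an LRU cache and must go to disk."""
    cache = OrderedDict()
    misses = 0
    for block in accesses:
        if block in cache:
            cache.move_to_end(block)       # recently used data stays resident
        else:
            misses += 1                    # has to be (re)loaded from disk
            cache[block] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the least recently used block
    return misses

# A workload that cycles over 100 distinct blocks ten times.
workload = list(range(100)) * 10

print(disk_reads(cache_size=10, accesses=workload))   # small DRAM: every access reloads from disk
print(disk_reads(cache_size=100, accesses=workload))  # ample DRAM: each block read from disk once
```

With too little cache, the cyclic workload defeats LRU and every single access goes to disk; with enough DRAM to hold the working set, the disk is touched exactly once per block.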
This helps considerably, because we aren't wasting as much time reloading the same data over and over. But even if we got to the point where a system had enough DRAM to load the entire disk into it at boot (which, frankly, we will probably never get to), we would still hit the major bottleneck of disk read speed. Reading the entire disk into DRAM would take quite a long time, several minutes or even hours depending on the disk's size and speed, every single time the system was powered on. It would also leave all your data in a vulnerable state unless every change was mirrored back onto the disk, which would be an additional bottleneck.
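The "minutes or even hours" figure is easy to sanity-check with back-of-the-envelope arithmetic. The disk size and throughput numbers below are assumptions for illustration, not measurements:

```python
def load_time_minutes(size_gb, mb_per_s):
    """Time to stream an entire disk into DRAM at a given sequential throughput."""
    return size_gb * 1024 / mb_per_s / 60

disk_size_gb = 2000   # assume a 2 TB disk
hdd_mb_per_s = 150    # assumed typical HDD sequential throughput
ssd_mb_per_s = 3000   # assumed typical NVMe SSD sequential throughput

print(f"HDD: {load_time_minutes(disk_size_gb, hdd_mb_per_s):.0f} min")  # roughly 228 min
print(f"SSD: {load_time_minutes(disk_size_gb, ssd_mb_per_s):.0f} min")  # roughly 11 min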
Around the same time, we got SSDs, which also help considerably because they cut down on that major bottleneck. They can serve data into DRAM much quicker than HDDs can, but unfortunately they are still nowhere near fast enough to actually remove the bottleneck. They do, however, make pre-caching large amounts of data more feasible (relatively speaking; obviously caching the entire disk would still be absurd). It is a step in the right direction, but the bottleneck is still there, forcing us to flip-flop data between disk and DRAM constantly, which in turn forces us into needing larger and larger DRAM capacities for all that caching.
If Optane can give us a disk that is large and cheap enough to replace SSDs and HDDs, while living up to its claim of being within an order of magnitude of DRAM's speed, it could flip that whole scheme on its head.
SSDs, with latency roughly a hundred thousand times higher than DRAM's, aren't going to work for anything but the least demanding data-processing workloads (streaming audio or video up to a certain resolution, and working with text, is probably about it).
HDDs are even worse, at roughly four million times higher latency. They're basically completely useless without DRAM.
Optane at only 10 times the latency? Most daily tasks could be read and written directly from/to the disk, skipping DRAM entirely. Without having to expand DRAM capacities at all, we would suddenly have 3-4 times the effective DRAM capacity, because all that space currently taken up by low-priority latency-insensitive data would be freed up.
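To see what those multipliers mean in wall-clock terms, take the ratios above at face value and assume a DRAM access latency of 100 ns (an illustrative figure, not a measurement), then total up one million small dependent reads at each tier:

```python
def total_seconds(accesses, dram_ns, ratio):
    """Wall-clock time for a run of small dependent reads at a given latency multiplier."""
    return accesses * dram_ns * ratio * 1e-9

dram_ns = 100              # assumed DRAM access latency, for illustration
ratios = {                 # latency multipliers as stated above
    "DRAM": 1,
    "Optane": 10,
    "SSD": 100_000,
    "HDD": 4_000_000,
}

for tier, ratio in ratios.items():
    print(f"{tier:>6}: {total_seconds(1_000_000, dram_ns, ratio):,.1f} s")
```

Under these assumptions, the run finishes in 0.1 s from DRAM and 1 s from Optane, but takes hours from an SSD and days from an HDD, which is exactly why only Optane-class latency makes working directly from the disk plausible.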
This would compound with the fact that whenever data does have to move between DRAM and disk, the bottleneck would be lessened by several orders of magnitude, the same kind of jump we saw going from HDDs to SSDs, only bigger.
Furthermore, because less data actually needs to be cached in DRAM, we would hit that bottleneck far less often.
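Both effects show up in the classic average-access-time formula: hits are served at DRAM speed, misses at disk speed. The latencies below are assumed for illustration (DRAM at 100 ns, with the SSD and Optane multipliers used earlier):

```python
def avg_access_ns(hit_rate, dram_ns, disk_ns):
    """Average access time: hits served from DRAM, misses from the backing disk."""
    return hit_rate * dram_ns + (1 - hit_rate) * disk_ns

dram_ns = 100  # assumed DRAM latency, for illustration
for name, disk_ns in [("SSD", 100_000 * dram_ns), ("Optane", 10 * dram_ns)]:
    for hit_rate in (0.90, 0.99):
        t = avg_access_ns(hit_rate, dram_ns, disk_ns)
        print(f"{name}, {hit_rate:.0%} hit rate: {t:,.0f} ns average")
```

With an SSD behind DRAM, even a 99% hit rate leaves the average access about a thousand times slower than DRAM; with Optane behind it, a modest 90% hit rate already keeps the average within a couple of multiples of DRAM speed.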
So you see, Optane doesn't need to replace DRAM to be a game changer; it just needs to be fast enough to start synergizing with DRAM instead of holding it back the way current disk technology does.