Originally Posted by Tex1954
I want to know the real clock speeds and such. 128 threads at 1GHz is not as good as 64 @ 3GHz.... etc...
Having said that, VERY INTERESTING and I wonder if the pricing will be somewhere in a range normal folks can afford...
While I understand the sentiment, it really just boils down to a question of workload. Certainly in some cases you are entirely right, which is why Intel has the Xeon W series processors in its product line. However, there are many situations in the data center where the opposite holds true.
We all ruefully joked for years about 'moar coars' from AMD, but in the data center market it is a very compelling argument for databases, document stores, and web servers. Put simply, if you have a high volume of simple tasks to complete, many slow cores are actually far better than fewer, faster cores. If you don't want to take AMD's word for it after being burned so many times, Intel's Xeon D series is a perfect example.
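The intuition is that these workloads are mostly waiting on memory and I/O, so a higher clock only shrinks the small compute portion of each request. Here is a minimal sketch of that trade-off, with entirely hypothetical latency numbers for an I/O-heavy query:

```python
def requests_per_second(cores: int, clock_ghz: float,
                        compute_ms_at_1ghz: float, wait_ms: float) -> float:
    """Aggregate throughput for independent requests spread across cores.

    Per-request latency = compute time (shrinks with clock) + I/O wait
    (does not shrink with clock).
    """
    latency_ms = compute_ms_at_1ghz / clock_ghz + wait_ms
    return cores * 1000.0 / latency_ms

# Hypothetical query: 1 ms of compute (measured at 1 GHz) plus 9 ms
# of waiting on disk/network, which the clock speed cannot help with.
many_slow = requests_per_second(cores=128, clock_ghz=1.0,
                                compute_ms_at_1ghz=1.0, wait_ms=9.0)
few_fast = requests_per_second(cores=64, clock_ghz=3.0,
                               compute_ms_at_1ghz=1.0, wait_ms=9.0)

print(f"128 cores @ 1 GHz: {many_slow:,.0f} req/s")  # 12,800 req/s
print(f" 64 cores @ 3 GHz: {few_fast:,.0f} req/s")   # ~6,857 req/s
```

With these assumed numbers the 128 slow cores push nearly twice the throughput, because tripling the clock only trims 1 ms off a 10 ms request while halving the core count halves the concurrency. Flip the ratio (compute-heavy, little waiting) and the fast cores win, which is exactly the workload question above.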
Originally Posted by SystemTech
So a maxxed out config :
2 X CPU's totalling 256 threads/128 cores
2TB RAM: 64GB chips in 32 slots
Lots and lots of IO
Haven't 128GB ECC sticks been around for a while? They really didn't talk about it much in the video, but I kinda get the impression we're closer to 256GB per DIMM (even if it's not possible yet) than 64GB on the maxed-out config.
Originally Posted by nakano2k1
Quick question, is the interconnect that is being used by Radeon GPUs to eliminate the need for crossfire cables similar to infinity fabric? Or is it just a totally different concept?
I don't think they are doing anything special; I believe they are just communicating over PCIe.