Originally Posted by Aenra
Very sad to read this; they borked it for me..
My complaint thus far:
I'm not a big fan of the, er, internal binning situation going on since the first gen; it makes them a great deal of money, what with one wafer for everything, sure, but at my expense. Something similar to the old Black editions would be nice to have again: not many need 4895652 cores for mainstream usage, but everyone can benefit from higher clocks. Their current way of doing things eliminates that possibility. Lower core counts? You get the worst of the worst. Higher core counts is where the good silicon theoretically goes, except with 38467452 active cores you still don't get to see much of it in the end. So even if I were a well-off mainstream user willing to spend twice as much, I still couldn't get something with the clocks to show for it.. just (extra) cores I don't need.
And yes, a couple hundred MHz wouldn't change much. But even that little can have tangible benefits.. benefits that I'd actually pay for. A 15th or 16th core, though? Err, no benefit at all, if we're being honest. Again, this is the mainstream segment, right?
And now, on top of the above, no 16-core TR..
Sure, I get that too: they pushed 16 down to 'mainstream' (lol...), so "naturally" they now need a minimum of 24 for TR (given how the CCDs work). Yet again, and on top of the above mind you, money for nothing.
Sixteen cores with SMT was already way too much for a lot of people, but if you wanted the extra lanes (and I need them, not just want them) or the doubled memory bandwidth, you went for it. Now you have to pay even more, for yet more cores that, you guessed it, you don't really need.
* And btw, this time around? One I/O die for the memory, so until we get some data, I'm not even sure it's gonna be like before. It's easier to push memory when each die has a separate controller, harder when it's all in one; it might be we're moving to something a lot more similar to what we see on the Intel side of things (memory frequencies on mainstream vs. HEDT platforms).
I was honestly hoping it'd start from 16, the cherry on top being, as before, the quadruple memory channel support and the extra PCIe 4.0 lanes.
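For a rough sense of what that quad-channel support is worth, here's a back-of-envelope sketch of peak DDR4 bandwidth (Python is used purely for the arithmetic; DDR4-3200 is just an assumed example speed):

```python
# Back-of-envelope peak DDR4 bandwidth: transfer rate (MT/s) x 8 bytes
# per 64-bit channel x number of channels.
def ddr4_bandwidth_gbs(mt_per_s: int, channels: int) -> float:
    """Peak theoretical DDR4 bandwidth in GB/s."""
    return mt_per_s * 8 * channels / 1000

# DDR4-3200 as an example speed:
dual = ddr4_bandwidth_gbs(3200, 2)   # dual channel, mainstream (51.2 GB/s)
quad = ddr4_bandwidth_gbs(3200, 4)   # quad channel, Threadripper (102.4 GB/s)
print(f"dual: {dual:.1f} GB/s, quad: {quad:.1f} GB/s")
```

Same sticks, double the theoretical bandwidth, which is exactly the "doubled memory bandwidth" argument for going HEDT.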
W-T-F are they thinking.
(Again, rhetorical.. I know what, and why. Now your average stingy "creator" can get "moar cores" even cheaper, run them with two cheapish sticks on a $200 mobo, and "create" YouTube videos for his 21 total subscribers faster. A major victory, folks.)
Moving the 3900X to my S8. Such is life, lol
You have little to no understanding of power limits and boost/turbo if you think you aren't seeing clocks as high as they can go on just a few cores, regardless of the total core count of the CPU. An idle core draws under a watt, so you are not being power-limited. Look at the 2950X and the 3900X.
They sold the 1900X, give them time.
Since when? Got ANY data to back that up? There have been exactly two architectures (TR1 and TR2) that ever had separate memory controllers on one package that could also be overclocked, and they certainly do not perform as well as Intel's unified memory controllers, nor any better than their normal Ryzen counterparts. The Ryzen 3000 I/O die has already been shown to be much better than Ryzen 1000/2000's.
Originally Posted by DNMock
Honestly, the biggest reason I can see for upgrading in general is to get PCIe 4.0 for general use. There isn't much out there that most people use that can really stress a 7900X or Ryzen 1700 CPU.
PCIe 4.0 is actually one of the smallest reasons to upgrade, for a few reasons.
Hardware that CAN take advantage of 4.0 is currently limited to GPUs and SSDs. That's all well and good, but SLI and CrossFire are dead, so we don't need to run our GPUs in x4 slots on X570/Z390 boards, and GPUs don't need more than 3.0 x8 at the moment (or in the near future). PCIe 4.0 NVMe SSDs are as expensive as, and less practical than, Optane on 3.0.
The things you would want more lanes for are obvious: fitting more devices. But storage and network cards are still stuck on 2.0/3.0 (e.g., most common 2x10 Gbps cards are 2.0 x8, most 40 Gb cards are 3.0 x8), with no reasonable 4.0 option for us to buy. These cards eating so many (slower) lanes makes going X399 more practical than going 4.0.
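The lane math above checks out if you run the numbers. A minimal sketch, using the commonly cited approximate per-lane throughput figures after encoding overhead (the exact MB/s values are assumptions, rounded to the usual quoted numbers):

```python
# Commonly cited usable bandwidth per PCIe lane, in MB/s, after encoding
# overhead (8b/10b for 2.0; 128b/130b for 3.0 and 4.0). Approximate figures.
LANE_MBS = {"2.0": 500, "3.0": 985, "4.0": 1969}

def link_gbps(gen: str, lanes: int) -> float:
    """Usable throughput in Gbps for a PCIe link of a given generation and width."""
    return LANE_MBS[gen] * lanes * 8 / 1000

# A dual-port 10 GbE NIC needs ~20 Gbps of line rate; a 40 GbE port needs ~40.
print(f"2.0 x8: {link_gbps('2.0', 8):.0f} Gbps")  # ~32 Gbps: covers 2x10 GbE
print(f"3.0 x8: {link_gbps('3.0', 8):.0f} Gbps")  # ~63 Gbps: covers 40 GbE
```

Which is why those cards don't need 4.0; what they need is a platform with enough total lanes to host several of them at once.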
Nah, the biggest benefit of TR3 for those of us with TR1 is simple: massive single-thread performance improvements from both IPC and clocks, lower power, a better IMC, and fewer NUMA issues. Which is why we all want it to work in X399; the platform is already overkill.
Originally Posted by ozlay
For the 2000 series they saved the lowest-core-count chip for last, so I can see them doing it again for the 3000 series. Maybe we will see a 16c/32t part sometime next year?
It would be nice if they could make a low-power chip as well, for a low-power NVMe NAS. A Threadripper-E series?
You could just buy a SuperMicro NVMe chassis with a single 8-core Epyc chip, but what good is hyper-fast storage without the network to back it up?
Originally Posted by J7SC
I have not read anything about the following, so this is pure speculation, but maybe they'll have one or two SKUs (24c?) for the existing X399, and then a few more for the new TRX40/TRX80 chipsets?
The socket is the same, so the only things stopping you from putting bigger chips in X399 will be power-delivery limitations (the X399 Creation is laughing at that idea right now) or BIOS support limits.
X399 is very expensive, so I'd hope they update the boards.