We don't actually have any KNOWN facts to extrapolate from, besides the Zen chip finishing the render maybe 1 second before the Intel one in what looks like an approximately 50-second workload. How gimped was the quad-channel DDR4 memory system on the Broadwell-E system? Rendering workloads do take advantage of memory bandwidth. Lack of transparency in the controlled experiment should mean questions, not conclusions.
Ok then, for the sake of argument... let's say they both tied (in Blender).
And that at 3GHz they have roughly the same raw processing power and crunch. What would that mean/imply at 3.8GHz? 4GHz?
Less performance, or more, on both chips? How much more, or how well they scale, is to be determined. What's not is the giant leap forward that is Zen.
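To make the extrapolation concrete, here's a back-of-the-envelope sketch. The linear-with-clock assumption is mine and optimistic; real renders scale sub-linearly with frequency because memory doesn't speed up with the cores.

```python
# Naive clock-scaling estimate: assumes render throughput scales
# linearly with frequency, which is optimistic (memory and cache
# effects make real scaling sub-linear).
def scaled_time(base_time_s: float, base_ghz: float, target_ghz: float) -> float:
    """Estimate workload time at a new clock under a linear-scaling assumption."""
    return base_time_s * (base_ghz / target_ghz)

# If both chips finish a ~50 s Blender render at 3 GHz:
print(round(scaled_time(50.0, 3.0, 3.8), 1))  # 39.5 s at 3.8 GHz
print(round(scaled_time(50.0, 3.0, 4.0), 1))  # 37.5 s at 4.0 GHz
```

So if the two parts really are tied at 3GHz, they stay tied at any clock under this model; the question becomes purely which one clocks higher.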
We are, after all, talking about a 6900K ($1,100). You fail to recognize that Zen is hitting that high... and YOU DON'T EVEN KNOW what it will clock at, or what the wattage or thermals are going to be. But we do know it won't cost $1,100! Not even half that!
How can you be jaded by Zen's release?
Let's compare a 2012 CPU to 2015-2016 ones, that makes sense, right?
That's just straight out one of the least useful graphs I've ever seen.
The 6900K is roughly 50% faster than the 4790K at stock. If you downclocked it to 3GHz, the advantage should be around 30-40% for the 6900K. The 4790K is obviously faster with <5 threads in use.
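For what it's worth, a naive cores × clock model lands in the same ballpark. Equal IPC is assumed (Broadwell-E vs Haswell is close but not identical), the clocks are the published base clocks, and the model overstates the gap a bit because real multithreaded scaling is never perfect:

```python
# Naive multithreaded throughput model: cores x clock, equal IPC assumed.
def throughput(cores: int, ghz: float) -> float:
    return cores * ghz

i7_6900k_stock = throughput(8, 3.2)  # 8 cores at 3.2 GHz base
i7_4790k_stock = throughput(4, 4.0)  # 4 cores at 4.0 GHz base
i7_6900k_3ghz = throughput(8, 3.0)   # 6900K downclocked to 3 GHz

print(f"stock advantage: {i7_6900k_stock / i7_4790k_stock - 1:.0%}")          # 60%
print(f"at 3 GHz vs stock 4790K: {i7_6900k_3ghz / i7_4790k_stock - 1:.0%}")   # 50%
```

Knock 10-20 points off for imperfect thread scaling and you get roughly the 50% stock / 30-40% downclocked figures above.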
Right, given that^, look how much more robust Zen is compared to Haswell. It looks to compete with Kaby Lake.
Because the real comparison is not going to be at 3GHz, but 4GHz. And we have no reason to believe that Zen scales badly with clock speed.
AMD can win on price. 8-core AMD Zen @ $499 ..?
When, how fast, how much $.
Until then all this is just teasing, just like they are teasing with P10/11, which is almost nonexistent at retailers.
Not trying to be a downer or a fanboy, but someone posted this in the comments.
At this point, it's either the real deal at launch or nothing at all.
and we saw how that played out...
Intel initially had plans of going 6 and 8 core mainstream with Kaby Lake. Try $350.
Inner liberty can be judged by how often a person feels offended, for you can no more insult a mature man than you can paint the air. -Vernon Howard
True, it's more of a personal benchmark than a realistic expectation. To keep my 64GB way of life, quad channel helps keep me from using stupidly dense DIMMs. For normal workloads, quad channel is really not going to make any meaningful difference in usable bandwidth over dual.
Well, if you don't spread the RAM across all channels, quad channel is useless to begin with. What I've commonly seen is people putting 4x16GB in a quad-channel board, populating only 4 slots; if you've got to have 64GB, the board should get 8x8GB sticks, because the on-board bank scheduler is better than the depth scheduler in the DIMMs themselves. I've seen the difference on an LGA2011-v1 board with a 3960X on it; it's actually detectable. But quad channel DEFINITELY works better than dual when you have the same total RAM.
The underlying problem here is a fairly poorly known fact: even Win10 operates in 16GB program ranges, so having more than 32GB on a Win7/Win8.1 system makes no difference to the performance of a single task, and only a marginal one on Win10.
However, the server Zen hardware will be accessible to the public just like the C32 and G34 hardware was. It's just that the Opterons really weren't gaming chips because of their low clocks; down in the 2.2 to 2.8GHz range they couldn't offer AMD's high single-core clocks and functionality. I've got a twin-socket G34 and I've never found a task of any kind that would actually make it chug, even when I process 600-exposure composite astrophotography stacks in a program called PixInsight (which is excellently coded for multithreading; that sucker drives my 4930K harder than Folding@home pushed all the way up).
The flip side is that if AMD actually manages to drive dual-channel DDR4 at 4000, the chip may well be more rockin' than the i7-x7xx chips! We're also seeing rumors that Zen comes in an 8-core + 2-CU APU variant for workstation use, which, while not great for gaming, would definitely be a rocket of a chip if it can leverage the CUs for floating-point work.
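The bandwidth arithmetic is easy to check: peak theoretical DRAM bandwidth is channels × transfer rate × 8 bytes per 64-bit channel. The DDR4-4000 dual-channel figure is hypothetical, per the rumor above:

```python
# Peak theoretical DRAM bandwidth:
# channels x transfer rate (MT/s) x 8 bytes per 64-bit channel.
def peak_bw_gbs(channels: int, mt_per_s: int) -> float:
    return channels * mt_per_s * 8 / 1000  # MB/s -> GB/s

print(peak_bw_gbs(4, 2400))  # 76.8  (Broadwell-E quad channel, DDR4-2400)
print(peak_bw_gbs(2, 4000))  # 64.0  (rumored Zen dual channel, DDR4-4000)
print(peak_bw_gbs(2, 2400))  # 38.4  (mainstream i7 dual channel, DDR4-2400)
```

So dual-channel DDR4-4000 wouldn't quite match quad-channel DDR4-2400 (64 vs 76.8 GB/s), but it would leave the mainstream dual-channel parts far behind.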
Gotta wait and see.
Gonna be fun tho when Starship comes out the gate.