Originally Posted by rluker5
That 25% came from the RAM latency in a leaked Zen 2 UserBenchmark run, where the latency was 15 or 20 ns higher than what that crappy Hynix RAM should spec at; my Kaby Lake laptop with very similar crappy Hynix runs 5 ns less. 15 ns / 60 ns = 0.25, and 20 / 80 = 0.25. A very rough approximation of a plausible range, but it's all I have heard so far.
I didn't cherry-pick; AnandTech was my first pick. It was a guess, and I was going by the average of averages and the minimums across all of the games at 720p. Guru3D was second, and there Ryzen looked worse for gaming. Maybe you have a better CPU-limited source (not 4K ultra, or 1080p ultra with a 470, so that the spread in per-CPU performance is larger than random testing noise) that isn't reviewing either of the compared CPUs.
And I'm trying to understand this better because I'm hoping for an upgrade, but I want my use case (gaming, web browsing, streaming, shopping, light office work, etc.) to justify it. That's all. I also plan on getting a single GPU from the next Nvidia 80 series when it comes out, if it lands a bit above 2080 Ti performance at about 200 W and $800. I'm still on Z97-era hardware, and I'd like something on the CPU side that could overpower 4K60 without running over 80°C on an AIO or a big air cooler. I have a concern about RAM latency because I've seen its effects, but you just dismiss them, even though the main reason the second Ryzen series was faster than the first was the latency improvement. My rig already games like a 7700K, and I don't feel like spending $1k on a new motherboard, CPU, and RAM for something barely above a sidegrade that will be hot and noisy unless I go for watercooling.
Not getting any new info so I guess I can wait like everyone else.
OK, now that I know where you got the information from, I can address it.
First, I'll post AdoredTV addressing it:
It's a good video, with an excellent explanation of caches at the start as well.
Now, on to the issue of it being a single channel of 4GB single-rank RAM clocked at 2666CL19, which would be glacial compared to my 3466CL14 in quad-channel SR sticks. If you think that won't add some latency, I don't know what you are thinking. Further, as shown, the cache behavior varies across the three examples, only one of which had tight latency values. That shows none of them are fully trustworthy, but two may simply have been outliers compared to the one bench that had tight latencies.
I already explained that 720p was rejected as unrealistic, because no one games at 720p anymore. Your adherence to a talking point Intel pushed to tech journalists, and your wanting to embellish the performance benefits by making the spread look larger than any real experience would show, is sad. Especially when I said that using a 2080 Ti to try to get the CPU to bottleneck at 1080p is the proper method: you want the most powerful GPU you can get to remove the GPU as the limit, so you can see how much the CPU is holding back performance. So why did you bring up a 470? For someone who said earlier that I needed to focus on real-world use rather than synthetics, that seems... yeah.
As for what you want, you just have to wait until June or July, when it will be tested in all scenarios, and you can make your choice on empirical data. Also, AMD graphics cards are loud, but the CPUs are not bad. And are you talking about rendering on the CPU or on the GPU? 4K60 is a GPU limit on anything below a 2080 Ti, not a CPU limit. As you increase resolution, you increase the GPU load, which decreases the relative load on the CPU. That is why, as you go to 1440p and 4K, you see the frame rates converge to within a frame or so of each other.
So just wait for reviews and make your choice on the hard data in front of you.
Here is what my latency is in that benchmark on a 1950X:
Here is a person with a similar setup, except running at 3.95GHz instead of 4.2GHz and with RAM at 3200CL14 (likely stock XMP), scoring 85.8ns latency:
Part of the difference, about 20ns, comes from setting up the interleaving of the RAM channels and ranks to lower latency; selecting channel interleaving, a 512B interleave size, etc. knocks off that amount. So that accounts for the difference between the chips most comparable to a 16-core mainstream part. If you then look at real first-word latency, 2666CL19 gives 14.25ns, against 8.82ns for 3200CL14 or 8.12ns for 3466CL14, and that doesn't even get into the other RAM timings that affect total memory latency.
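For reference, those first-word figures follow from the usual approximation (CL cycles at half the DDR transfer rate); the small deviations from the numbers I quoted come down to actual versus nominal memory clocks:

```python
def first_word_latency_ns(transfer_rate_mts: float, cl: int) -> float:
    """CAS first-word latency: CL clock cycles at half the DDR transfer rate."""
    clock_mhz = transfer_rate_mts / 2   # DDR moves two transfers per clock
    return cl / clock_mhz * 1000        # cycles / MHz -> nanoseconds

for mts, cl in [(2666, 19), (3200, 14), (3466, 14)]:
    print(f"{mts}CL{cl}: {first_word_latency_ns(mts, cl):.2f} ns")
# 2666CL19: 14.25 ns, 3200CL14: 8.75 ns, 3466CL14: 8.08 ns at nominal clocks
```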
So the memory settings and timings can easily add 10ns or more to the score, especially with a single-channel, single-rank DIMM and no channel or rank interleaving, which alone carries the roughly 20ns penalty I showed with my own rig and the other rig above. Put them together and you land right in the territory of these new chips, don't you?
That is why reading too much into one leaked benchmark's latency will NOT give you a true picture of what the silicon is capable of. Engineering samples built and tested five months before a product launches will not show the final results you can expect; you don't know what is and isn't tuned yet, what is being tested, and so on. This is why, when those benches were making the rounds, I ignored everyone screaming their heads off about the latency. Most of it can be explained and shown to be similar to what current chips have, after making the adjustments above.
Here is the first leak, with a peak of 96.92ns (about 11ns slower than the 3200CL14 1950X example):
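To make that 11ns gap concrete, here is the back-of-envelope decomposition I'm working from (the split between causes is my rough attribution, not a measurement):

```python
leak_peak = 96.92    # ns, peak of the leaked Zen 2 run
reference = 85.80    # ns, the similar 1950X rig at 3200CL14 above

gap = leak_peak - reference     # total latency gap to explain, ~11.1 ns
cas_part = 14.25 - 8.82         # 2666CL19 vs 3200CL14 first-word latency, ~5.4 ns
residual = gap - cas_part       # ~5.7 ns left over: single-channel SR stick,
                                # no interleaving, secondary timings, ES tuning
print(f"gap {gap:.2f} ns = CAS {cas_part:.2f} ns + other {residual:.2f} ns")
```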
Here is the bugged run, where the cache latency rises too early and hits 100ns:
Also, knowing the cache is clocked separately from the memory speed, we have zero idea what the IF2 speed was tuned to. The engineering sample may have had a purposely lowered IF2 to test something else with the chips, for example to remove the errors that faster IF2 speeds can cause in the cache.
So don't put too much weight into writing off the chip over things that are easily explainable, especially if the rumor of official 3200MHz support is true and that speed is achievable at CL14. Between that and interleaving, getting memory latency down to the 60ns level seems to be in the cards without too much effort.