Overclock.net

X99 - 5960X@4.6 vs z170 - 6700k@4.8 - w/ 1080 Sli - 3440x1440

5K views · 37 replies · 20 participants · last post by duganator
#1 ·
I've been increasingly curious about how a decently overclocked X99 platform stacks up against the latest Z170 Skylake offerings, so I decided to run some benchmarks over the course of the last few days. There are very few benchmarks available at 3440x1440, and there are exactly zero benchmarks available for a Z170 platform running x16 SLI across both GPUs - until now, that is. x8/x8 benchmarks on a Z170 platform are next, and if the previews I'm seeing are any indication, it makes a difference.

Test System:
CPU - 6700K @ 4.8 / 5960X @ 4.6
Motherboard - Gigabyte z170 Gaming G1 / Gigabyte x99 UD3P
RAM - G.Skill Trident Z DDR4 3200MHz 4x8GB @ 14-14-14-34
Storage - Intel 750 1.2TB PCIe SSD
GPU - EVGA GTX 1080 Hybrid @ 2050 in SLI (both platforms at x16 across both GPUs thanks to the PLX chip on the Z170 board). Using the HB bridge.
Drivers used were 369.09, available as part of the Win 10 Anniversary update.

All tests were run at 3440x1440 with the same settings (max preset, including AA) across both platforms. Each test was run three times and the results averaged for consistency. Since I found Heaven/Valley and Ashes so difficult to parse, for those I used the last of the three results, to make sure the GPUs were warmed up and at their lowest daily clocks.
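For anyone curious, the two ways results were aggregated above can be sketched in a few lines. This is just an illustrative sketch (the function names are mine, not from any benchmarking tool): most titles report the mean of three runs, while Heaven/Valley/Ashes keep only the final warmed-up run.

```python
def average_runs(fps_runs):
    """Mean of several per-run FPS results (used for most titles)."""
    return sum(fps_runs) / len(fps_runs)

def last_run(fps_runs):
    """Final run only, so the GPUs are warmed up and at sustained
    clocks (used for Heaven, Valley, and Ashes)."""
    return fps_runs[-1]

# Example with made-up numbers:
print(average_runs([88.2, 87.9, 88.5]))
print(last_run([90.1, 88.4, 88.0]))
```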




Screen Size: 3440x1440
Screen Mode: Full Screen
DirectX Version: 11
Graphics Presets: Maximum
General
-Wet Surface Effects: Enabled
-Occlusion Culling: Disabled
-LOD on Distant Objects: Disabled
-Real-time Reflections: Highest Quality (DirectX 11 Only)
-Edge Smoothing (Anti-aliasing): FXAA
-Transparent Lighting Quality: High
-Grass Quality: High
-Background Tessellation: High Quality
-Water Tessellation: High Quality
Shadows
-Self: Display
-Other NPCs: Display
Shadow Quality
-LOD on Shadows: Disabled
-Shadow Resolution: High - 2048p
-Shadow Cascading: Best
-Shadow Softening: Strong
Texture Detail
-Texture Filtering: Anisotropic
-Anisotropic Filtering: x16
Movement Physics
-Self: Full
-Other NPCs: Full
Effects
-Limb Darkening: Enabled
-Radial Blur: Enabled
-Screen Space Ambient Occlusion: HBAO+: High Quality (DirectX 11 Only)
-Glare: Normal
Cinematic Cutscenes
-Depth of Field: Enabled





Options: Resolution: 3440 x 1440; Quality: Very High; SSAA: On; Texture Filtering: AF 16X; Motion Blur: Normal; Tessellation: Very High; VSync: Off; Advanced PhysX: On;






Same settings used for this test as the DX11 test.









I'm posting this fairly late my local time, so there's a good chance I've messed something up or forgotten something. Any constructive criticism is welcome, as I'm not a professional reviewer.
 
#3 ·
Quote:
Originally Posted by Kana Chan View Post

Is the Z170 able to go 16x/16x? The 5960X is 16x/16x.
No, Z170 does not support x16/x16. He's using a PLX chip, which grants an x16 slot (usually used for three-way SLI) the use of all 16 CPU lanes, and running SLI on the PLX chip's slots. It's got latency issues and shouldn't be as nice an experience as normal SLI (which won't show up in benchmark numbers unless frametimes are checked).
Quote:
Originally Posted by axiumone View Post

RAM - G.Skill Trident Z DDR4 3200mhz 4x8GB @ 14-14-14-34
Is this running quad channel on X99 and dual channel on Z170? If it's dual channel on the X99 you should get quad channel just to see if it helps. Also, see if you can test Witcher 3.
 
#6 ·
Interesting. Does it use quad channel memory with X99?
 
#8 ·
Awesome results. Definitely proves Intel was up to something good with Z170! My upcoming 6700K should keep me quite happy for a good number of years.

It'd be interesting to see power consumption numbers too... gaming load and Prime95 load...
 
#9 ·
Quote:
Originally Posted by Cakewalk_S View Post

Awesome results. Definitely proves Intel was up to something good with Z170! My upcoming 6700K should keep me quite happy for a good number of years.

It'd be interesting to see power consumption numbers too... gaming load and Prime95 load...
It shows that, as usual, games are coded to utilize one or two cores, where the 6700K's much faster single-threaded performance wins. If you look at the games that use multithreaded code - Shadow of Mordor, Metro, the Battlefield games, etc. - it's different. It's possible that with DX12/Vulkan your 6700K might date faster than you want it to. We'll see how they code things from here.
 
#10 ·
We need more people like you on OCN with real world tests and no incentives from the manufacturers!
 
#11 ·
Can you do a test with higher-clocked RAM / looser timings vs 3200 / CL14-14-14-34?

Say OC'd to 3600 / 15-15-15-35, or 3840 / 16-16-16-36, or even 4133 / 19-21-21-41 for the Z170?

Is the 6700K delidded or not delidded?

The 6700K seems to be around ~1.12x to 1.14x Haswell in some games when both are clocked at around 4.6GHz.

A lot of delidded 3xxx / 4xxx / 6xxx chips could hit 4.9/5.0 like the (soldered) 2xxx chips. It's about equal to imperfect solder.

It seems like if you extrapolate to 5.0GHz and 4133, it might be 1.10-1.14x vs the 5960X @ 4.6.
 
#12 ·
Thanks for the responses, guys! I'm really glad it could help.

I definitely derped on describing the settings that were used for each game last night. The settings used between platforms were exactly the same; I just worded it poorly. Luckily I've saved screenshots from each game's settings menu and I will edit them into the original post.
Quote:
Originally Posted by Kana Chan View Post

Is the Z170 able to go 16x/16x? The 5960X is 16x/16x.

You own both systems?

It seems to win in min fps compared to the X99, while the max fps might favor the higher core count CPU.
I own both systems. I'd think that a higher average minimum frame rate would be a bit more beneficial than a higher max average.

Quote:
Originally Posted by D2ultima View Post

No, Z170 does not support x16/x16. He's using a PLX chip, which grants an x16 slot (usually used for three-way SLI) the use of all 16 CPU lanes, and running SLI on the PLX chip's slots. It's got latency issues and shouldn't be as nice an experience as normal SLI (which won't show up in benchmark numbers unless frametimes are checked).
Is this running quad channel on X99 and dual channel on Z170? If it's dual channel on the X99 you should get quad channel just to see if it helps. Also, see if you can test Witcher 3.
The PLX bridge is a switch. It grants full x16 access from each GPU to the CPU at the expense of latency. Now, I've used PLX bridges before, and whatever latency is there isn't perceptible to the eye. Unfortunately I don't have the means to test actual frame time variance, but there was no visible microstutter going from the Z170 with PLX to the X99 running x16 natively. It's worth mentioning that a user on these boards worked for a game streaming company - think OnLive or something similar. He said his company did very thorough tests involving PLX bridges, and the results were that the latest-gen PLX bridges added an insignificant amount of latency.

RAM was in quad channel on the X99 and dual on the Z170.

Quote:
Originally Posted by Cakewalk_S View Post

Awesome results. Definitely proves Intel was up to something good with Z170! My upcoming 6700K should keep me quite happy for a good number of years.

It'd be interesting to see power consumption numbers too... gaming load and Prime95 load...
Dang it! That's a great idea. I have a power meter handy too and I didn't think about it. I'll see if I can put something together in the near future.

Quote:
Originally Posted by Kana Chan View Post

Can you do a test with higher-clocked RAM / looser timings vs 3200 / CL14-14-14-34?

Say OC'd to 3600 / 15-15-15-35, or 3840 / 16-16-16-36, or even 4133 / 19-21-21-41 for the Z170?

Is the 6700K delidded or not delidded?

The 6700K seems to be around ~1.12x to 1.14x Haswell in some games when both are clocked at around 4.6GHz.

A lot of delidded 3xxx / 4xxx / 6xxx chips could hit 4.9/5.0 like the (soldered) 2xxx chips. It's about equal to imperfect solder.
I'm not sure I'd be able to address RAM overclocking at this point, but I'll see what I can do.

The 6700K is delidded. This particular chip is not a very good sample. In order to get to 4.8 stable I need to pump in 1.452v. I don't think any higher core clock is doable.
 
#14 ·
Quote:
A lot of delidded 3xxx / 4xxx / 6xxx chips could hit 4.9/5.0 like the (soldered) 2xxx chips. It's about equal to imperfect solder.
It's absolutely not as simple as that; you're dealing with the silicon lottery as well as temperatures. My 6700K needs about 1.435v to be rock solid at 4.6GHz (and that isn't even Prime stable). Good luck getting that to 5.0.

The average 6700K will do a hair over 4.7GHz around that voltage; very, very few will do 5GHz even when they're at 50c under load. It was the same for Haswell.
Quote:
This particular chip is not a very good sample. In order to get to 4.8 stable I need to pump in 1.452v. I don't think any higher core clock is doable.
That's not really unusual; it's just not amazing. Certainly average-ish, maybe even above, depending on how you tested.
 
#15 ·
Quote:
Originally Posted by Kana Chan View Post

Could you test a single GPU by any chance? Maybe the 1080 clocked to 2100 if possible, too?
Possibly at some point in the future. My 1080s won't reach 2100, though. Also, I think there are already a few benches available from better reviewers than me at 3440 / 6700K / GTX 1080.

I may be able to do something quick on the 6700k system if there's something specific you'd like to see.
 
#16 ·
Quote:
Originally Posted by axiumone View Post

The PLX bridge is a switch. It grants full x16 access from each GPU to the CPU at the expense of latency. Now, I've used PLX bridges before, and whatever latency is there isn't perceptible to the eye. Unfortunately I don't have the means to test actual frame time variance, but there was no visible microstutter going from the Z170 with PLX to the X99 running x16 natively. It's worth mentioning that a user on these boards worked for a game streaming company - think OnLive or something similar. He said his company did very thorough tests involving PLX bridges, and the results were that the latest-gen PLX bridges added an insignificant amount of latency.

RAM was in quad channel on the X99 and dual on the Z170.
Dang it! That's a great idea. I have a power meter handy too and I didn't think about it. I'll see if I can put something together in the near future.
I'm not sure I'd be able to address RAM overclocking at this point, but I'll see what I can do.

The 6700K is delidded. This particular chip is not a very good sample. In order to get to 4.8 stable I need to pump in 1.452v. I don't think any higher core clock is doable.
Unfortunately this is something I'll have to adopt a "wait and see" policy about myself. Though it's possible much of the latency from before was purely because of three-way SLI rather than the PLX chip directly, so I don't know.

The RAM in quad channel on the X99 seems odd... those are REALLY good latencies for DDR4 at that speed. Usually lower latencies happen in dual channel configs. I'd just like to be sure it's in quad channel at that speed with those latencies, is all.
 
#17 ·
Quote:
Originally Posted by D2ultima View Post

Unfortunately this is something I'll have to adopt a "wait and see" policy about myself. Though it's possible much of the latency from before was purely because of three-way SLI rather than the PLX chip directly, so I don't know.

The RAM in quad channel on the X99 seems odd... those are REALLY good latencies for DDR4 at that speed. Usually lower latencies happen in dual channel configs. I'd just like to be sure it's in quad channel at that speed with those latencies, is all.
That's fine; skepticism is healthy. You may be waiting a very long time, though. PLX-enabled boards have been out for a very long time and I haven't seen a major publication test any of them in SLI, ever. I have used a multitude of them firsthand in three/four-way SLI and CrossFire configurations. Whatever issues were there were the fault of poorly implemented support for more than two cards.

As for the RAM, I don't have a CPU-Z shot of that board handy, but I found something else. It's a screenshot from one of my older rigs with the same CPU and RAM kit, just a different X99 board.
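For context on the dual vs quad channel question, the theoretical peak numbers are easy to sketch. This is just back-of-envelope math (the function name is mine), assuming the standard 64-bit (8-byte) bus per DDR4 channel; real-world throughput is always lower.

```python
def peak_bandwidth_gbs(transfer_rate_mts, channels, bytes_per_channel=8):
    """Theoretical peak memory bandwidth in GB/s:
    MT/s x 8 bytes per 64-bit channel x number of channels."""
    return transfer_rate_mts * bytes_per_channel * channels / 1000

dual = peak_bandwidth_gbs(3200, 2)   # Z170, dual channel  -> 51.2 GB/s
quad = peak_bandwidth_gbs(3200, 4)   # X99, quad channel   -> 102.4 GB/s
print(dual, quad)
```

So on paper the X99 rig has double the bandwidth at the same speed and timings, which is why it matters whether the kit was actually running in quad channel.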

 
#22 ·
Thanks again + rep

The 6700K is the king for games. I just switched from X99 to a 6700K again!

The slow boot times and memory problems on X99 are very bad... it's not worth your time.
 
#23 ·
Quote:
Originally Posted by Mr-Dark View Post

Thanks again + rep

The 6700K is the king for games. I just switched from X99 to a 6700K again!

The slow boot times and memory problems on X99 are very bad... it's not worth your time.
Glad I could help!


edit1 - Holy smokes, I just noticed I used the same slide for Tomb Raider DX11 and DX12. Will update as soon as I get home.

edit2 - RoTR benchmarks have the correct slides now.
 
#25 ·
Subbed.

Damn, that sucks. I thought the X99 was a heck of a lot better, esp. in games like GTA and BF4 (though BF4 wasn't tested here).

Might get a Kaby Lake CPU and mobo once it releases, delid, and run the WB direct to the IHS.
 
#26 ·
Quote:
Originally Posted by GreedyMuffin View Post

Subbed.

Damn, that sucks. I thought the X99 was a heck of a lot better, esp. in games like GTA and BF4 (though BF4 wasn't tested here).

Might get a Kaby Lake CPU and mobo once it releases, delid, and run the WB direct to the IHS.
People often misinterpret the 5960X. IMO it's kind of the jack-of-all-trades CPU.

It's absolutely great for gaming, just not the best of the best, and you have to take into account that it's not meant purely for that. Are you a gamer who also edits, encodes, renders, streams, etc.? Then this is for you. Are you purely a gamer who otherwise uses the computer for general things? Then the 6700K (or 6600K, IMO) is right for you.
 