Test System

[CPU] Intel Core i7 4790K
[GPU] Gigabyte GTX 970 mini-ITX
[MOB] Gigabyte Z97-D3H (F7 BIOS)
[RAM] G.Skill Trident X 4GBx2 DDR3 @ 2400 MHz 10-12-12-31 2T
[HSF] Noctua NH-U12S

Windows 7 64-bit with Service Pack 1
GeForce 355.82

CPUs Tested

The Core i5 and Core i3 configurations were simulated using the Core i7: Hyper Threading and the active core count were configured in the BIOS. Turbo was disabled for all CPU configurations. (A quick way to verify the resulting core/thread count is sketched after the list of tested CPUs below.)

Core i5 - Core i7 with Hyper Threading disabled
Core i3 - Core i7 with Hyper Threading enabled but 2 cores disabled

Take note that the Core i3 4160K is just a theoretical unlocked Core i3. I included it because of the news that there will be a Core i3 6320 at 3.9 GHz. Since that is a Skylake CPU, I bumped the Core i3 4160K to 4.2 GHz to account for Skylake's clock-for-clock advantage over Haswell (assuming a roughly 5-10% advantage, 3.9 GHz on Skylake corresponds to about 4.1-4.3 GHz on Haswell). The Core i5 4590 has a base clock of 3.3 GHz, but it boosts to 3.5 GHz when all 4 cores are loaded, which is why it was tested at 3.5 GHz.

Core i7 4790K @ 4.4 GHz
4 cores / 8 threads

Core i5 4690K @ 4.4 GHz
4 cores / 4 threads

Core i5 4590 @ 3.5 GHz
4 cores / 4 threads

Core i3 4160K @ 4.2 GHz
2 cores / 4 threads
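
For anyone replicating the simulated configurations, a quick way to verify that the OS actually sees the intended core and thread count is a short script like the sketch below. It assumes the third-party psutil package is installed; any similar tool such as Task Manager or CPU-Z works just as well.

# Sketch: confirm the active core / thread count after changing the BIOS settings,
# so the simulated CPU matches the intended configuration.
# Assumes the third-party psutil package is installed (pip install psutil).
import psutil

cores = psutil.cpu_count(logical=False)    # active physical cores
threads = psutil.cpu_count(logical=True)   # hardware threads (with Hyper Threading)

print("%d cores / %d threads" % (cores, threads))
# Expected output for the simulated Core i3: "2 cores / 4 threads"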

Test Methodology

Gaming performance was measured by using FRAPS to record the frame rate and frame times. For those who are new to frame times: frame time is the time taken to render a single frame, measured in milliseconds (ms). The lower the frame time, the better. The 99th percentile frame time is a measure of the overall "smoothness" of gameplay. For example, a 99th percentile frame time of 20 ms means that 99% of the frames were rendered within 20 ms. A more detailed explanation of frame time benchmarking can be found here: http://techreport.com/review/21516/inside-the-second-a-new-look-at-game-benchmarking
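
To make the math concrete, here is a rough sketch (not the exact tooling used for the results below) of how the average FPS and the 99th percentile frame time can be computed from a FRAPS frametimes CSV, assuming the usual format of a header row followed by "frame number, cumulative time in ms" rows; the file name is just a placeholder.

# Sketch: compute average FPS and the 99th percentile frame time from a
# FRAPS "frametimes" CSV (header row, then "frame, cumulative time in ms" rows).
import csv

def analyze(path):
    with open(path) as f:
        reader = csv.reader(f)
        next(reader)                                   # skip the header row
        stamps = [float(row[1]) for row in reader]     # cumulative time in ms
    # per-frame render times are the differences between consecutive timestamps
    frame_times = [b - a for a, b in zip(stamps, stamps[1:])]
    frame_times.sort()
    p99 = frame_times[int(0.99 * len(frame_times)) - 1]            # ms
    avg_fps = 1000.0 * len(frame_times) / (stamps[-1] - stamps[0])
    return avg_fps, p99

avg_fps, p99 = analyze("crysis3_frametimes.csv")  # placeholder file name
print("Average FPS: %.1f, 99th percentile frame time: %.1f ms" % (avg_fps, p99))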

For those who want to replicate my gaming benchmarks, the YouTube links below show which part of each game was benchmarked, and the settings used for each game are listed below each link. All games were updated to the latest version as of September 9, 2015.

The Witcher 3 https://www.youtube.com/watch?v=w_a-r8HMeqU
Resolution: 1920 x 1080
HairWorks: Off
Ambient Occlusion: SSAO
Every graphics option was set to the highest available setting except for Shadow Quality and Foliage Visibility Range, which were both set to High
The following graphics options were disabled: Blur, Motion Blur, and Chromatic Aberration

Grand Theft Auto V https://www.youtube.com/watch?v=7WTKhecDesA
Resolution: 1920 x 1080
FXAA: On
MSAA: 2x
Every graphics option was set to the highest available setting except for Grass Quality, which was set to High

Crysis 3 https://www.youtube.com/watch?v=aX1S4aSJ3lU
Resolution: 1920 x 1080
Texture Resolution: Very High
Anti-aliasing: SMAA T2X
System Spec: Very High

Performance Results

The results below are the average of 3 runs.

[Image: benchmark results (average FPS, minimum FPS, 99th percentile frame times)]

Looking only at the average and minimum frame rates, it seems that the Core i5 and Core i7 do not offer much improvement over the Core i3. But as the 99th percentile frame time results show, there is a clear and substantial performance gap between the Core i3 and the Core i5. Even with a 700 MHz clock speed advantage, the Core i3 delivers a worse gaming experience than the Core i5 4590. So it is not true that a highly clocked Core i3 will be a better gaming CPU than any Core i5; a quad-core CPU will always be a better gaming option than a dual-core CPU with Hyper Threading.
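
To see why the average frame rate can hide this gap, consider a purely illustrative example with made-up numbers (not data from the runs above): two traces with the exact same average FPS, one steady and one that stutters every 20th frame, end up with very different 99th percentile frame times.

# Purely illustrative, made-up frame times in ms; not data from the benchmark above.
smooth   = [16.0] * 100                                     # steady ~16 ms per frame
stuttery = [14.0 if i % 20 else 54.0 for i in range(100)]   # same average pace, spike every 20th frame

def avg_fps(times):
    return 1000.0 * len(times) / sum(times)

def p99(times):
    ordered = sorted(times)
    return ordered[int(0.99 * len(ordered)) - 1]

for name, trace in (("smooth", smooth), ("stuttery", stuttery)):
    print("%s: %.1f FPS average, %.0f ms 99th percentile" % (name, avg_fps(trace), p99(trace)))
# Both traces average 62.5 FPS, but the stuttery one has a 54 ms 99th percentile frame time.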

Overclocking also helps reduce the CPU bottleneck, which is very evident in Crysis 3. Though not obvious if you just look at the frame rate, Hyper Threading gives the Core i7 a big advantage over the Core i5 when it comes to gaming experience. Gameplay is much "smoother" with the Core i7, as shown by the 99th percentile frame times in Crysis 3 and The Witcher 3. If you are building a gaming PC with a high-end GPU, don't hesitate to get a Core i7.

Some may criticize my selection of games, especially Crysis 3. Not all games have a CPU-intensive part. But if you are building a high-end gaming PC, it's better to have a CPU that is prepared for the worst-case scenario than to skimp on the CPU now and realize later that you need an upgrade. I believe upcoming games will be more demanding. Even if DirectX 12 delivers on its promise of performance improvements in CPU-bottlenecked situations, a Core i5 or a Core i7 will still offer more performance than a Core i3 if game developers really take advantage of the CPU's potential.

== == == ==
== == == ==

Is CPU simulation a valid approach?

Some may question the validity of simulating CPUs. Here is a video https://www.youtube.com/watch?v=0PhlwNfIUUE by Linus Tech Tips comparing the performance of a real CPU against a simulated one. A Core i7 5960X was used to simulate a Core i5 4670K and a Pentium G3258 by configuring core count, clock speed, and Hyper Threading in the BIOS. The simulated CPUs were then compared to a real Core i5 4670K and a real Pentium G3258.

The difference between a simulated CPU and a real CPU is the L3 cache size, which does not change even when CPU cores are disabled:

Simulated Core i5 4670K - 20 MB L3 cache
Real Core i5 4670K - 6 MB L3 cache
Simulated Pentium G3258 - 20 MB L3 cache
Real Pentium G3258 - 3 MB L3 cache

Based on Linus Tech Tips' results, the simulated Core i5 4670K and the real Core i5 4670K performed almost identically. However, the simulated Pentium G3258 was about 10% faster than the real one, which is probably caused by the huge difference in L3 cache size (20 MB vs 3 MB).

For my tests, here are the L3 cache comparisons between the simulated CPUs and the real CPUs:

Simulated Core i5 4690K - 8 MB L3 cache
Real Core i5 4690K - 6 MB L3 cache
Simulated Core i3 4160 - 8 MB L3 cache
Real Core i3 4160 - 3 MB L3 cache

While I don't have a real Core i3 4160, I expect the simulated one to be about 5 to 10% faster than the real one. This doesn't invalidate my conclusion about the Core i3 4160 or any Skylake Core i3. Why? Because if the simulated Core i3, which is faster than a real one, is a bottleneck to a GTX 970, then it follows that a real Core i3 would also be a bottleneck to a GTX 970.

By the way, here is an L3 cache comparison test done by DG Lee: http://www.iyd.kr/695