Overclock.net › Forums › Industry News › Hardware News › [PCGH] CPUs Games Benchmark while Applications Running in the Background

[PCGH] CPUs Games Benchmark while Applications Running in the Background - Page 10

post #91 of 97
Quote:
Originally Posted by Fortunex View Post

Is it not possible that you're the one who has misinterpreted the information?

I would not think that to be the case, seeing how I have had discussions about the subject with developers in the past. If I had misinterpreted something, I would like to think they would have pointed it out and corrected me during those discussions. I give up on masta squid or whatever because he takes on an arrogant tone as if he knows everything. So if anyone with experience in developing 3D games who knows what draw calls are would like to point out a possible misinterpretation on my part, feel free.
Edited by computerparts - 7/14/13 at 6:13pm
post #92 of 97
Quote:
Originally Posted by computerparts View Post

I would not think that to be the case, seeing how I have had discussions about the subject with developers in the past. If I had misinterpreted something, I would like to think they would have pointed it out and corrected me during those discussions. I give up on masta squid or whatever because he takes an arrogant tone and thinks he knows everything. So if anyone with experience in developing 3D games who knows what draw calls are would like to point out a possible misinterpretation on my part, feel free.
I don't see you backing anything up with anything other than "you are wrong".

For the millionth time. There is a reason this is SOP for this sort of benchmark.
post #93 of 97
Quote:
Originally Posted by Masta Squidge View Post

Don't see you backing anything up with anything other than "you are wrong".

For the millionth time. There is a reason this is SOP for this sort of benchmark.
I've found that the best way to deal with someone who is adamant about something is to just tell them they're correct and move on. Ultimately their opinion isn't going to change anything about how these benchmarks are done, and that's all that matters.
Edited by Tippy - 7/14/13 at 6:48pm
post #94 of 97
Quote:
Originally Posted by computerparts View Post

I would not think that to be the case, seeing how I have had discussions about the subject with developers in the past. If I had misinterpreted something, I would like to think they would have pointed it out and corrected me during those discussions. I give up on masta squid or whatever because he takes on an arrogant tone as if he knows everything. So if anyone with experience in developing 3D games who knows what draw calls are would like to point out a possible misinterpretation on my part, feel free.

When you run a game, the CPU is the primary chip responsible for reading information from the RAM and hard drives, and for running the software; any software, for that matter. Hopefully we can agree on this. The primary responsibility of the GPU is to render images based on the information it receives from the CPU. If the CPU says "Display a perfect square", the GPU goes about calculating how best to display a perfect square, then sends the completed image to the appropriate display, showing a perfect square on your monitor. (All hypothetical, obviously; computers don't speak English.)

Now, when you shoot a gun (virtually), the CPU understands that clicking the mouse means "weapon fire" should take place. The CPU deals with core game information like damage, position and aim. However, the CPU also needs to send information to other devices to handle the rest, primarily audio and video. If there is no dedicated audio processor, the CPU renders all of the audio itself. If there is a dedicated audio processor, the CPU feeds it digital information about which sound files need to be played; the audio processor then decodes that information and sends it out the proper line. In this case the CPU isn't directly rendering the sound, but it is directly telling the audio processor what sound to render.

Video processing works much like audio processing. The CPU sends digital information to the appropriate graphics processor, which could be an integrated graphics chip on the CPU, a chipset-based graphics processor, or a dedicated card. The CPU sends information such as which assets need to be displayed, where they are, how many there are, what post-processing effects need to be applied to the image, and so on; you get the idea. The GPU then uses this information to build a frame: during this stage it builds exactly what the CPU just told it to. The CPU is saved tons of work by only knowing what is displayed, not how to render it, which is obviously the strength of the GPU with its hundreds of parallel processing cores.
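A toy sketch of that division of labor (every name here — DrawCommand, cpu_build_frame, gpu_render — is invented for illustration; real graphics APIs like Direct3D or OpenGL are far more involved):

```python
from dataclasses import dataclass, field

@dataclass
class DrawCommand:
    mesh: str        # which asset to draw
    position: tuple  # where it sits in the world
    effects: list = field(default_factory=list)  # shader/post-processing hints

def cpu_build_frame(scene):
    """CPU side: decide WHAT is visible this frame and package it up."""
    return [DrawCommand(obj["mesh"], obj["pos"], obj.get("fx", []))
            for obj in scene if obj["visible"]]

def gpu_render(commands):
    """GPU stand-in: work out HOW to draw each command it was sent."""
    return ["rendered {} at {}".format(c.mesh, c.position) for c in commands]

scene = [
    {"mesh": "box",  "pos": (1, 2, 0), "visible": True},
    {"mesh": "tree", "pos": (5, 0, 0), "visible": False},  # culled by the CPU
]
frame = gpu_render(cpu_build_frame(scene))  # only the box reaches the GPU
```

The CPU never touches pixels; it only assembles and submits the list of commands, and the GPU does all of the actual rendering work.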

You and Masta got into too much of an argument and I can see you guys weren't on the same page. The CPU doesn't literally render the image, compress it, and then just have the GPU extract and display it. The CPU just tells the GPU "There is a box over there" and the GPU then responds by rendering a box in that position with all of the appropriate textures, shaders and other effects.

Back to the point: by reducing the resolution a lot you are, for all intents and purposes, making the GPU appear infinitely powerful to the CPU; the GPU can render everything faster than the CPU can send it (keeping in mind the CPU still needs to deal with core game information, as well as communicate with audio processors, RAM, drives and input devices). This is when you reach a CPU bottleneck. Conversely, by increasing the resolution, the CPU can presumably send information faster than the GPU can render it, causing a GPU bottleneck.

By reducing the resolution, you can determine the fastest rate at which the CPU can send information to the GPU. This maximum rate will be directly proportional to the maximum computational speed of the CPU (theoretically, assuming a flawlessly multi-threaded application). This is why powerful processors are important in SLI/CFX configurations. By adding a second GPU, you not only increase the total amount of GPU processing power, but also the number of GPUs the CPU must communicate with (this is also why SLI/CFX bridges are required: they let the GPUs communicate with each other directly instead of going through the CPU).
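To put rough numbers on that, here is a toy model (the per-frame costs are invented; the only point is that frame rate is set by the slower of the two stages, so dropping the resolution exposes the CPU):

```python
# Toy model of the CPU/GPU bottleneck described above. All numbers are
# made up for illustration; real frame costs depend on the game and hardware.

def fps(cpu_ms_per_frame, gpu_ms_per_frame):
    # CPU and GPU work on different frames in parallel (pipelined),
    # so throughput is limited by whichever stage is slower.
    return 1000.0 / max(cpu_ms_per_frame, gpu_ms_per_frame)

CPU_MS = 8.0                       # fixed: game logic + draw submission

def gpu_ms(resolution_pixels):     # assume GPU cost scales with pixel count
    return resolution_pixels / 200_000.0

# Low resolution: GPU finishes early, the CPU limits FPS -> a CPU benchmark.
low  = fps(CPU_MS, gpu_ms(640 * 480))     # GPU ~1.5 ms, CPU-bound at 125 fps
# High resolution: GPU limits FPS, so CPU differences are hidden.
high = fps(CPU_MS, gpu_ms(2560 * 1440))   # GPU ~18.4 ms, GPU-bound at ~54 fps
```

At the low resolution, a faster CPU would raise the FPS directly; at the high resolution, swapping CPUs would barely move the number, which is exactly why CPU benchmarks are run at low settings.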


That being said I know absolutely nothing about software or programming, and could be totally off my rocker here.
Edited by MightEMatt - 7/19/13 at 12:56pm
post #95 of 97
Quote:
Originally Posted by MightEMatt View Post

When you run a game, the CPU is the primary chip responsible for reading information from the RAM and hard drives, and running the software; hopefully we can agree on this. *snip*

That being said I know absolutely nothing about software or programming, and could be totally off my rocker here.

That was exactly the point I was trying to get across to masta. The CPU doesn't build and send frames; it sends information, and the GPU renders that information. Thanks for the detailed explanation on the GPU bottleneck. It does make sense when you think of it like that.
post #96 of 97
Seems like the 4770K does really well.
post #97 of 97
Quote:
Originally Posted by computerparts View Post

That was exactly the point I was trying to get across to masta. The CPU doesn't build and send frames; it sends information, and the GPU renders that information. Thanks for the detailed explanation on the GPU bottleneck. It does make sense when you think of it like that.
I think your problem is that you are taking the word "build" too literally.

You keep associating the word "build" with the word "render".

The CPU is literally assembling the data that is being sent to the GPU for every frame. Last I checked, build and assemble are synonyms. He hasn't explained anything that I haven't. And it certainly isn't the point you were trying to get across, because he just agreed with me. You were sitting here saying the CPU doesn't have anything to do with the GPU nor how fast it renders frames.

Except that isn't the case. The CPU has a huge impact on framerates, and that is what this article is testing and why it is being done with low settings. If you turn the settings up too high, you are potentially limiting the ability of the GPU to RECEIVE frames, and therefore the ability of the CPU to send them as well.

What is being tested is how fast the CPU can "build" or "assemble" or otherwise package the data the GPU needs. I don't know why you can't grasp that simple concept.