Originally Posted by WannaBeOCer
AMD's 15% IPC increase over Zen+ and Intel's 18% IPC increase over Skylake were mostly due to the node shrink. If they weren't, they'd both be pulling an Nvidia and using the cheaper node to produce their chips.
An increase in IPC doesn't mean more heat.
6700K @ 4.2 GHz running OCCT uses 110 W
7700K @ 4.5 GHz running OCCT uses 90 W
9900K @ 4.5 GHz w/ 1.025 V running Blender uses 113 W
Here's an example of that trick I told Hwgeek about: I ran my living room ITX at 4.2 GHz with the XTU internal benchmark, then the CPU-Z benchmark, but those weren't long enough, so I ran the stress test for a while to get stable values. I then reduced the core clock to 3.3 GHz, adjusted the volts to match, and ran the XTU bench and CPU-Z stress test again. The XTU bench is all jumpy, so I'll use the CPU-Z stress test numbers as the example.
Core power during the 4.2 GHz stress was 85 W; at 3.3 GHz it was 68 W. If power scaled perfectly linearly with clock, 85 W would buy (85/68) × 3.3 = 4.125 GHz, but it actually bought 4.2 GHz, so apparently the chip is a bit more efficient at 4.2 GHz at the same volts, though not by much. With the same CPU and software, power scaling with instructions per second is near linear.
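If anyone wants to play with the numbers, here's a quick Python sketch of the check I'm doing above (the function name is just mine, and it assumes both runs used the same voltage and workload):

```python
# Back-of-the-envelope check of power-vs-clock scaling from my CPU-Z runs.
# Assumes same voltage and same workload for both data points.

def linear_clock_estimate(p_low, f_low, p_high):
    """If power scaled perfectly linearly with clock, how many GHz
    should p_high watts buy, given the low-clock baseline?"""
    return (p_high / p_low) * f_low

# Measured: 68 W at 3.3 GHz, 85 W at 4.2 GHz.
predicted_ghz = linear_clock_estimate(p_low=68, f_low=3.3, p_high=85)
print(f"Linear scaling predicts {predicted_ghz:.3f} GHz for 85 W")  # 4.125
# Actual clock was 4.2 GHz, so the chip did slightly better than linear.
```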
How does higher IPC, all else equal, resulting in higher IPS consume power any differently than higher clocks, all else equal, resulting in higher IPS? I've seen with my various Atom mini PCs (Z8300, Z8500, N4100) that their performance scales close to linearly with power, and this 5775C test seems to agree. Maybe I'm missing something, but it's easy to replicate with all sorts of software and hardware. It makes the problem of making CPUs go faster a little tougher if every instruction carries heat with it.
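To make that question concrete, here's a toy model using the standard CMOS dynamic power relation, P ≈ a·C·V²·f (activity factor, switched capacitance, voltage, clock). All the constants below are made up purely for illustration; only the scaling matters. At fixed voltage, more IPS from clocks (higher f) and more IPS from IPC (more switching per cycle, i.e. higher a·C) cost dynamic power the same way:

```python
# Toy CMOS dynamic power model: P ≈ a * C * V^2 * f.
# Constants are invented for illustration; only the relative scaling matters.

def dynamic_power(activity, cap_nf, volts, freq_ghz):
    """Dynamic power in watts for the illustrative constants."""
    return activity * cap_nf * 1e-9 * volts**2 * freq_ghz * 1e9

base       = dynamic_power(activity=0.20, cap_nf=30, volts=1.0, freq_ghz=3.3)
more_clock = dynamic_power(activity=0.20, cap_nf=30, volts=1.0, freq_ghz=4.2)
more_ipc   = dynamic_power(activity=0.20 * (4.2 / 3.3), cap_nf=30, volts=1.0,
                           freq_ghz=3.3)

print(f"baseline:       {base:.1f} W")        # 19.8 W
print(f"+27% clocks:    {more_clock:.1f} W")  # 25.2 W, same IPS gain...
print(f"+27% IPC (a*C): {more_ipc:.1f} W")    # 25.2 W, ...same power cost
```

Under this model the two routes to more IPS are identical in dynamic power, which fits what I'm measuring; real chips differ mainly in how the voltage/frequency curve and leakage come into play.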
Intel has been getting more efficient over the years, and AMD really has with Zen. Once they get the bugs ironed out, 10nm should be efficient enough to absorb the extra heat from the extra IPS.