Originally Posted by sumitlian
I am still confused about the FX's shared-FPU module design. Suppose in module 1 there are core #0 and core #1, and core #0 is used by a single-threaded application that is dispatching almost exclusively floating-point work.
In that situation, is core #1 really free to be used by another application for an integer workload?
If core #1 is loaded with some integer workload, will it share clock cycles with core #0, or will it run on its own cycles (leaving core #0's performance at its maximum)?
Yes, it's my understanding that even if one core is fully saturating the FPU, the other's integer resources are untouched.
I'm not 100% certain about the per-core frequencies, but I would assume the two cores in a module have to run at the same clock, since they share an L2 cache. I'd have to read up more on AMD's Turbo Core to be sure.
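If anyone wants to test the shared-FPU question themselves, here's a rough way to do it on Linux: pin an FP-heavy thread to core 0 and an integer thread to core 1 (assuming those are module siblings on your chip; check lstopo or /proc/cpuinfo first), then time the integer thread alone versus alongside the FP thread. This is just a hypothetical sketch, not anything official; compile with gcc -O2 -pthread.

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>

/* Pin the calling thread to one CPU. */
static void pin_to_cpu(int cpu) {
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

/* Hammer the (shared) FPU on core 0. */
static void *fp_worker(void *arg) {
    (void)arg;
    pin_to_cpu(0);
    volatile double x = 1.0;
    for (long i = 0; i < 1000000000L; i++)
        x = x * 1.0000001 + 0.5;
    return NULL;
}

/* Pure integer work on core 1, the assumed module sibling. */
static void *int_worker(void *arg) {
    (void)arg;
    pin_to_cpu(1);
    volatile long n = 1;
    for (long i = 0; i < 1000000000L; i++)
        n = n * 3 + 1;
    return NULL;
}

int main(void) {
    pthread_t fp, in;
    pthread_create(&fp, NULL, fp_worker, NULL);
    pthread_create(&in, NULL, int_worker, NULL);
    pthread_join(fp, NULL);
    pthread_join(in, NULL);
    return 0;
}

If the integer thread takes about as long either way, the shared FPU isn't costing core #1 anything; comment out fp_worker to get the baseline.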
Originally Posted by Seronx
Bulldozer and Piledriver both have the same number of pipeline stages: fifteen.
A 20-cycle misprediction penalty (the only firm figure I can find) implies something longer than 15 stages.
Core 2, Nehalem, and Bobcat/Jaguar all seem to have misprediction penalties one cycle longer than their number of pipeline stages, while Sandy, Ivy, and Haswell sometimes come in under that. By that pattern, a 20-cycle penalty would point to roughly 19 stages, not 15.
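For what it's worth, the penalty itself is measurable: branch on random data (roughly half the branches mispredict) versus predictable data and compare the cycle counts. A hypothetical sketch, assuming x86 and GCC/Clang's __rdtsc; check the generated asm to make sure the compiler hasn't turned the branch into a cmov:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <x86intrin.h>   /* __rdtsc() */

#define N 10000000L

static uint8_t predictable[N], random_data[N];

/* Time N data-dependent branches; volatile sum keeps the branch alive. */
static uint64_t run(const uint8_t *data) {
    volatile long sum = 0;
    uint64_t start = __rdtsc();
    for (long i = 0; i < N; i++)
        if (data[i])      /* always taken on predictable data,          */
            sum += i;     /* ~50% mispredicted on random data           */
    return __rdtsc() - start;
}

int main(void) {
    for (long i = 0; i < N; i++) {
        predictable[i] = 1;
        random_data[i] = rand() & 1;
    }
    uint64_t t_pred = run(predictable);
    uint64_t t_rand = run(random_data);
    /* Only ~half the random-case branches mispredict, hence the 2x. */
    printf("approx. misprediction penalty: %.1f cycles\n",
           2.0 * (double)(t_rand - t_pred) / (double)N);
    return 0;
}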
Originally Posted by Brutuz
Nevertheless, you can see the difference by comparing the chips on Linux... AMD gets a much bigger boost than Intel; benchmarks show the FX being much more competitive with the i7 on Linux.
Most of the benchmarks I've seen, outside of a few isolated examples, don't look all that different in relative performance.
Can you give some specific examples? Preferably ones where the Windows equivalents are not faster than the Linux programs on the AMD part. Bias means little if the biased program is still faster on the parts it's biased against.
Originally Posted by mushroomboy
The likelihood of it using ICC is going to be very slim. MS does their own compiler for the Xbox, I'm sure of that. I'm also guessing the PS4 is using a tweaked version of that, given that the dev kits can take DX11 code. It makes more sense for Microsoft to internalize that and hack up a compiler that takes DX11-compliant code and builds it to OGL, instead of letting the internals of their baby DX go to outside eyes.
So really, this ICC stuff is a bunch of bull when it comes to the console itself. Now the question is, would they still use it on the PC? Some people might program for OGL on the PS4 and, in turn, not use any MS-suited software. That's doubtful, though; the more likely scenario is that PS4 "ports" to the PC get shortchanged.
I would expect non-half-assed ports to be recompiled for PC architectures, possibly with whichever compilers were fastest for the target systems.
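On the ICC point upthread: the contentious part is dispatching on the CPUID vendor string rather than on actual feature bits. Reading that string is trivial; here's a hypothetical sketch, assuming x86 and GCC/Clang's <cpuid.h> (this is obviously not ICC's actual code):

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned int eax, ebx, ecx, edx;
    char vendor[13] = {0};

    /* Leaf 0 returns the vendor string across EBX, EDX, ECX in that order. */
    if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx))
        return 1;
    memcpy(vendor + 0, &ebx, 4);
    memcpy(vendor + 4, &edx, 4);
    memcpy(vendor + 8, &ecx, 4);

    /* A biased dispatcher could branch on this instead of on feature bits. */
    printf("Vendor: %s (%s)\n", vendor,
           strcmp(vendor, "GenuineIntel") == 0 ? "Intel path"
                                               : "generic path");
    return 0;
}

An FX chip reports "AuthenticAMD" here even when it supports the same SSE levels as its Intel contemporaries, which is exactly why vendor-keyed dispatch gets called biased.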