Originally Posted by ComputerRestore
From what I've seen of current games that use more than 4 cores, it still won't make a huge difference. AMD will still need to improve their single-core performance to be competitive apples to apples.
e.g. 25% CPU usage spread over 8 cores vs. 50% over 4 cores: Intel still has stronger per-core performance, so as long as it has CPU headroom it will come out ahead.
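To put a rough illustration on that utilization point (my own sketch, not from any review; it needs the third-party psutil package): aggregate CPU% can hide one maxed-out core, which is exactly the case where per-core speed, not core count, is the limit.
[code]
# Hypothetical sketch: compare aggregate CPU load to the busiest single core.
import psutil

per_core = psutil.cpu_percent(interval=1.0, percpu=True)  # % load per logical core
aggregate = sum(per_core) / len(per_core)

print(f"aggregate load: {aggregate:.0f}%")
print(f"busiest core:   {max(per_core):.0f}%")

# A game thread pinned near 100% on one core is the bottleneck even though
# the aggregate reads ~25% on an 8-core or ~50% on a 4-core chip.
if max(per_core) > 90 and aggregate < 60:
    print("Likely limited by per-core speed, not core count.")
[/code]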
CPU performance doesn't make much of a difference in most games anyway; I went from an FX-4170 to an i5 3570K and most games still play equally well.
I have zero idea why they're getting such a big difference... I didn't see that on my FX vs. my i5 at equal clocks.
There was a difference of maybe 5 fps, but it was just as playable and fluid on both platforms.
10-13 of those benchmarks are completely useless in the real world unless you actually game at 1024x768... and those are mostly won by Intel.
That is, unless the real world consists of running tools with perfectly working multi-threaded code in a single thread, plus SYSmark. Take that into account and the FX-8350 didn't do too badly for a CPU that's usually $75 or so cheaper. Hell, if you're running Linux stuff the i7 3770K and FX-8350 are even closer, since ICC isn't the main compiler on Linux.
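Quick sketch of what I mean by running well-threaded tools single-threaded (mine, with made-up workload sizes): time the same CPU-bound jobs with one worker and then with one per core, and the 8-core's advantage only shows up in the second run.
[code]
# Hypothetical illustration: single worker vs. one worker per core.
import time
from multiprocessing import Pool, cpu_count

def burn(n):
    # CPU-bound busywork standing in for an encode/compress job.
    total = 0
    for i in range(n):
        total += i * i
    return total

def run(workers, jobs=8, size=2_000_000):
    start = time.perf_counter()
    with Pool(processes=workers) as pool:
        pool.map(burn, [size] * jobs)
    return time.perf_counter() - start

if __name__ == "__main__":
    single = run(workers=1)
    multi = run(workers=cpu_count())
    print(f"1 worker: {single:.2f}s, {cpu_count()} workers: {multi:.2f}s "
          f"({single / multi:.1f}x speedup)")
[/code]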
Originally Posted by kingduqc
Intel does just fine in any type of game; AMD only in multi-threaded games.
AMD does just as well as Intel in most games; short of SC2 you likely won't notice a difference.
And it's funny: a 4.5 GHz i5 3570K cannot max out Sins of a Solar Empire... from 2008. It literally still lags when you've got big battles going.
Originally Posted by vampirr
I thought Crysis 3 could use more than 4 cores ;(
It can; the 1.3 patch fixed HT, so now the i7 3770K can use its extra threads... Of course it's ahead of the FX there; the FX competes with the i5 in price and performance while coming close to the i7 sometimes.
Originally Posted by Kane2207
Steamroller beating Haswell is a very bold claim indeed
It all depends on clocks. If Haswell doesn't clock much better than Ivy (doubtful from what we've heard in some reviews, likely according to others) and Steamroller overclocks as well as or better than Piledriver, it might come close to the Haswell i5 in some situations (e.g. 7-Zip, x264 encoding), especially since losing the shared decoder should be a decent performance increase. (Running one core per module on an FX-8150 bumped performance 9%-40% depending on the benchmark used, but that includes the cache effect too.)
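Back-of-envelope on that claim (my numbers are invented; nothing here is a real Steamroller or Haswell figure): projected throughput scales roughly with clock ratio times per-clock gain, so similar overclocks plus a genuine IPC bump is what would close the gap.
[code]
# Rough scaling model with made-up numbers, just to show the arithmetic.
def projected_score(baseline, clock_ratio, ipc_gain):
    # baseline:    measured score on the old chip
    # clock_ratio: new clock / old clock
    # ipc_gain:    fractional per-clock improvement, e.g. 0.15 for +15%
    return baseline * clock_ratio * (1.0 + ipc_gain)

# e.g. a Piledriver chip scoring 100, same 4.5 GHz overclock, +15% per clock:
print(projected_score(100.0, clock_ratio=4.5 / 4.5, ipc_gain=0.15))  # 115.0
[/code]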
Originally Posted by KloudZero
In these same games I often see threads on their respective forums about reduced performance or outright incompatibilities with both AMD's processors and GPUs.
That is completely and utterly false; I've never seen something (recently, anyway) that had a bug or problem on AMD but not Intel... and as for GPUs, nVidia's drivers really aren't any better outside of SLI latency.
Originally Posted by Maiky
Bah, countering with a game where the AMD 8-core is clocked at 4 GHz while the Intels are at 3.4 and 3.5 GHz at most is just nonsense. Put all CPUs at 4.5 GHz and we will see who comes out ahead and by what margin.
The 2500K alone, which is a 2.5-year-old i5, will destroy an 8350 in any game with both CPUs at the exact same clock. Underclock that 8350 to 3.3 GHz and watch it drop from 3rd to 10th place on that list. Let's compare apples to apples.
AMD needs a rabbit.
Did you completely ignore that FX-6300 in there? 3.5 GHz and just under the 3.4 GHz i7 3770K.
Originally Posted by A Bad Day
Why tinker with clock rates? Shouldn't it be performance per watt/price?
AMD said that they implemented Clock Gate Mesh technology, which allows them to increase the clock rate without increasing power consumption.
IPC and clock rates don't matter on their own. Time in seconds to finish the task is what we should be using instead of IPC.
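Minimal sketch of that "seconds, not IPC" point (numbers invented): wall-clock time folds IPC and clock together, which is why neither one matters in isolation.
[code]
# time = instructions / (instructions_per_cycle * cycles_per_second)
def runtime_seconds(instructions, ipc, clock_hz):
    return instructions / (ipc * clock_hz)

work = 1e11  # instruction count of some hypothetical task
print(runtime_seconds(work, ipc=1.2, clock_hz=4.5e9))  # ~18.5 s: lower IPC, higher clock
print(runtime_seconds(work, ipc=1.8, clock_hz=3.5e9))  # ~15.9 s: higher IPC, lower clock
[/code]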