
AMD A10-5700 Reviewed: Framerates, Frametimes, and Playability. Uploading 13.2b4 stock!

15K views 84 replies 19 participants last post by  Heavy MG  
#1 ·
Hello!

I stumbled across Benchmark3D's post about frametime measuring and if framerate really reflects playability, and decided to give that testing method a go, and use it as an excuse to review my A10-5700's gaming performance.

So, let me elaborate:

The purpose of this thread is to analyze if AMD's new generation of Trinity APUs are really usable to play games. And I don't mean if they are capable of producing playable framerates, but if they can kick playable frametimes.

Because it's absolutely worthless to get 60 FPS if one frame in every 10 takes over 500ms, as you will get unplayable stuttering.


Why do I do this? Well, because nobody has yet tested AMD's APUs from the frametime standpoint, something I consider very important since they are extremely bandwidth bound, as they use the system memory as a framebuffer.
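To make that framerate-vs-frametime point concrete, here's a minimal sketch (my own synthetic numbers, not FRAPS output) of two runs with identical average FPS but very different frametime behaviour:

```python
# Two hypothetical runs over the same ~1 second of gameplay. Both deliver
# the same average FPS, but the second hides a big hitch every 10th frame.
smooth = [16.67] * 60                  # ms per frame, perfectly even pacing
spiky = ([10.0] * 9 + [76.7]) * 6      # nine fast frames, then a spike

def avg_fps(frametimes_ms):
    """Average FPS: frames rendered divided by total elapsed seconds."""
    return len(frametimes_ms) / (sum(frametimes_ms) / 1000.0)

def p99_ms(frametimes_ms):
    """Approximate 99th percentile frametime in milliseconds."""
    ordered = sorted(frametimes_ms)
    return ordered[int(0.99 * (len(ordered) - 1))]

print(round(avg_fps(smooth)), round(avg_fps(spiky)))   # both report ~60 FPS
print(p99_ms(smooth), p99_ms(spiky))                   # 16.67 vs 76.7 ms
```

Both runs report 60 FPS, but the second would feel like constant micro-stutter, which is exactly why this review looks at frametimes and not just the FPS counter.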

Without further delays, this is my testing setup:

  • AMD A10-5700
  • Gigabyte F2A75M-HD2
  • 2x4GB GSKILL Ares
  • Hitachi 5K750 500GB 2.5"

And these are the parameters each component is set at:

  • CPU: 3.4GHz @1.12V, Turbo and APM disabled. C1, C6 and CnQ enabled. Why Turbo disabled? Because I want to.
  • GPU: 760MHz.
  • Memory settings: DDR3-1866, 8-10-9-27-36 2T 1.58V. Can go tighter, but doesn't make much of a difference (since it's 8-9-8).
  • Monitor: 1920x1080, 60Hz.

As a clarifying note, some people have asked why I have disabled Turbo Core. These are the reasons:
-It increases CPU power consumption by around 20W more than my custom settings under heavy stress.
-It makes the system shut down after a while (PSU overheating).
-It only improves performance by around 5%.

I have also tried overclocking the processor to 3.5GHz across all cores, and it increases power consumption by about 10% (from 60 to 67W) in 3DMark Vantage, while yielding only a 1% performance improvement (from P5815 to P5860), something I could achieve by increasing the GPU clock 10MHz. Absolutely not worth it, and even less on a power constrained platform such as mine.
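For those who like numbers, a quick perf-per-watt sanity check of that tradeoff, using the Vantage figures quoted above:

```python
# Perf-per-watt check of the 3.5GHz all-core overclock described above:
# roughly +12% power draw (60W -> 67W) for under +1% score (P5815 -> P5860).
stock_score, stock_watts = 5815, 60
oc_score, oc_watts = 5860, 67

score_gain_pct = (oc_score - stock_score) / stock_score * 100   # ~0.8%
power_gain_pct = (oc_watts - stock_watts) / stock_watts * 100   # ~11.7%

print(round(score_gain_pct, 1), round(power_gain_pct, 1))
print(round(stock_score / stock_watts, 1), round(oc_score / oc_watts, 1))
# Performance per watt drops from ~96.9 to ~87.5 marks/W, so the
# overclock is a clear net loss on a power-limited PicoPSU build.
```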

Now to the actual tests!

DiRT2:

Testing here is done in two situations that make different usage of the GPU.
The 1st scenario is in Morocco, in a Rally event. The second one is in Japan, in a Rallycross event.
Why this distinction? Because Morocco makes extensive use of ground vegetation, and there are lots of shadows cast by buildings. And Japan because it's an urban RX track with seven other racers, and lots of changing surfaces that make extensive use of physics calculations.

Everything set to high except postprocessing, which is at Medium. No AA, 8X anisotropic filtering.

Morocco:



Japan:



The Elder Scrolls V: Skyrim:

Testing is done inside Ilinalta's Deep (a necromancer cave), because it features lots of water and lots of particles. I also played my mage, so I'm making extensive use of fire. Testing here is done with manual core affinity set to the 1st thread of the second module, as it has proven to give the smoothest experience.
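For anyone wanting to replicate the affinity trick, here's a small sketch of how the mask works. The CPU numbering is an assumption on my part (that Windows enumerates Trinity's 2 modules x 2 threads as logical CPUs 0-3, in order), and the `start /affinity` invocation in the comment is the usual Windows way to apply it; verify on your own system:

```python
# Affinity masks are bitmasks: bit N set = logical CPU N allowed.
# Assuming logical CPUs 0-3 map to module 0 thread 0/1, module 1 thread 0/1,
# the 1st thread of the second module is logical CPU 2.
def affinity_mask(logical_cpu):
    """Mask allowing only the given logical CPU."""
    return 1 << logical_cpu

module, thread = 1, 0              # second module, first thread
cpu = module * 2 + thread          # -> logical CPU 2
print(hex(affinity_mask(cpu)))     # 0x4; e.g. start /affinity 4 TESV.exe
```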

Settings as follows:





Just Cause 2:

Using the built-in benchmark (Concrete Jungle), nothing too fancy.
Settings as follows:





As you can see it stutters heavily; and after trying to get adequate framerates, I concluded that even setting everything to Low does not help: framerates stay the same. There is definitely something strange with this game. Note that, for some reason, I can play just fine.

GTAIV: Episodes from Liberty City:

Again, the bundled benchmark tool from The Lost and Damned.
Settings as follows:





Mafia 2:

And once more, the benchmarking tool.
These are the settings:





The peaks in the 1st third of the test correspond to mini-freezes, which for some reason happen in the benchmark but which I've never experienced in-game.

Guild Wars 2:

Taking a stroll around Plains of Ashford, killing some enemies here and there.
This is how it's set up:





Despite the very apparent stuttering, the game played very smoothly. It seems that the frametimes, while varying a lot, never get long enough for a stutter to be noticeable.

Planetside 2:

Simulating a field fight, lots of vehicles involved, some infantry, taking a couple of scoped shots. Playing as a NC Infiltrator, CQ sniping.
Render quality at 100%, everything else flat out at Low. Either way it is unplayable, thanks to SOE stupidly deciding that forcing antialiasing was a good idea.



Sniper Elite V2:

The benchmark tool, too.
Configured like this:





Team Fortress 2:

Everything at flat out maximum, including Antialiasing and anisotropic filtering.



It simply tears through this game like a hot knife through butter. That one spike corresponds to a death.


Far Cry 3:

Settings bottomed out. Nothing else. Ran in DX11 and DX9, for comparison:

DirectX 9:



DirectX 11:



Performance is less than stellar (albeit arguably playable depending on your style), and ironically, it only gets worse under DX9. The game also looks worse, so there's no reason why anyone would want to use DX9 mode.

Chivalry: Medieval Warfare

Testing was done against 7 bots in a TDM match in Throneroom, as I find it a rather nice scenario. Framerates in Arena will be higher, and lower in some huge Team Objective maps.

Settings as follows:





There is A LOT of stuttering, but overall the game is pretty playable. You do notice the stutter, though.

And here it is with MUCH higher settings; needless to say, it's night and day in looks.





Borderline playable, but just about there. Logically, the stuttering is also noticeably higher.

3DMark Vantage:

Nothing too fancy, Performance preset, bare bones:



Not too bad! No frametime tests here, as I ran this test for the raw score, and FRAPS lowers it.

And there we go, tests with Catalyst 13.2 Beta 4!


As to why I have switched to line graphs instead of bars like before: because they let you see the actual frametimes better. If you look closely, it's also visible in the bar graphs, but it's clearer in the line.
DiRT2:

Morocco:



Japan:



DiRT2 once again shows its love for the APU. Framerates stay the same too. Exact same stuttering, aka none.

The Elder Scrolls V: Skyrim:



Same performance. Nothing to see here.


Just Cause 2:



Still dog poo, although there is a welcome 7% framerate improvement, which translates to an increase from 18.6 to 19.9 FPS. Not a lot, but welcome. Frametimes still spike like a picket fence.

GTAIV: Episodes from Liberty City:

Mafia 2:

Guild Wars 2:

Planetside 2:

Sniper Elite V2:



Framerates go slightly down, stuttering goes slightly up. Something isn't playing nicely here.

In conclusion, you can see that if you're not too picky about framerates (I know there are people who cannot stand less than 60FPS no matter what), Trinity plays just fine, with results varying from game to game. Some can be maxed straight out, some have to be bottomed out in order to play. But so far no game I've played has been outright unplayable.

Frametimes are, as you have seen, very variable, something I bet has to do with memory bandwidth. As you overclock, framerates improve, but the frametime variations stay the same, becoming worse in some cases and better in others (games that are power starved rather than bandwidth starved).
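If anyone wants to quantify "stutter" from their own FRAPS frametimes dump rather than eyeballing graphs, here's one simple metric I find illustrative. The 2x-median threshold is my own arbitrary choice, not any standard:

```python
# Share of frames that took more than twice the run's median frametime.
# A perfectly smooth run scores 0.0; heavy spiking pushes it up.
def stutter_ratio(frametimes_ms):
    ordered = sorted(frametimes_ms)
    median = ordered[len(ordered) // 2]
    spikes = [t for t in frametimes_ms if t > 2 * median]
    return len(spikes) / len(frametimes_ms)

print(stutter_ratio([16.7] * 100))             # 0.0 - no spikes at all
print(stutter_ratio([16.7] * 95 + [80.0] * 5)) # 0.05 - 5% spiky frames
```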

Overall, I think you cannot ask more of a platform that costs 250€ complete (board, processor and RAM), and I am pretty surprised that it is actually capable of this level of performance.

So far, this is it!
I will be adding more benchmarks as I fancy, but Metro 2033, Far Cry 3 and Crysis Warhead will eventually get done. I have to rip my DVDs to ISOs and transfer them to the Trinity, as I've got no ODD.

I take requests, but please note that I do not own most games out there, so there is little I can do in that respect.

The overclocking results are in the next post!

Thanks for reading, folks!
 
#2 ·
So, as I can guess, overclocking results are something you mates would be interested in... After all, this is an overclocking community!


These are the current settings:

CPU: 3.36GHz, 113MHz BCLK, 1.8GHz NB link. 1.12V
GPU: 851MHz, stock voltage.
Memory: 2096MHz, CL9-11-9-28 1T, 1.67V.
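As a rough yardstick for what the memory overclock buys, this is the theoretical peak bandwidth math (dual channel, 64 bits per channel); the real-world MaxxMEM numbers below will of course land lower:

```python
# Theoretical DDR3 peak bandwidth: transfers/s x 8 bytes/channel x channels.
def ddr3_peak_gbs(mt_per_s, channels=2, bytes_per_transfer=8):
    return mt_per_s * 1e6 * bytes_per_transfer * channels / 1e9

stock = ddr3_peak_gbs(1866)   # DDR3-1866 -> ~29.9 GB/s
oc = ddr3_peak_gbs(2096)      # DDR3-2096 -> ~33.5 GB/s
print(round(stock, 1), round(oc, 1), f"{(oc / stock - 1) * 100:.0f}% more")
```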

Some CPUZ and GPUZ captures...



I'll drop in something here that I thought might interest some of you: MaxxMEM2 runs at both stock and overclocked settings, so you can see how the bandwidth and latencies actually improve!
Stock:



Overclocked:

And this is at 2100MHz CL9-11-10. The actual testing has been done at 9-11-9, but the scores are almost identical.


And onto the testing! Uploading as I get them done.

The display settings are absolutely identical to the ones above, so the comparison is 1:1. The situations are also identical, or at least similar (in the cases where gameplay is involved, such as DiRT2, Skyrim, GW2 and Planetside 2).

DiRT 2:

Morocco:



Japan:



Frametimes in DiRT2 have improved by a nice margin, stuttering has also gone down a bit. This game definitely likes the improved memory bandwidth!

The Elder Scrolls V: Skyrim:



Skyrim is a whole different story. Framerates improve by a slight margin, but stuttering increases. Not by much, but enough to cause a handful of situations where you can notice micro-freezes. I will look into this some other time.

Just Cause 2:



You can see most of the stuttering is gone, but there are still noticeable spikes every few seconds. Framerates have also improved by a nice margin.

GTA IV: Episodes from Liberty City:



There has been a slight improvement in TLAD, around a 5% framerate increase, and some stuttering has gone away. The main peaks are still there, so the cause has to lie somewhere other than the IGP.

Mafia II:



There has been a slight improvement in framerates, and stuttering remains, just as before, relatively low. There are still apparent frame latency peaks, and that very noticeable freeze at the beginning of the benchmark.

Guild Wars 2:



Gameplay takes place in the very same area, killing the same mobs. But since there can be variations, don't compare the graphs 1:1 with the ones above, but rather as an overall trend.

The framerate improvement is very large (in the region of 20%), and the stuttering is almost completely gone. The variations in framerate mostly correspond to sudden camera rotations, skill activations and so on.

Planetside 2:



In Planetside, a considerable framerate improvement is seen, as well as an overall reduction in stuttering. Mind you, this second pass was done in a more demanding environment, an Amp Station as opposed to an open field; so the improvement is actually higher than what is reflected here.

Sniper Elite V2:



Sniper Elite V2 sees a huge drop in overall stuttering, but a rather small improvement in overall framerates. Still, the game becomes way more playable, since the stuttering is now almost unnoticeable. It seems apparent that bandwidth was the cause of the stuttering, although the GPU is responsible for the not too stellar framerates.

Team Fortress 2:


Settings ALL the way up. As easy as that. AA to 8X, AF to 16X, everything is at its absolute maximum.



TF2 is like a walk in the park for the IGP. It kicks out beastly framerates with absolutely zero stuttering. That one 35FPS dip corresponds to a death.


Far Cry 3:

In Far Cry 3 I've found that the only settings that are anything close to playable are the flat out lowest. I kept the resolution at 1920x1080, though. Both DirectX 9 and DirectX 11 have been tested:

DirectX 9:



DirectX 11:



As you can see, there's flat out no reason why the game shouldn't be run in DX11. Framerates are for the most part identical, and even though there are more latency spikes in DX11, they don't disrupt gameplay as much as one would think. The game does look prettier, take that for granted.

None of the options is what I'd call truly playable, albeit framerates stay close to 30 FPS. With a proper corded mouse the game is much more playable than with the wireless Logitech I'm using. Blame it on the added latency from the wireless link, on top of the already low framerates.

Nonetheless, I have been able to play for half an hour without many hiccups besides framerates dropping to the low 20s under heavy action; so I was forced to hit, hide, hit, hide.

Chivalry: Medieval Warfare



Stuttering hasn't improved (it appears worse because I had more fights this second round), but framerates have gone up... By 15%! The game is noticeably more playable. Unreal likes memory bandwidth, it seems.

And re-tested again with the same higher settings as above.



Not quite as smooth as before, but it's perfectly playable, and it looks far better. Stuttering is a bit higher, but not by a lot.

3DMark Vantage:

Same situation as before. Performance preset, bare stock.



I see a major improvement here! I'm getting a 10% performance boost out of the blue. This is a very clear and verifiable indication that performance does indeed benefit a lot from RAM overclocking.
DiRT2:

Japan:

The Elder Scrolls V: Skyrim:

Just Cause 2:

GTAIV: Episodes from Liberty City:

Mafia 2:

Guild Wars 2:

Planetside 2:

Sniper Elite V2:

So, as you can see, overclocking the memory DOES help with stuttering in every single case (except Skyrim), and I also saw a framerate improvement in every case, though it varies a lot depending on the game.

Please note that in this scenario I used the BCLK to boost the memory speed, and that resulted in an IGP speed boost too. So these results are the combination of both.

Hope it was interesting, and if you have any questions, suggestions, or requests, please don't hesitate to say so!


~Artik.
 
#4 ·
Awesome read, and it's not an A10, but I talked my GF into getting an A8 (7640G, I think), and I can play most games at decent settings. I was playing NFS Shift yesterday maxed out (only 2x AA and 4x AF though) at her laptop's native res of 1366x768 with zero stuttering, and great frames. I didn't use FRAPS, but I was a good ways above 30 FPS, probably in the 45-70 FPS area. Kickass for a sub-$500 laptop if you ask me... I want one now too lol
 
#5 ·
Quote:
Originally Posted by Aaron_Henderson View Post

Awesome read, and it's not an A10, but I talked my GF into getting an A8 (7640G, I think), and I can play most games at decent settings. I was playing NFS Shift yesterday maxed out (only 2x AA and 4x AF though) at her laptop's native res of 1366x768 with zero stuttering, and great frames. I didn't use FRAPS, but I was a good ways above 30 FPS, probably in the 45-70 FPS area. Kickass for a sub-$500 laptop if you ask me... I want one now too lol
Thanks!


Yup, they're some little beasts!
And as I'm experiencing, they are bandwidth starved at higher resolutions (I'm at 1920x1080, mind you). As you start kicking up the memory speeds and lowering timings, framerates start to go up and stuttering slowly disappears; but it shouldn't stutter much, if at all, at your resolution.

Quote:
Originally Posted by jason387 View Post

How different would the performance be between the A6-5300 and the A6-5400K at stock settings, in overall performance? My gf will be updating her rig and she is a very casual gamer, and it would be at a low resolution, maybe 1366x768 at most.
Not much, maybe 5-7%. For the price difference I'd say grab the A8: it has a more powerful IGP, but the biggest difference is the dual module design, which really boosts gaming performance by quite a lot.

Stay tuned, I'm halfway done.
 
#7 ·
Quote:
Originally Posted by jason387 View Post

Nice review. I think the A8 CPUs are almost double the price of the A6-5300.
Lol, yes, they're 30 dollars more. But I can tell you right now that the performance increase is well worth it.

Now, if you want to wait for Kaveri and meanwhile hold on with a low end A6, that's another story.


P.S: Overclocked benchmarks updated and finished!
 
#10 ·
Wouldn't there be a noticeable performance boost if you used a 7200RPM HDD with 32MB cache, and maybe a separate HDD for games?

Lowering the resolution would help too... Just sayin', for more thorough benches.
That's what I'd do, anyhoo. I wouldn't be expecting miracles from the standalone APU's GPU at 1080p.

Well done though!
 
#11 ·
Quote:
Originally Posted by Hot Wirez View Post

Wouldn't there be a noticeable performance boost if you used a 7200RPM HDD with 32MB cache, and maybe a separate HDD for games?
Not at all. The system has enough RAM and VRAM to load everything into it. If anything, loading times would be lower. Mind you, this drive has better access times than my Seagate 7200.12 and my Spinpoint F3.
Quote:
Lowering the resolution would help too... Just sayin', for more thorough benches
It would, but that's not the point of this review. Most people are using these as HTPCs, sitting hooked to a Full HD HDTV, and 1920x1080 is also the standard resolution for monitors of 21" and over.


Also, I wanted to test the GPU at a point where it would become bandwidth starved, to actually see how the GPU would cope with it, and how a memory overclock would impact performance.


Plus, it's my monitor's native resolution and I hate playing at sub-native!
Quote:
Well done though!
Thank you very much!
 
#13 ·
Quote:
Originally Posted by RedSunRises View Post

Great thread!! And great testing! APUs are becoming very capable...
Oh yes, yes they are!! Thank you, good Sir!


I've completely halted Catalyst 13.1 WHQL testing, as it performs exactly the same as, and in some cases even worse than, Catalyst 12.11b.

You bet I will be the 1st person online to test the 13.2b ones on APUs, though!!
 
#19 ·
Quote:
Originally Posted by M3T4LM4N222 View Post

I have an A10-5800K + 6670 running in AMD Dual Graphics. I'd be willing to run any free benchmarks or benchmarks of games that are on my steam account (P1LLZB3RRYD0UGHB0Y) if anyone is interested. I am running the 13.1 drivers.

My youtube channel www.youtube.com/techismycologne has videos of AMD Dual Graphics performance as well.
I would be interested. I just want to see if Dual Graphics actually has any benefit. Please compare the 5700 alone with the 5700 + 6670.


I'll rep you if you do!
 
#20 ·
Quote:
Originally Posted by eBombzor View Post

Beautiful thread. Could you do Dual-GPU benchmarks w/ 6450, 6570, or 6670? I want to see if anything actually improves, b/c there are no reliable benchmarks online.
Thanks!

Unfortunately I cannot, as I do not own any of those cards (and the PSU isn't able to power them either).
Quote:
Originally Posted by M3T4LM4N222 View Post

I have an A10-5800K + 6670 running in AMD Dual Graphics. I'd be willing to run any free benchmarks or benchmarks of games that are on my steam account (P1LLZB3RRYD0UGHB0Y) if anyone is interested. I am running the 13.1 drivers.

My youtube channel www.youtube.com/techismycologne has videos of AMD Dual Graphics performance as well.
Please, Metalman, do a review with Dual Graphics, I'm also very interested in it!!

Quote:
Originally Posted by computerparts View Post

Very nice thread thanks for taking the time to do this. Have you tried setting the cpu affinity to one core on Skyrim?
Thanks Sir!

Yes, I did try manual core affinity, and it did help. In fact, it got rid of about 75% of the stuttering, if memory serves. Once Catalyst 13.2b (the supposed stuttering reducer) is out, I'll redo Skyrim with all three releases, with manual core affinity, to see if it still helps, or if they have finally solved that matter.


Edit: The benches above do reflect manual core affinity results, btw.
 
#24 ·
I hope you know that you're crippling the CPU by disabling Turbo mode.

Trinity needs high clocks to perform moderately well in single thread, hence the Turbo. By turning it off, in games like Skyrim and other games that do not take advantage of multiple cores, you are killing your performance.

Also, with Trinity you want maximum GB/s from the memory, not the tightest timings.
 
#25 ·
Quote:
Originally Posted by M3T4LM4N222 View Post

Quote:
Originally Posted by akromatic View Post

this needs to be stickied.

btw can you re-run the tests at max playable conditions? i.e. if a game goes over 60fps, increase textures etc., and if the game drops below 35fps then drop the textures etc., for 1080p and 720p
I can try, but what games, what benchmarks? I need suggestions!
The mainstream games that everyone plays. All the benchmarks online are of Shogun, StarCraft, Dawn of War, and some other titles that almost no one plays.

Examples of mainstream would be, TF2, Blacklight Retribution, BF3, Skyrim, Far Cry 3, etc.

Comparisons between the 5700 and the 5700 + 6670 are what I'm looking for.
 
#26 ·
Quote:
Originally Posted by akromatic View Post

this needs to be stickied.

btw can you re-run the tests at max playable conditions? i.e. if a game goes over 60fps, increase textures etc., and if the game drops below 35fps then drop the textures etc., for 1080p and 720p
Sure thing! In fact, I was shooting for the best visuals at playable settings in most games (of course, playability is subjective). But I guess I can always try to tweak a bit more.


Quote:
Originally Posted by ebduncan View Post

I hope you know that you're crippling the CPU by disabling Turbo mode.

Trinity needs high clocks to perform moderately well in single thread, hence the Turbo. By turning it off, in games like Skyrim and other games that do not take advantage of multiple cores, you are killing your performance.

Also, with Trinity you want maximum GB/s from the memory, not the tightest timings.
1) I know. But if I leave Turbo on, my PicoPSU wants to murder me. And testing in 3DMark Vantage showed so little difference that I decided to leave it off.

2) I know that too. That's why I run the highest memory speeds the chip allows me, with timings as tight as I can get. Both actually give bandwidth, and the iGPU loves low latency; it cuts down on stuttering.
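To put numbers on that speed-vs-timings point: absolute CAS latency in nanoseconds is the CL count divided by the memory's I/O clock (half the DDR transfer rate). A quick sketch using my own settings as the example:

```python
# Absolute CAS latency: CL cycles / I/O clock (MHz) -> nanoseconds.
def cas_ns(cl, mt_per_s):
    return cl / (mt_per_s / 2) * 1000

print(round(cas_ns(8, 1866), 2))   # DDR3-1866 CL8  -> ~8.57 ns
print(round(cas_ns(9, 2096), 2))   # DDR3-2096 CL9  -> ~8.59 ns
# Latency stays essentially flat while bandwidth goes up ~12%,
# so you get the extra GB/s without giving up responsiveness.
```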

Quote:
Originally Posted by eBombzor View Post

The mainstream games that everyone plays. All the benchmarks online are on Shogun, StarCraft, Dawn of War, and some other titles that almost no one plays.

Examples of mainstream would be, TF2, Blacklight Retribution, BF3, Skyrim, Far Cry 3, etc.

Comparisons between the 5700 and the 5700 + 6670 is what I'm looking for.
I can't believe I forgot about TF2... XD

Once Catalyst 13.2b is out, I'll add Far Cry 3 and TF2 to the games list, both with Cat 13.1 (12.11b WHQL'd) and Cat 13.2b. Promise!