Hello!
I stumbled across Benchmark3D's post about frametime measurement and whether framerate really reflects playability, and decided to give that testing method a go, using it as an excuse to review my A10-5700's gaming performance.
So, let me elaborate:
The purpose of this thread is to analyze whether AMD's new generation of Trinity APUs is really usable for gaming. And I don't mean whether they are capable of producing playable framerates, but whether they can deliver playable frametimes.
It's absolutely worthless to average 60 FPS if one frame in every ten takes over 500ms to render; you will get unplayable stuttering.
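To put numbers on that, here is a toy example with made-up frametimes (not data from my tests): an average framerate can look healthy while individual frames still stall long enough to be felt.

```python
# Made-up frametime trace: 8 ms frames with a 100 ms stall every
# tenth frame. The average FPS looks fine; the stalls do not.
frametimes_ms = [100.0 if i % 10 == 9 else 8.0 for i in range(60)]

total_s = sum(frametimes_ms) / 1000.0    # total wall time, in seconds
avg_fps = len(frametimes_ms) / total_s   # average framerate
worst_ms = max(frametimes_ms)            # worst single frame

print(f"average: {avg_fps:.0f} FPS, worst frame: {worst_ms:.0f} ms")
```

That trace averages about 58 FPS, yet every tenth frame takes 100ms, which is a very visible hitch.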

Why do I do this? Well, because nobody has tested AMD's APUs from the frametime standpoint yet, something I consider very important since they are extremely bandwidth-bound, as they use system memory as a framebuffer.
Without further delays, this is my testing setup:
- AMD A10-5700
- Gigabyte F2A75M-HD2
- 2x4GB GSKILL Ares
- Hitachi 5K750 500GB 2.5"
And these are the parameters each component is set at:
- CPU: 3.4GHz @ 1.12V, Turbo and APM disabled; C1, C6 and CnQ enabled.
- GPU: 760MHz.
- Memory: DDR3-1866, 8-10-9-27-36 2T @ 1.58V. It can go tighter (8-9-8), but that doesn't make much of a difference.
- Monitor: 1920x1080, 60Hz.
As a clarifying note, some people have asked why I have disabled Turbo Core. These are the reasons:
- It increases CPU power consumption by around 20W over my custom settings under heavy stress.
- It makes the system shut down after a while (PSU overheating).
- It only improves performance by around 5%.
I have also tried overclocking the processor to 3.5GHz across all cores. It increases power consumption by about 10% (from 60W to 67W) in 3DMark Vantage while yielding a 1% performance improvement (from P5815 to P5860), something I could achieve just by raising the GPU clock by 10MHz. Absolutely not worth it, and even less so on a power-constrained platform like mine.
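A quick sanity check on those numbers:

```python
# Overclock trade-off from the measurements above: 3DMark Vantage
# score vs. measured power draw, stock 3.4GHz vs. all-core 3.5GHz.
stock_score, stock_watts = 5815, 60
oc_score, oc_watts = 5860, 67

perf_gain = (oc_score / stock_score - 1) * 100    # ~0.8 %
power_cost = (oc_watts / stock_watts - 1) * 100   # ~11.7 %

print(f"+{perf_gain:.1f}% performance for +{power_cost:.1f}% power")
```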
Now to the actual tests!

DiRT2:
Testing here is done in two situations that stress the GPU differently.
The first scenario is in Morocco, in a Rally event; the second is in Japan, in a Rallycross event.
Why this distinction? Because Morocco makes extensive use of ground vegetation and has lots of shadows cast by buildings, while Japan is an urban Rallycross track with seven other racers and lots of changing surfaces that make extensive use of physics calculations.
Everything is set to High except post-processing, which is at Medium. No AA, 8x anisotropic filtering.
Morocco:

Japan:

The Elder Scrolls V: Skyrim:
Testing is done inside Ilinalta's Deep (a necromancer cave) because it features lots of water and lots of particles. I also played my mage, so I'm making extensive use of fire. Testing here is done with core affinity manually set to the first thread of the second module, as that has proven to give the smoothest experience.
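Setting the affinity by hand through Task Manager works, but it can also be scripted. Here's a sketch using the third-party psutil module; the CPU index and the process name `TESV.exe` are assumptions: on a 2-module, 4-thread Trinity, logical CPU 2 would be the first thread of the second module if Windows numbers them in order, so verify the numbering on your own machine.

```python
import psutil  # third-party: pip install psutil

def pin_process(process_name, cpu_index):
    """Pin every running process with the given name to one logical CPU."""
    pinned = 0
    for proc in psutil.process_iter(["name"]):
        if proc.info["name"] != process_name:
            continue
        try:
            proc.cpu_affinity([cpu_index])  # restrict to that CPU only
            pinned += 1
        except psutil.Error:                # no permission, or it exited
            pass
    return pinned

# Assumed usage: pin Skyrim to logical CPU 2 (first thread, second module).
# pin_process("TESV.exe", 2)
```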
Settings as follows:


Just Cause 2:
Using the built-in benchmark (Concrete Jungle), nothing too fancy.
Settings as follows:


As you can see, stuttering is extremely high; after trying to get adequate framerates, I found that even setting everything to Low does not help: framerates stay the same. There is definitely something strange going on with this game. Oddly enough, it plays just fine for me.
GTAIV: Episodes from Liberty City:
Again, the bundled benchmark tool from The Lost and Damned.
Settings as follows:


Mafia 2:
And once more, the benchmarking tool.
These are the settings:


The peaks in the first third of the test correspond to mini-freezes that, for some reason, happen in the benchmark but that I've never experienced in-game.
Guild Wars 2:
Taking a stroll around Plains of Ashford, killing some enemies here and there.
This is how it's set up:


Despite the very spiky graph, the game played very smoothly. It seems that the frametimes, while varying a lot, never get long enough for a stutter to be noticeable.
Planetside 2:
Simulating a field fight: lots of vehicles involved, some infantry, and a couple of scoped shots. Playing as an NC Infiltrator, close-quarters sniping.
Render quality at 100%, everything else flat-out Low. Either way it is unplayable, thanks to SOE, who decided that forcing antialiasing was a good idea.

Sniper Elite V2:
The benchmark tool, too.
Configured like this:


Team Fortress 2:
Everything at flat-out maximum, including antialiasing and anisotropic filtering.

It simply tears through this game like a hot knife through butter. That one spike corresponds to a death.
Far Cry 3:
Settings bottomed down. Nothing else. Ran at DX11 and DX9, for comparison:
DirectX 9:

DirectX 11:

Performance is less than stellar (albeit arguably playable, depending on your style), and ironically, it only gets worse with DX9. The game also looks worse, so there's no reason anyone would want to use the DX9 mode.
Chivalry: Medieval Warfare
Testing was done against 7 bots in a TDM match in Throne Room, as I find it a rather representative scenario. Framerates in Arena maps will be higher, and lower in some huge Team Objective maps.
Settings as follows:


There is A LOT of stuttering, but overall the game is pretty playable. You do notice the stutter though.
And here it is with MUCH higher settings; needless to say, the difference in looks is night and day.


Borderline playable, but just about there. Naturally, the stuttering is also quite a bit higher.
3DMark Vantage:
Nothing too fancy, Performance preset, bare bones:

Not too bad! No frametime tests here, as I ran this one for the raw score, and FRAPS lowers it.
And there we go: tests with Catalyst 13.2 Beta 4!

Why have I switched to a line graph instead of the bar view from before? Because it shows the actual frametimes better. If you look closely it's visible in the bar graphs too, but it's clearer in the line graph.
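For anyone wanting to reproduce the graphs: FRAPS' frametimes log is just a CSV of frame numbers and cumulative timestamps in milliseconds, so turning it into per-frame times for a line graph takes a few lines of Python (the filename below and the matplotlib part are placeholders, not my actual files):

```python
import csv

def load_frametimes(path):
    """Read a FRAPS 'frametimes' CSV (frame number, cumulative ms)
    and return per-frame times in milliseconds."""
    stamps = []
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)                      # skip the header row
        for row in reader:
            stamps.append(float(row[1]))  # cumulative timestamp in ms
    # per-frame time = difference between consecutive timestamps
    return [b - a for a, b in zip(stamps, stamps[1:])]

# To plot it as a line graph like the ones in this post:
# import matplotlib.pyplot as plt
# plt.plot(load_frametimes("frametimes.csv"))
# plt.ylabel("frametime (ms)"); plt.show()
```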
DiRT2:
Morocco:

Japan:

DiRT2 once again shows its love for the APU. Framerates stay the same, and so does the stuttering: there is none.
The Elder Scrolls V: Skyrim:

Same performance. Nothing to see here.
Just Cause 2:

Still dog poo, although there is a welcome 7% framerate improvement, from 18.6 to 19.9 FPS. Not a lot, but welcome. It still spikes like a picket fence.
GTAIV: Episodes from Liberty City:
Mafia 2:
Guild Wars 2:
Planetside 2:
Sniper Elite V2:

Framerates go slightly down, stuttering goes slightly up. Something isn't playing nicely here.
In conclusion: if you're not too picky about framerates (I know there are people who cannot stand less than 60 FPS no matter what), Trinity plays just fine, with results varying from game to game. Some games can be maxed out outright; others have to be turned all the way down to be playable. But so far, no game I've played has been outright unplayable.
Frametimes are, as you have seen, highly variable, something I bet is related to memory bandwidth. As you overclock, framerates improve, but the frametime variance stays the same, getting worse in some cases and better in others (games that are starved for GPU power rather than bandwidth).
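One way to put a number on that variability (a sketch; the sample data and the choice of the 99th percentile are mine, not from the tests above): compare the average frametime against a high percentile, since the average hides the spikes.

```python
def frametime_summary(frametimes_ms):
    """Return (average, 99th percentile) of a frametime trace in ms."""
    ordered = sorted(frametimes_ms)
    avg = sum(ordered) / len(ordered)
    # nearest-rank 99th percentile: the slowest 1% of frames start here
    p99 = ordered[min(len(ordered) - 1, int(0.99 * len(ordered)))]
    return avg, p99

# Made-up trace: mostly smooth 16 ms frames with a few 40 ms spikes.
sample = [16.0] * 95 + [40.0] * 5
avg, p99 = frametime_summary(sample)
print(f"average {avg:.1f} ms, 99th percentile {p99:.1f} ms")
```

A big gap between the two numbers means stutter, even when the average framerate looks fine.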
Overall, I don't think you can ask for more from a platform that costs 250€ complete (board, processor and RAM), and I am pretty surprised it manages this level of performance.
So far, this is it! I will be adding more benchmarks as I fancy; Metro 2033, Far Cry 3 and Crysis Warhead will eventually get done. I have to rip my DVDs to ISOs and transfer them to the Trinity, as I've got no optical drive.

I take requests, but please note that I do not own most games out there, so there is little I can do in that respect.
The overclocking results are in the next post!
Thanks for reading, folks!
