
The Intel A380 thread

So about a month ago I picked up an ASRock Challenger Arc A380 (A380 CLI 6G OC) from Newegg. I've been playing around with it and have messed around with a bunch of games, old and new.

Long story short: it's a graphics card, 9 out of 10 of the games I've tried were quite playable at 1080p or better, and the drivers are rough in many areas.

But most of the driver problems can be dealt with, so a place for tips on how to do that would be handy for somebody getting one of these. Mostly it was just a graphics card and acted like a comparably powered one from AMD or Nvidia. Driver deficiencies did appear in several games, and mostly they just slowed performance relative to what the other two would get; even a poorly performing old game would usually still give you many more frames than a newer game at the same settings and resolution.

I played and recorded these games on my A380 with driver 31.0.101.3277 on Windows 11. A lot of the problems may go away with newer drivers, but some will probably hang around for a while.



Since this is overclock.net, the first thing I'll bring up is overclocking.

Arc Control is the only overclocking tool that currently works, and it gives the ASRock A380 four sliders to play with:
1. Temperature limit, which sets the point where the card thermally throttles.
2. Power limit, which sets the point where the card power throttles.
3. GPU Voltage Offset, which raises your GPU clock along the predefined clock/volt curve: you raise the upper voltage limit, and the clocks follow the curve up to a new upper clock limit. The offset only goes in the positive direction.
4. GPU Performance Boost, which increases your clocks similarly to raising the volts, except the volts only go up a little. This is the setting that will give you the most instability and crashes if you push it too far.

That is all so far for my A380. Hopefully more will come.

Here's an official demonstration: [embedded video]

I walked through the two functional sliders, screenshotting along the way while running SOTTR, like the image below (the highest clocks I got during that, and no, it isn't stable :p ), and put the data into the spreadsheet below that.
[Screenshot: SOTTR running with the OC applied, showing the highest clocks reached]

[Spreadsheet: results from the slider walkthrough]

As you can see, I didn't get a ton of fps in that game from the OC, but some games scaled better.
A GPU Performance Boost above 25 got unstable for my card (that boost is about the equivalent of a 55-60 mV undervolt). Above 2700 MHz was also getting unstable; my guess is the volt/frequency curve needs a more-than-linear increase in volts at higher frequencies. Any volt/boost combo that showed 66 W was at my card's power throttle limit and would drop volts/clocks at a rate depending on how far over the limit I was asking for. And finally, the core volt/frequency curve is lightly dependent on temperature: the whole thing shifts toward more volts at higher temps, like other silicon.
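To picture why volt/boost combos run into that 66 W wall, here's a toy sketch using the standard CMOS dynamic-power relation P ≈ k·V²·f. The constant k and the voltage/clock pairs below are made-up illustrative values, not measured A380 data:

```python
# Toy model: dynamic power grows as V^2 * f, so pushing volts and clocks up
# together blows past a fixed power limit quickly.
# K and the V/f pairs are illustrative guesses, not measured A380 numbers.

K = 18.3  # chosen so ~1.10 V @ 2.45 GHz lands near ~54 W

def est_power_w(volts: float, mhz: int) -> float:
    """Rough dynamic power estimate: P ~ K * V^2 * f(GHz)."""
    return K * volts**2 * (mhz / 1000)

for volts, mhz in [(1.10, 2450), (1.15, 2600), (1.20, 2700), (1.25, 2800)]:
    p = est_power_w(volts, mhz)
    note = "  <- past a 66 W limit: card sheds volts/clocks" if p > 66 else ""
    print(f"{volts:.2f} V @ {mhz} MHz -> ~{p:.0f} W{note}")
```

Because power scales with V² times f, each step up the curve costs more watts than the last, which also fits my guess that the curve needs more than a linear voltage increase at high clocks.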

A driver deficiency I found here is that Arc Control's performance tuning didn't always apply my OC decreases; I worked around it by pushing the "reset to defaults" button on that screen and starting over. That always worked, but quietly holding on to an unstable OC setting is a flaw that could frustrate some. Arc Control always applied my OC increases.

If I found instability in a game, it was almost always my Performance Boost number being too high. Since my card would usually crash out of the game, and could even crash the system if it was undervolted really far, I gradually decreased this setting to 20 or so. Some game/setting combos wanted it even lower for stability.



OK on to gaming.

I have a bunch of games from over the years: a mix of games I wanted and some I got because they were on sale. I mostly like RPG/action single-player games and like to play them at max graphical settings at about 60 fps, so that is pretty much what you will see. I tossed in most of the ones I tried that did work and all but two that didn't. The two I didn't include were FO3, which didn't work because of GFWL, and Sacred 2, which also doesn't work with my 3080. I'll put these vids and notes under separate spoilers because they are long. I encourage skipping to whatever section is relevant to you, because the videos are long and boring, and my gameplay can be pretty bad in games I never really played or have forgotten how to.
Also, recording the games dropped my framerates by anywhere from single digits to maybe 10%, depending on the game. Probably the power limit, but it could be a resource thing as well.
Games in this one, in order:
Abzu - Some screen resize issues when the game res is different from the desktop res.
Alan Wake - Black screen when exiting the game; also doesn't run at 60 fps with e-cores enabled (I think driver CPU utilization issues). Better without e-cores, but still underperforming.
Alien Isolation
AC Odyssey
Astebreed
Bioshock
Bioshock 2 remastered
Bioshock Infinite
Blood Knights - Screen res changing issues
Borderlands the Pre-Sequel
Borderlands 3
Code Vein
Control
CP 2077
Games in this one, in order:
Deadfall Adventures
Deus Ex Mankind Divided
Dirt 3
Dishonored
Dishonored 2 - Relatively bad performance compared to post-Kepler Nvidia and AMD
Distance
The Division 2
Dragons Dogma Dark Arisen
Dragon Age Inquisition
Elder Scrolls 4 - A lot of AA really drops performance, and it crashes above 60 fps.
Elder Scrolls 5 special edition
Elex - Relatively bad performance compared to Nvidia and AMD
Fallout 4
Fe
Freedom Finger
Games in this one, in order:
Goat Simulator
God Eater 3
GreedFall
HZD
Hellblade: Senua's Sacrifice
Kingdoms of Amalur: Reckoning
Mad Max - AO makes weird banding; can be fixed by turning it off.
Mass Effect 2
Mass Effect Andromeda
Metal Gear Solid V: the Phantom Pain
Metro Last Light
Metro Exodus - Power hungry for the volts; might be an RT thing.
Middle Earth Shadow of Mordor
NieR Automata
A Plague Tale: Innocence
Prey
Games in this one, in order:
Quantum Break
Risen - Gives about the same fps as The Witcher 3. My Intel iGPU would probably run it faster. Still 60 fps, though.
Sacred 3
Serious Sam 3
The Sinking City
Tales of Berseria - Very CPU limited. Doesn't do 60 with e-cores enabled.
Tales of Zestiria - Very CPU limited. Doesn't do 60 with e-cores enabled. Sometimes doesn't with just P-cores either, but that might have something to do with Special K, which I added to go above the game's stock 30 fps limit.
The Technomancer - Underperforms relative to Nvidia and AMD, but still over 60.
Titanfall 2
Tomb Raider - Framerate drops in that burning house area; otherwise not too bad.
Rise of the Tomb Raider
Shadow of the Tomb Raider
Warhammer 40000: Space Marine
The Witcher 3
Wolfenstein The New Order
X-Blades
These are games that just don't play well enough.
In order:
Agony - Just crashes.
Bayonetta - Crashes.
Cloudpunk - The shopkeeper's shop doesn't show, so you just buy whatever.
Deus Ex: HR - Framerate totally tanks with fire and smoke, and there's no way in the settings to fix it.
The Witcher - Infinity plant bug.
The Witcher 2 - Grass shadows get on Geralt's face; also very bad performance.
Also, my sound didn't always record unless I set the recording audio bitrate to max. And my AV1 files are saved with an MP4 suffix even though they are AV1, but that may be a Windows thing.
I used Windows Video Editor to trim my videos, put the game names in, and join them together.
I also tossed in a lot of performance telemetry so people could better see what was going on.
And I went with just P-cores, no HT, after running into CPU bottlenecks in Alan Wake and the two Tales games I tried. I figured I would eliminate that issue as much as reasonably possible. There might be some other games I ran that are CPU limited with Arc, but they didn't jump out at me.
Arc is currently significantly more prone to CPU limitations than either Ampere or RDNA2, and it seems best suited to 60 fps at max attainable graphics settings.

Not a linux lobbyist · Discussion Starter · #3
They will probably make one.
They are going to be sold, and they will certainly have more glitches in the beginning, so tips for dealing with those, or rants against the cards, are good to have available for those wanting to know.

I don't know how many people really have these yet, though. There haven't been many specifics posted yet on older games, overclocking, or general use on YouTube or places like this. I didn't jump all over getting this A380; I think I waited a day, since I really wanted the A770, and the Newegg purchase went like anything else, so it seems like they had at least some reasonable number of them.
I guess the official reviews for the larger cards are on the way, so more details are coming out.
 

Registered
Thanks for the tests!

I'm pretty much only interested in the 16GB A770, for oneAPI, once Intel sort their drivers out a bit more. A 16GB GPU for <$400? Yes please.
 

Not a linux lobbyist · Discussion Starter · #5
So the 3DMark XeSS test came out, and I ran it on my A380 at 1440p, performance mode. It looks like the Port Royal test on a different track, and it gets about the same fps (without XeSS) when you run it at the default 1440p.
[Screenshots: 3DMark XeSS test results]

It looks pretty much the same at a casual glance, just 3x as fast. I wish XeSS worked this well elsewhere.
I also ran the rest of the feature tests (not including DLSS) while I was at it.
[Screenshot: 3DMark feature test results]

And the reason my A380 is paired with a lowly G6900 is that I put my 3080 back in my main PC, and this ITX G6900 box is the future home of my 12700K when I get a 13900K. The G6900 was $40 with a cooler, and all of the platform stuff works, so I just got it to set things up ahead of time and see how bad two Alder Lake threads at 3.4 GHz are. They're a little slower than an i3-4130 in real workloads, if anyone is curious. The extra two threads in the i3 help a lot, even if they are slower.
 
Not a linux lobbyist · Discussion Starter · #6
The A770 and A750 should be released tomorrow so it seems a good time for me to post my thoughts on Arc Alchemist's performance in games.

1. The drivers still have bugs. Nothing like mGPU levels, but certainly more than AMD. That probably won't be a big deal if you are expecting some, like with a Bethesda game.
2. The drivers seem to have poor scheduling and waste more time per frame than Nvidia's, and certainly AMD's. My napkin math indicates that Arc takes about 50% more time than AMD for non-rendering tasks, whatever they are, and that this is the cause of its apparent relative strength at higher resolutions, which just masks the extra ~2.37 ms added to the average frametime. This will likely improve, but I don't know by how much. Separately, AMD seems to get about 30% more performance per TFLOP on average, probably from a mix of better scheduling and time saved by often hitting Infinity Cache instead of VRAM.

I'm not saying don't buy one; just be aware of the likelihood of worse low-res, high-refresh performance, and of shaders that seem like they could do a bit better. I still think there will be significant improvements, and that these cards will do better on new games as they come out, relative to what today's reviews show.

Here's the napkin math I did to arrive at the wasted time per frame, under a spoiler because we've all seen what it looks like when some guy fills a page with that stuff.

I took my data from TechPowerUp's review of the A770/A750: Intel Arc A770 Review - Finally a Third Competitor - Average FPS | TechPowerUp, so the averages used are of the games they tested.

I compared the A770 to the RX 6700 XT because they looked closest once the frametime got sufficiently large, and I wanted to isolate the wasted time. I compared Arc to Radeon rather than Nvidia because Nvidia has complications going on (separate GPU sections, rearranged multithreading, etc.) and Radeon seemed more similar to Arc. Radeon does have its hardware scheduler, and I'm sure that contributes, but that would just be part of Arc's wasted-time number, since Nvidia shows that drivers can make up some of that in software.

I also assumed equal GPU work per pixel, so 1440p is (2560×1440)/(1920×1080) = 1.778× the 1080p work, and 4K is 4× 1080p.

TechPowerUp's average framerates (frametimes) for the A770 were 98.8 fps (10.12 ms) at 1080p, 79.4 fps (12.59 ms) at 1440p, and 48.8 fps (20.49 ms) at 4K.
And for the 6700 XT: 130.8 fps (7.65 ms) at 1080p, 99.6 fps (10.04 ms) at 1440p, and 55.7 fps (17.95 ms) at 4K.

If you assume equal work per pixel, you can separate the frametime into a GPU portion G (the 1080p render time, which scales with resolution) and a static portion C for CPU and other non-rendering time:

For the A770:
G + C = 10.12 at 1080p
1.778G + C = 12.59 at 1440p
0.778G = 2.47, so G = 3.17 ms and C = 10.12 - 3.17 = 6.95 ms
As a check, 4G + C = 19.63 ms, which is close to the A770's measured 4K frametime of 20.49 ms.

For the 6700 XT:
G + C = 7.65 at 1080p
1.778G + C = 10.04 at 1440p
0.778G = 2.39, so G = 3.07 ms and C = 7.65 - 3.07 = 4.58 ms
And as a check, 4G + C = 16.86 ms, which is close to the 6700 XT's measured 4K frametime of 17.95 ms.

With an average estimated graphics portion of 3.07 ms, the 6700 XT is still faster than the A770's 3.17 ms, but not by much.
Where it walks away is the other part: 4.58 ms spent doing other stuff, compared to Intel's 6.95 ms.
These are just averages, but in a game like CSGO where one might get 400 fps (2.5 ms), an extra 2.37 ms of frametime takes that down to 205 fps.

And of course the numbers change with the list of games used, and my assumption of equal work per pixel may be off, but the general explanation seems to be an easy way to understand Arc's behavior relative to AMD's.
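If anyone wants to plug in their own numbers, here's a minimal Python version of the same napkin math. The frametimes are TechPowerUp's averages quoted above; the 1.778× and 4× scale factors come from my equal-work-per-pixel assumption:

```python
# Split an average frametime into a resolution-scaled GPU portion (G, the
# 1080p render time) and a static portion (C, CPU plus driver/wasted time),
# assuming GPU work scales linearly with pixel count.

SCALE_1440P = (2560 * 1440) / (1920 * 1080)  # ~1.778x the 1080p pixels
SCALE_4K = (3840 * 2160) / (1920 * 1080)     # 4x the 1080p pixels

def split_frametime(ft_1080, ft_1440, ft_4k):
    """Solve G + C = ft_1080 and 1.778G + C = ft_1440, then predict 4K."""
    g = (ft_1440 - ft_1080) / (SCALE_1440P - 1)  # GPU portion at 1080p
    c = ft_1080 - g                              # static CPU/overhead portion
    predicted_4k = SCALE_4K * g + c              # sanity check vs measured 4K
    return g, c, predicted_4k

# TechPowerUp average frametimes in ms (1080p, 1440p, 4K)
for name, fts in [("A770", (10.12, 12.59, 20.49)),
                  ("6700 XT", (7.65, 10.04, 17.95))]:
    g, c, pred = split_frametime(*fts)
    print(f"{name}: G = {g:.2f} ms, C = {c:.2f} ms, "
          f"predicted 4K = {pred:.2f} ms vs measured {fts[2]:.2f} ms")

# The CSGO example from above: add the A770's extra ~2.37 ms of overhead to a
# 2.5 ms (400 fps) frame and you land at about 205 fps.
print(f"{1000 / (1000 / 400 + 2.37):.0f} fps")
```

Running it reproduces the G and C values above and the 4K sanity checks.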

I like gaming at 60 fps, I'm OK with reducing some settings, I have a backup GPU, and I like playing with new hardware that will likely get better over time, so I'm pretty much the perfect match for these things.
Somebody who plays high refresh and demands perfection out of the box really isn't. But who am I to judge?
 