
[AC] DirectX 12 Gives a Boost of 330% to Old Hardware

9K views 94 replies 56 participants last post by Wuest3nFuchs
#1 ·

Quote:
A Reddit user managed to get his hands on the preview build of Windows 10 with DirectX 12. The user tested DirectX 12 with his GeForce GTX 670 and Intel i7-2600K, which produced astonishing results. He claims that his test gave him a 400% boost in draw call throughput.

In the image below, his single-threaded result on DirectX 11 was 1,515,965 draw calls, whereas multi-threaded it was 2,532,181 draw calls; when he switched to DirectX 12 the number of draw calls increased to 8,562,158, which is more than a 330% increase in performance.

"That's the kicker part about this bench. There's no actual point score. All it's doing is increasing the number of draw calls by increasing scene complexity. It just keeps going until the framerate drops to 30, then notes the calls/sec and bails. Since it's only issuing calls for primatives (apparently anyways) it's actually giving you a solid idea of how raw output is limited by the number of draw calls that can be dispatched."
Source
Reddit Source (main source just grabs what is needed)

Damn, if this is true the 330% is big, really big lol. The performance jump seems too big to be true and I really can't wait. But I keep thinking about how developers will adapt to DX12 and whether they'll be able to use it correctly. It should be easier, but I wonder how that new OpenGL API (Vulkan) is going to do.
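For anyone curious, the test logic described in the quote above boils down to a loop like this rough sketch (my own stand-in C++: submitDrawCalls and its fake per-call CPU cost are invented for illustration, this is not 3DMark's code):

Code:
#include <chrono>
#include <cstdio>

// Hypothetical stand-in for submitting one frame's worth of draw calls.
// The inner loop fakes a fixed CPU cost per call, which is exactly the
// overhead DX12 cuts; the real test goes through D3D11/D3D12 instead.
static void submitDrawCalls(int calls) {
    volatile long sink = 0;
    for (int i = 0; i < calls; ++i)
        for (int j = 0; j < 200; ++j) sink = sink + j;
}

int main() {
    int callsPerFrame = 10000;
    double fps = 1000.0;
    // Raise scene complexity (draw calls per frame) until the framerate
    // drops to 30, then note the sustained calls/sec and bail.
    while (fps >= 30.0) {
        auto t0 = std::chrono::steady_clock::now();
        submitDrawCalls(callsPerFrame);
        auto t1 = std::chrono::steady_clock::now();
        fps = 1.0 / std::chrono::duration<double>(t1 - t0).count();
        callsPerFrame += 10000;
    }
    std::printf("draw calls/sec at 30 FPS: %.0f\n", callsPerFrame * fps);
    return 0;
}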
 
#5 ·
Quote:
Originally Posted by zealord View Post

Is it just me or do we have quite a lot of duplicate news of old articles lately?
Ah, that wasn't a duplicate. The one you saw a while back was that dev discussing DX12 and the Xbox One.
 
#7 ·
Quote:
Originally Posted by Cakewalk_S View Post

Well I think I'll have to give this a try on my laptop... Tried to install Windows 10 unsuccessfully so far... Will try again

Hopefully a performance boost for an old i5-560M and GT 420M
What happens when you try and install it?
 
#9 ·
Quote:
Originally Posted by Said Nobody View Post

Quote:
Originally Posted by Cakewalk_S View Post

Well I think I'll have to give this a try on my laptop... Tried to install Windows 10 unsuccessfully so far... Will try again

Hopefully a performance boost for an old i5-560M and GT 420M
What happens when you try and install it?
Not even there yet, lol. Working on getting the ISO onto the flash drive. I have done this before, lol.
 
#10 ·
Quote:
Originally Posted by Said Nobody View Post


Source
Reddit Source (main source just grabs what is needed)

Damn, if this is true the 330% is big, really big lol. The performance jump seems too big to be true and I really can't wait. But I keep thinking about how developers will adapt to DX12 and whether they'll be able to use it correctly. It should be easier, but I wonder how that new OpenGL API (Vulkan) is going to do.
Draw call throughput is all about CPU performance, not GPU performance. This has been known since day one.

Here are the two primary reasons the jump is big:
1. Lower CPU overhead. Each draw call takes fewer CPU cycles.
2. Better multithreading. Draw calls can now be spread over multiple cores efficiently instead of being limited to one or two cores (rough sketch below).

It is NOT going to translate into 330% better FPS, unless the scene is actually trying to issue that many draw calls (which occasionally happens in RTS games with massive numbers of units on screen). It does, however, translate into being able to have more things on screen at once.

Really, this "news" isn't news. It's something that has been known about DX12, Mantle, and Vulkan for a long time now.
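To illustrate point 2 above, here's a minimal sketch of multithreaded command-list recording (plain C++ with invented placeholders; recordCommandList just stands in for filling a per-thread ID3D12GraphicsCommandList, so treat this as an illustration rather than real D3D12 code):

Code:
#include <cstdio>
#include <thread>
#include <vector>

// Placeholder for recording one command list's worth of draw commands.
// In real D3D12 each worker records into its own command list; under
// D3D11 this work was effectively serialized on the immediate context.
static void recordCommandList(int worker, int drawCalls) {
    std::printf("worker %d recorded %d draw calls\n", worker, drawCalls);
}

int main() {
    const int workers = 4;             // e.g. one recording thread per core
    const int totalDrawCalls = 100000;
    std::vector<std::thread> pool;
    for (int w = 0; w < workers; ++w)
        pool.emplace_back(recordCommandList, w, totalDrawCalls / workers);
    for (auto& t : pool) t.join();
    // The finished lists would then be submitted to the GPU queue together.
    return 0;
}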
 
#11 ·
Quote:
Originally Posted by Said Nobody View Post


Source
Reddit Source (main source just grabs what is needed)

Damn, if this is true the 330% is big, really big lol. The performance jump seems too big to be true and I really can't wait. But I keep thinking about how developers will adapt to DX12 and whether they'll be able to use it correctly. It should be easier, but I wonder how that new OpenGL API (Vulkan) is going to do.
A 330% boost in draw calls... how many times do people need to clarify that that much of an improvement in draw calls =/= that much of an improvement in FPS?
 
#13 ·
Not to wander too far off topic: what games are slated for DX12 this year, or at least in the near future? I understand that DX12 should bring some serious improvements all round, but if there aren't any games slated for DX12 for the foreseeable future, what do potential improvements for older hardware mean? Even my R9 290 is considered old at this point, as are the 9000 series and NVIDIA's 600 and 700 series.
 
#14 ·
Quote:
Originally Posted by xSociety View Post

Please don't expect that big of a boost in real world applications.
Exactly what I was going to say: none of this will provide the gains gamers will be expecting. Everything is geared toward purchasing new hardware and sustaining the older stuff just enough for the already-made hardware still in warehouses to sell out before it's rendered useless.
 
#15 ·
Quote:
Originally Posted by Said Nobody View Post

Quote:
Originally Posted by zealord View Post

Is it just me or do we have quite a lot of duplicate news of old articles lately?
Ah, that wasn't a duplicate. The one you saw a while back was that dev discussing DX12 and the Xbox One.
acutegaming is just pulling random Reddit feeds as if they were newsworthy. There are a few threads that have shown the gains on older hardware already, right here on OCN.
Hell, even the on-board IGP on the 2500K gets a 200%+ boost, from 350K(ish) to just over 1 million.

http://www.overclock.net/t/1548140/pc-per-3dmark-api-overhead-feature-test-early-dx12-performance/0_100#post_23724555
Would be interesting if this could revive the old 2xx series... (It won't)
Quote:
Originally Posted by Phaethon666 View Post

What games are slated for DX12 this year or at least in the near future?
Batman: Arkham Knight and The Witcher 3? Weren't they planning to eventually implement support to some degree?
 
#16 ·
http://www.3dmark.com/aot/15644

I've been playing with that API test for months. It's broken. You won't see those kinds of real-world results.

How have news editors and contributors not learned about this after the first 5 bogus articles?
 
#20 ·
It can translate into the real world, and it will, given developer time and resources.

A prime example is playing something like DayZ Standalone or Arma 2 (DX9, the prime offender) at maxed settings with maxed shadows: walking around one of the cities at around 20 FPS, that's all draw calls. Porting the map to Arma 3 with DX11 alleviates the situation and nets you around 10 more frames per second, a 25-30% kind of gain that ties in pretty tightly with the jump from the 750,000 draw call limit of DX9 to the 1,100,000ish of DX11.

Draw calls cover everything on the screen, so as you add shadows, objects, lighting, etc., you can quickly hit the wall with something like DX9. Increasing the draw call budget allows many more things in the scene at once: longer LOD distances, greater draw distance in general, more lights, shadows, etc.

As for the real world: the gains are real, but as soon as the draw call limit is unleashed you're once again pushing against the GPU wall. That's why real-world results will initially translate into only 1.5-2x, sometimes more and sometimes flat depending on the circumstances and the GPU limit, until we have more GPU power.

TL;DR: it's only as real-world as developers' willingness to add stuff, LOD, and draw distance, and only where there's no GPU limit.
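To put the per-frame budget in perspective, quick arithmetic on the figures in this thread (assuming a 30 FPS target): ~750,000 calls/sec under DX9 is roughly 25,000 draw calls per frame, ~1,100,000 under DX11 is roughly 37,000, and the OP's ~8,562,000 DX12 number would be roughly 285,000 per frame. That per-frame headroom is where "more things on screen at once" comes from.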
 
#21 ·
Quote:
Originally Posted by un1b4ll View Post

How does a benefit in draw calls translate into a real-world scenario like UE4 or Frostbite?
Draw calls are massive to the performance of a game/engine. When people make games they have to worry about draw calls all the time! I'm serious about this. When you alleviate that and allow more draw calls to occur, FPS may not improve but the visuals will: you can have more object faces actively being rendered, in other words CGI-like graphics. Resolution will have a smaller effect on graphics performance, and the impact of the CPU on the GPU will be greatly lessened as well.

As far as FPS is concerned, it'll improve the base frame rate to roughly what the average frame rate would be if the CPU weren't bottlenecking it. In most eyes that's pretty damn huge. Like XBone-possibly-getting-more-1080p-games huge, though 1080p isn't a huge jump, FYI.
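Since "worrying about draw calls" may sound abstract, here is a sketch of the classic mitigation developers reach for, instancing: one call draws many copies instead of one call per copy (draw and drawInstanced are made-up stand-ins, not any real engine's API):

Code:
#include <cstdio>

struct Mesh { const char* name; };

// Hypothetical per-object submission: costs one draw call each.
static void draw(const Mesh&) { /* ... one draw call ... */ }

// Hypothetical instanced submission: one draw call renders `count` copies.
static void drawInstanced(const Mesh&, int count) { (void)count; }

int main() {
    Mesh tree{"tree"};
    const int n = 10000;
    for (int i = 0; i < n; ++i) draw(tree);  // naive: 10,000 draw calls
    drawInstanced(tree, n);                  // instanced: 1 draw call
    std::printf("naive: %d calls vs instanced: 1 call\n", n);
    return 0;
}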
 
#22 ·
Quote:
Originally Posted by Said Nobody View Post


Source
Reddit Source (main source just grabs what is needed)

Damn if this is true the 300% is big, really big lol. The performance jump is too big and I really cant wait. But I always think how developers will adopt with DX12 and will they be able to use it correctly. It should be easier, but I wonder how that new OpenGL api is going to do.
It's about draw calls, not FPS


Sure, these calls are limiting in certain ways on DX11 and older, hence AMD made Mantle and has a similar low-level API on the consoles, since consoles nowadays are practically low-end PCs with an AMD GPU.
 
#23 ·
Draw calls can be very important. IIRC it's something like one draw call per object, another per texture (as in a flat image), another per material/shader (something that defines how light affects the surface), multiplied per light source (as in # draw calls = materials x light sources), etc. It can add up very quickly, especially when lots of lights and unique objects are involved.
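As a back-of-the-envelope illustration of how that multiplication adds up (all numbers invented, and assuming a simple forward renderer):

Code:
#include <cstdio>

int main() {
    // Rough forward-rendering estimate: each visible object is drawn once
    // per material it uses, per light affecting it. Numbers are made up.
    int objects = 500, materialsPerObject = 2, lights = 8;
    long long drawCalls = 1LL * objects * materialsPerObject * lights;
    std::printf("~%lld draw calls per frame\n", drawCalls);  // ~8,000
    return 0;
}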
 
#24 ·
Quote:
Originally Posted by geoxile View Post

Draw calls can be very important. IIRC it's something like one draw call per object, another per texture (as in a flat image), another per material/shader (something that defines how light affects the surface), multiplied per light source (as in # draw calls = materials x light sources), etc. It can add up very quickly, especially when lots of lights and unique objects are involved.
Yes, but only for things that are on screen. In a first-person or single-player RPG type game, you typically don't get too many objects on screen due to the FOV, though there are some notable exceptions. But exceptions they are. You are much more likely to run into massive numbers of objects in RTS games and MMORPGs.
 
#25 ·
If it's 25% in real life, I'll be so, so, so happy.
 
#26 ·
I'm surprised to see that much improvement on a GTX 670; doesn't Kepler only support the lowest tier of DX12 features? I would expect some good boosts on older AMD hardware, though.
 