A Reddit user managed to get his hands on a preview build of Windows 10 with DirectX 12. He tested DirectX 12 with his GeForce GTX 670 and Intel i7-2600K, which produced astonishing results: he claims the test gave him a 400% boost in draw call throughput.
In the image below, his single-threaded result on DirectX 11 was 1,515,965 draw calls and his multi-threaded result was 2,532,181 draw calls, but when he switched to DirectX 12 the number of draw calls increased to 8,562,158, which is more than a 330% increase in performance.
"That's the kicker part about this bench. There's no actual point score. All it's doing is increasing the number of draw calls by increasing scene complexity. It just keeps going until the framerate drops to 30, then notes the calls/sec and bails. Since it's only issuing calls for primitives (apparently anyways) it's actually giving you a solid idea of how raw output is limited by the number of draw calls that can be dispatched."
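The methodology quoted above can be sketched as a tiny loop: keep adding draw calls until the frame rate dips below 30, then report the sustained calls per second. This is only a toy model with an invented linear cost (`dispatch_budget_calls_per_sec` is a made-up parameter standing in for how fast the CPU/driver can issue calls), not the benchmark's actual code:

```python
def run_overhead_test(dispatch_budget_calls_per_sec, fps_floor=30.0):
    """Toy model of the benchmark loop: grow the per-frame draw call
    count until FPS falls below the floor, then report calls/sec."""
    calls_per_frame = 1_000
    while True:
        # Assume frame time is dominated purely by call-dispatch cost.
        frame_time = calls_per_frame / dispatch_budget_calls_per_sec
        if 1.0 / frame_time < fps_floor:
            # Note the sustained calls/sec at the floor and bail.
            return calls_per_frame * fps_floor
        calls_per_frame += 1_000

# A DX11-like budget of ~1.5M calls/sec reports a score near that budget.
score = run_overhead_test(1_500_000)
```

The "score" is really just the dispatch budget being measured back out, which is the poster's point: the test isolates how many calls can be issued, not how fast anything renders.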
Damn, if this is true the 300% is big, really big, lol. The performance jump is huge and I really can't wait. But I keep wondering how developers will adapt to DX12 and whether they will be able to use it correctly. It should be easier, but I also wonder how that new OpenGL successor API is going to do.
Draw call throughput is really about CPU performance, not GPU performance. This has been known since day one.
Here are the two primary reasons the jump is big: lower CPU overhead (each draw call takes fewer CPU cycles) and better multithreading (draw calls can now be spread across multiple cores efficiently instead of being limited to one or two cores).
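Those two effects can be put into a back-of-the-envelope throughput model. Every number here is invented for illustration (the cycle costs and core counts are not measured API figures):

```python
def draw_calls_per_sec(cycles_per_call, usable_cores, clock_hz=4_000_000_000):
    """Calls/sec = cores that can submit work x clock rate / cost per call.
    Deliberately crude: ignores synchronization overhead and memory effects."""
    return usable_cores * clock_hz / cycles_per_call

# DX11-style: one effective submission thread, higher per-call cost.
dx11 = draw_calls_per_sec(cycles_per_call=2_500, usable_cores=1)
# DX12-style: four threads building command lists, cheaper calls.
dx12 = draw_calls_per_sec(cycles_per_call=1_800, usable_cores=4)
```

Even with only a modest per-call saving, the multithreading factor dominates the product, which fits the benchmark's multi-threaded numbers scaling far better under DX12 than DX11.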
It is NOT going to translate into 300% better FPS unless the scene actually issues that many draw calls (which occasionally happens in RTS games with massive numbers of units on screen). It does, however, translate into being able to have more things on screen at once.
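One way to see why the FPS gain depends on the scene: frame rate is set by whichever side finishes a frame last. A minimal sketch with invented timings:

```python
def fps(cpu_ms, gpu_ms):
    """The slower of CPU submission time and GPU render time sets the frame rate."""
    return 1000.0 / max(cpu_ms, gpu_ms)

# CPU-bound scene (e.g. huge unit counts): cutting draw call cost helps a lot.
cpu_bound_before = fps(cpu_ms=33.0, gpu_ms=12.0)  # ~30 FPS
cpu_bound_after  = fps(cpu_ms=8.0,  gpu_ms=12.0)  # ~83 FPS, now GPU-bound
# GPU-bound scene: the same CPU saving changes nothing.
gpu_bound_before = fps(cpu_ms=10.0, gpu_ms=25.0)  # 40 FPS
gpu_bound_after  = fps(cpu_ms=3.0,  gpu_ms=25.0)  # still 40 FPS
```

Cheaper draw calls only move the CPU line, so the benefit shows up exactly where the CPU was the bottleneck and nowhere else.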
Really, this "news" isn't news. It's something that has been known about DX12, Mantle, and Vulkan for a long time now.
Not to wander too far off topic: what games are slated for DX12 this year, or at least in the near future? I understand that DX12 should bring some serious improvements all around, but if there aren't any games slated for DX12 in the foreseeable future, what do potential improvements for older hardware mean? Even my R9 290 is considered old at this point, as are the 9000 series and Nvidia's 600-700 series.
Exactly what I was going to say: none of this will provide the gains they think gamers will be expecting. Everything is geared toward purchasing new hardware, and toward sustaining older stuff just enough for the already-manufactured hardware still in warehouses to sell out before it's rendered useless.
acutegaming is just pulling random Reddit feeds as if they were newsworthy. There are a few threads that have already shown the gains on older hardware right here on OCN.
Hell, even the on-board iGPU on the 2500K gets a 200%+ boost, from 350k(ish) to just over 1 million draw calls.
It can translate into real-world gains, and it will, given developer time and resources.
A prime example is playing something like DayZ Standalone or Arma 2 (DX9, the prime offender) at maxed settings with maxed shadows: walking around one of the cities runs at around 20 FPS, and that's all draw calls. Porting the map to Arma 3 with DX11 alleviates the situation and nets you around 10 more frames per second, which ties in pretty tightly with the roughly 25-30% jump from DX9's 750,000 draw call limit to DX11's 1,100,000-ish.
Draw calls translate to everything on the screen, so as you increase shadows, objects, lighting, etc., you can quickly hit the wall with something like DX9. Increasing the draw call budget allows many more things in the scene at once: longer LOD distances, greater draw distance in general, more lights, shadows, etc.
As for the real world, the gains are real, but as soon as the draw call limit is unleashed you're once again pushing against the GPU wall. That's why real-world results will initially translate into only 1.5-2x, sometimes more and sometimes flat in GPU-limited circumstances, at least until we have more GPU power.
TL;DR: it's only as real-world as developers are willing to add stuff, LOD, and draw distance, and only where there is no GPU limit.
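The per-frame budget implied by the limits quoted in the post above works out by simple division (the 750,000 and 1,100,000 calls/sec figures are the post's; the 30 FPS target matches the benchmark's floor):

```python
def per_frame_budget(calls_per_sec, target_fps):
    """Draw calls you can afford per frame at a given frame-rate target."""
    return calls_per_sec // target_fps

dx9_budget  = per_frame_budget(750_000, 30)     # 25,000 calls per frame
dx11_budget = per_frame_budget(1_100_000, 30)   # 36,666 calls per frame
```

That extra ~11,000 calls per frame is the headroom the poster is describing: it gets spent on more objects, shadows, and draw distance rather than on raw FPS.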
Draw calls matter massively to the performance of a game/engine. When people make games they have to worry about draw calls all the time; I'm serious about this. When you alleviate that and allow more draw calls to occur, FPS may not improve but the visuals will: you can have more object faces actively being rendered, in other words CGI-like graphics. Resolution will have less of an effect on graphics performance, and the impact of the CPU on the GPU will be greatly lessened as well.
As far as FPS is concerned, it'll improve the base frame rate to roughly what the average frame rate would be if the CPU weren't bottlenecking it. In most eyes that's pretty damn huge. Like "the XBone possibly getting more 1080p games" huge, though 1080p isn't a huge jump, FYI.
Sure, these calls are limiting in certain ways on DX11 and older; hence AMD made Mantle, and similar low-level APIs already exist on consoles, since consoles nowadays are practically a low-end PC with an AMD GPU.
Draw calls can be very important. IIRC it's something like one draw call per object, another per texture (as in a flat image), another per material/shader (something that defines how light affects the surface) per light source (as in # draw calls = materials x light sources), etc. It can add up very quickly, especially when lots of lights and unique objects are involved.
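The materials-times-lights rule of thumb above compounds quickly. A sketch, assuming a simple forward renderer where each object is drawn once per material per affecting light (real engines batch and instance aggressively, so treat this as an upper bound):

```python
def calls_for_scene(objects):
    """objects: list of (material_count, light_count) pairs, one per object.
    One draw call per material per affecting light, per the rule above."""
    return sum(materials * lights for materials, lights in objects)

# 200 props with 2 materials each, lit by 4 lights: already 1,600 calls.
small_scene = [(2, 4)] * 200
total = calls_for_scene(small_scene)  # 1600
```

At a DX9-era budget of ~25,000 calls per frame at 30 FPS, a few thousand props like this would already eat the whole frame, which is why artists merge materials and engines batch objects.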
Yes, but only for things that are on screen. In a first person or single player RPG type game, you typically don't get too many objects on screen due to FOV, though there are some notable exceptions. But exceptions they are. You are much more likely to run into massive numbers of objects in RTS and MMORPGs.
I'm surprised to see that much improvement on a GTX 670; doesn't Kepler only support the lowest tier of DX12 features? I would expect some good boosts on older AMD hardware, though.
Overclock.net