
[nvidia's blog] GDC - NVIDIA, AMD, Intel Explain How OpenGL Can Unlock 7-15x Performance Gains - Page 6

post #51 of 98
Quote:
Originally Posted by Alatar View Post

It might have made the announcement come faster, but the tech was already being developed way, way before Mantle was announced.

The Forza demo alone has apparently been in development since about when Mantle was announced.

They had some build of Forza 5 running on Windows way before yesterday. All of the so-called devkits at E3 2013 were just Windows 8 PCs with Nvidia GPUs in them. I dunno why they showed some "Forza tech demo" to illustrate how easy it is to port games from X1 to PC and vice versa, when they already had that almost an entire year earlier.
post #52 of 98
Quote:
Originally Posted by Mand12 View Post

Some seem to want to ensure AMD gets all credit whatsoever.

I understand neither want.

True enough; however, Mantle is out right now while DX12 is more than a year away.
post #53 of 98
Quote:
Originally Posted by Forsakenfire View Post

The fact that all three of these companies are talking about this is a decent sign that it could actually happen. I'm interested in seeing how this all develops.
It will develop into Microsoft throwing a bunch of cash at nVidia to erase the letters G and L from their alphabet and replace them with another D and an X.
post #54 of 98
Quote:
Originally Posted by Mrzev View Post

True, just like I said with TCP: with voice it would be a huge difference. Another example is ASCI miners. If you want to farm bitcoins, ASCI miners are much more efficient than a graphics card, but I doubt you can run Crysis on one.

As for the Xeon chart... some marketing stuff is involved in that.
http://www.intel.com/content/dam/www/public/us/en/documents/performance-briefs/xeon-phi-product-family-performance-brief.pdf

Notice that there was http://ark.intel.com/products/75283/Intel-Xeon-Processor-E5-2697-v2-30M-Cache-2_70-GHz in between that and the Phi. Performance increased by 50% on that jump, going from 32nm to 22nm and from 115W TDP to 130W TDP, which is definitely a nice gain though. The Phi has 2x the TDP on the same 22nm process, so I'm pretty sure they just combined two of the E5-2697 v2's together. It took them 1.5 years to get that 50% increase, and just 0.5 years to stack two of them. Graphics cards do this too, e.g. the Radeon HD 6990 and Radeon HD 7990.
Well yes, and the reason they are is that ASIC (not ASCI; ASCII is a character encoding that maps symbols to numbers) stands for application-specific integrated circuit. So it is designed to do one specific task. Actually, even today's CPUs and GPUs are essentially ASICs, but they have various parts for each of those specific tasks. Only FPGAs can be reprogrammed, but they are slow (everything is relative; I'm talking about consumer usage).

An ASIC can never do something it wasn't designed for. A GPU or CPU, by contrast, runs standard instruction cycles through blocks optimized for them, and we can build on those cycles to compute a bigger picture.

The Phi is many x86 cores stacked together; the focus is on being parallel and an easy port from x86 code. It is Intel's shot at GPU-like compute.
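That porting story is the whole pitch: take an existing parallel x86 loop and recompile it for the card. Here is a minimal sketch of the idea using plain OpenMP in C (the array names and sizes are made up for the example; the Intel-specific offload pragma is only mentioned in a comment):

#include <stdio.h>
#include <omp.h>

#define N (1 << 20)

static float a[N], b[N], c[N];

int main(void) {
    /* Fill the inputs. */
    for (int i = 0; i < N; i++) {
        a[i] = (float)i;
        b[i] = 2.0f * (float)i;
    }

    /* With Intel's compiler, adding "#pragma offload target(mic)" above this
       loop would run it on the Phi; the loop body stays ordinary x86 C. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    printf("c[123] = %.1f, max threads = %d\n", c[123], omp_get_max_threads());
    return 0;
}

The same source builds for a normal CPU with gcc -fopenmp; the selling point of the Phi was that the loop body needs no rewrite, unlike a CUDA or OpenCL port.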
post #55 of 98
I'm now thoroughly amused at the notion of an ASCII miner...
post #56 of 98
Quote:
Originally Posted by Mand12 View Post

Some seem to want to ensure AMD gets all credit whatsoever.

I understand neither want.

Yeah, but the thing is there are a lot of people speculating about what came first, and a lot of nonsense, as if they knew the whole story.

From an ExtremeTech article (source):
Quote:
We’ve spoken to several sources with additional information on the topic who have told us that Microsoft’s interest in developing a new API is a recent phenomenon, and that the new DirectX (likely DirectX 12) will substantially duplicate the capabilities of AMD’s Mantle. The two APIs won’t be identical — Microsoft is doing its own implementation — but the end result, for consumers, should be the same: lower CPU overhead and better scaling in modern titles.
Quote:
In fact, as some of you may recall, an AMD executive publicly stated a year ago that there was no “DirectX 12” on the Microsoft roadmap. Microsoft responded to those comments by affirming that it remained committed to evolving the DirectX standard — and then said nothing more on the topic. Then AMD launched Mantle, with significant support from multiple developers and a bevy of games launching this year — and apparently someone at Microsoft decided to pay attention.

And from the NVIDIA blog (source):
Quote:
Our work with Microsoft on DirectX 12 began more than four years ago with discussions about reducing resource overhead. For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC.

They said they started talking about it; that doesn't necessarily mean they started working on it. In fact, they have been working to get a usable implementation for the past year, not before, while Mantle had been in full development for two years before it was even announced in September 2013. There are two games that customers can use with Mantle right now, while there are none with DX12.

And this is an OpenGL thread; it would be nice to talk about that.
Edited by Neilthran - 3/22/14 at 6:14am
post #57 of 98
Quote:
Originally Posted by Neilthran View Post

They said they started talking about it; that doesn't necessarily mean they started working on it

....Uh?
Quote:
Our work with Microsoft on DirectX 12 began more than four years ago with discussions about reducing resource overhead. For the past year, NVIDIA has been working closely with the DirectX team to deliver a working design and implementation of DX12 at GDC.
post #58 of 98
Great, so now we're arguing semantics?
post #59 of 98
Quote:
Originally Posted by Mand12 View Post

....Uh?
Read your quote again. It says work began with discussions, meaning it started out as just talk. What he said was factual. Going only from this post (which may not be the entire story), they have only been working on DX12 itself for the past year.
post #60 of 98
Quote:
Originally Posted by Mrzev View Post

They are saying they will have 7-15x less overhead than DX. Not 7-15x the results or polys or shaders; less overhead. What percentage of the overall workload is overhead, anyway?

Let's apply this logic to a TCP frame. If I got 7-15x less overhead on TCP, I could save 38 * 0.9 = 34 bytes! Add that to our old payload of 1462 and we get 1496, which is really just a 2.3% increase. I know it's not that clear-cut, and with things like voice, where the payload is 64 bytes and not ~1400, yes, there will be a dramatic increase there. Not to mention the ACKs. But my point still stands.

In this day and age, no one can make huge leaps like 7-15x performance. Do you think Intel has a chip that's 5x more powerful than what's out now? If you said they had one that's 1.5-2x, I'd actually be impressed.

What surprises me is the idea that with something that has been around for so long, no one had thought of this architecture that would supposedly be way better.

This goes back along the lines of AMD's Mantle. Mantle has gains in some areas and losses in others, but have we seen anything past a 2x improvement in frame rate? No, far from it actually. The only time I think something like this will be possible is if AMD and Nvidia agree to standardize some things so developers can code closer to the actual hardware. With that, I can see a 2-3x gain, tops.
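As an aside, the frame arithmetic in the quoted post does check out. A minimal sketch reproducing it (the 38-byte overhead and 1462-byte payload figures are taken from the post above, not from any spec):

#include <stdio.h>

int main(void) {
    /* Figures taken from the quoted post. */
    const double overhead_bytes = 38.0;
    const double payload_bytes  = 1462.0;

    /* "7-15x less overhead" removes roughly 90% of it. */
    double saved       = overhead_bytes * 0.9;           /* ~34 bytes */
    double new_payload = payload_bytes + saved;          /* ~1496     */
    double gain_pct    = 100.0 * saved / payload_bytes;  /* ~2.3%     */

    printf("saved %.0f bytes, payload %.0f -> %.0f, gain %.1f%%\n",
           saved, payload_bytes, new_payload, gain_pct);
    return 0;
}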

You didn't look through the slides of the presentation this article is about at all, did you? The presentation is not about future stuff; it is a guide for programmers that shows what your code should look like to best use current features of OpenGL and its extensions. It is all about stuff that works on current machines and drivers.
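For the curious, one of the techniques those slides cover is persistent-mapped buffers: map a buffer once, keep the pointer, and stop paying per-frame map/unmap costs. A minimal sketch of that idea, assuming a GL 4.4 context and a loader such as GLEW (the helper function names and buffer size are made up for illustration; the gl* calls are the real GL 4.4 API):

#include <string.h>
#include <GL/glew.h>

#define BUF_SIZE (4 * 1024 * 1024)  /* made-up size for the example */

/* Create an immutable buffer (GL 4.4 / ARB_buffer_storage) whose mapping
   stays valid for the buffer's lifetime. */
static void *create_persistent_buffer(GLuint *buf_out)
{
    GLbitfield flags = GL_MAP_WRITE_BIT
                     | GL_MAP_PERSISTENT_BIT  /* mapping stays valid while GL uses it  */
                     | GL_MAP_COHERENT_BIT;   /* CPU writes visible without a flush    */

    glGenBuffers(1, buf_out);
    glBindBuffer(GL_ARRAY_BUFFER, *buf_out);
    glBufferStorage(GL_ARRAY_BUFFER, BUF_SIZE, NULL, flags);
    return glMapBufferRange(GL_ARRAY_BUFFER, 0, BUF_SIZE, flags);
}

/* Per frame: fence so we never overwrite data the GPU is still reading,
   then write straight through the persistent pointer. */
static void write_frame_data(void *ptr, GLsync *fence, const void *src, size_t n)
{
    if (*fence) {
        glClientWaitSync(*fence, GL_SYNC_FLUSH_COMMANDS_BIT, GL_TIMEOUT_IGNORED);
        glDeleteSync(*fence);
        *fence = NULL;
    }
    memcpy(ptr, src, n);  /* no glMapBuffer/glUnmapBuffer each frame */
    /* ... submit draws using the buffer, then fence this frame: */
    *fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
}

The slides pair this with a ring-buffer layout and per-region fences so the CPU rarely actually waits; the single fence here just keeps the sketch short.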
Edited by deepor - 3/21/14 at 2:47pm