Well, looks like I have another PhysX title to enjoy.
Originally Posted by Mopar63
Originally Posted by Seven7h
Dead wrong. PhysX is nothing like what it was 6-7 years ago. It has gained lots of new effects and been mostly rewritten. The APEX API and its effects were all invented under NVIDIA: cloth, hair, fluid, etc. What was there before was particles and rigid bodies, and even the stuff that existed when they were purchased has been completely rewritten. It is always evolving, and NVIDIA is paying those salaries. There have also been PhysX effects and tech that originated from engineers who were always NVIDIA employees. At this point it's fair to say that most everything was developed under NVIDIA rather than Ageia.
This is partially incorrect; the base for everything in PhysX was developed before NVidia bought them. NVidia then took that work, moved it onto the CUDA platform, and optimized it. I'm not saying NVidia did not move it forward leaps and bounds, but they did not "invent" it. It's the same with SLI: the entire concept was actually taken from the tech developed by 3DFX, after they were bought by NVidia. The funny thing is that for years NVidia told everyone SLI was a gimmick. Then they reintroduced it and claimed to have created it.
This is partially incorrect.
3DFX had Scan-Line Interleaving, which was prone to a variety of old-school issues: PCI bus communication, IRQ interrupts, space, heat, driver support, no application support, and poor scaling.
nVidia has Scalable Link Interface. It was probably inspired by the earlier 3DFX idea of multiple video cards, but it is a completely different technology.
Scan-Line Interleaving was a niche product with next to zero market penetration, which by definition makes it a gimmick.
Next, the whole "why does AMD not do this or that" question: the reason, in the end, is money, and they do not have it. NVidia dumps a LOT of money every year into the gaming world just to make sure developers are using their products. Even with all the money they dump, however, you can see that only a fraction of developers get onboard.
While I agree PhysX is not in many titles, in all honesty I don't see Havok running around in every other title either. I think you are forgetting what has actually been driving the last 6 years of gaming... consoles. Weak, anemic consoles (by PC standards), whose hardware predates the whole PhysX / OpenCL standard hardware push. If you must know, PhysX has additionally been licensed by both Sony and Microsoft for their respective consoles. While the details are sparse, I anticipate it will be a CPU-powered implementation of the API. That being said, we will hopefully see a lot more utilization of physics in games.
Finally, let's get to the question of why AMD does not have support. The myth is that AMD was offered PhysX by NVidia. WRONG. In 2008 a group outside of either company began work on a project to see if CUDA, and thus PhysX, could be implemented on AMD cards. The group sent messages to AMD and NVidia asking for help. In AMD's case they wanted information on direct chip programming, as well as for AMD to supply them with video cards. AMD declined for a number of reasons, but mostly because they expected NVidia to shut down the project.
Now it is important to note the timing of the events here. The group, after all this has played out with AMD, is suddenly and very quietly given an offer of help with this development from NVidia, but since AMD will not play ball the point is moot. The group goes public with this, but the matter is already settled.
NVidia did not offer anything of value when you read between the lines. They knew AMD would not work with this before they offered to help. Up until that point they had not bothered to respond at all. This was a "political" move to make them look like they held the high ground; the problem is any sod with a brain knows it for what it is. If NVidia was open to this, why do NVidia drivers specifically target AMD cards and shut off PhysX support on an NVidia card if an AMD card is present? Why not develop PhysX to work with a stand-alone PhysX card? Why are their AIB partners forbidden to sell an NVidia chip on a card designed for nothing but a PhysX or CUDA co-processor?
This is a lot of speculation, unless you have a source.
Now finally, the claim that NVidia keeps PhysX to itself because they are out to make money is only partially true. You see, right now PhysX does NOT make any money. In fact, PhysX costs NVidia money; they practically have to pay developers to use it instead of more open alternatives. If they offered even a lightly priced license for a very limited form of CUDA just to make use of PhysX, they would make more money from PhysX being used. The issue, however, is not PhysX; it is CUDA. CUDA is much like what AMD fought Intel over when it came to optimized software: if you pull the GPU computing out of CUDA and into OpenCL or DirectCompute, something wild happens, NVidia gets spanked.
I think you are confused. The cards make nVidia money. Not the drivers, nor CUDA. The cards. Your argument is analogous to tires not making a car company money. It's a part of the car. It is thus up to nVidia to show us the value of the parts of the whole. Marketing is one way; support for developers is an even bigger one to me.
In reference to nVidia getting spanked, you must not remember the performance of the GTX 580, the Quadro lineup, or the Tesla lineup. If you want to be correct and unbiased, what you really should have said is: on their consumer gaming video cards, AMD has better non-gaming performance. That is true of the current generation, but by no means of every generation.
CUDA and OpenCL are equally fast when coded for the targeted architecture.
If you must know, the OpenCL standard was, for a long time, buggy and unstable.
Although that has changed in recent years, CUDA has been available longer and has a longer record of professional stability. An example was Adobe dropping OpenCL support in CS5 while retaining CUDA support.
Since these are compute languages for GPU-based applications, performance is really limited by the card. If code in one doesn't run as well as, say, code in another language on the same card, that is down to the programmer.
CUDA was designed as a GPGPU programming language to make it easier for developers to try new things and to open up their cards to a wider market of flexible applications.
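To make that concrete, here is a minimal sketch (a hypothetical vector-add example I'm making up for illustration, not taken from any title or SDK) of the kind of kernel both APIs express in essentially the same way. The host-side setup differs between CUDA and OpenCL, but the work the GPU does is identical, so the hardware, not the language, sets the ceiling.

// vecadd.cu -- minimal CUDA sketch for illustration only.
// An OpenCL version would use a __kernel function with get_global_id(0)
// instead of blockIdx/threadIdx, but the per-element work is the same.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch: memory-bound work like this runs at whatever the card's
    // bandwidth allows, regardless of which API queued it up.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

Write the same thing as a well-tuned OpenCL kernel and run it on the same card, and you should see comparable performance; when you don't, it is usually the programmer or the driver, not the language.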
On a level playing field in the world of GPU computing, a world that admittedly NVidia helped to form, they are not the leaders. The only lead they have is through proprietary code; open that code and suddenly they are second tier.
I am sorry, but this is false. They are the financial GPU leaders of the entire industry (even with approximately 80% of AMD's manpower). Windows isn't open source either; by your logic that would automatically make Microsoft second rate too. You haven't really provided much solid fact so much as a distrust of closed-standard operations, which is how most of the community is actually run.
Companies make their success off of their own hard work, money, and R&D. They don't owe you or anyone else their source code. If you don't like that and prefer AMD's style, then that's fine. You don't have to fantasize about nVidia as an evil entity and make up a story to convince yourself. AMD is responsible for its own success and failure; it's about time their fans realized that. Nothing is preventing AMD's success but AMD, and it has nothing to do with what nVidia does or doesn't do, or which standards it uses.

Edited by RagingCain - 8/14/13 at 8:25am