Originally Posted by Porthios
Yeah, I think that is just you. Have you ever tried playing a game that was designed, from the bottom up for physics hardware and then tried playing it with CPU/software mode? It's a slide show, each and every single time. Don't believe me? Go play in the Crysis editor and stack up 500 barrels and then see what happens when they start to tumble. Go download Endorphin, from natural motion and then start to play around with some of the more complex physics scenarios. Go check out some of the CPU vs. GPU comparisons in physics demonstrations. Jesus Christ, how you can have not seen the enormous leaps in performance when we switch over to GPU physics processing? This is about the most uneducated post in this entire thread, and it was made by a mod, no less. Go get educated.
I hope that GPU physics processing will someday soon allow us to go far beyond the equivalent physics of 500 barrels. But that's something that the CPUs of today will not allow us to do without taking severe performance hits.
You're wrong and I agree with ENTERPRISE.
First of all, most games designed around physics acceleration were mostly marketing for Ageia cards. Plenty of other games, like HL2, had decent physics that looked realistic and ran well on single-core CPUs.
I have yet to see one game with PhysX that blew me away. Also, early reports said that running hardware physics actually decreased FPS in games.
The only game where it ran well was CellFactor, but that game was heavily sponsored by Ageia.
But in comes nVidia and suddenly PhysX is magically the way to go, a lot of great games are using it, blah, blah, blah...
Nonsense! There is plenty that can be done on the CPU. GPUs are already overstressed, so dumping more workload on them is not going to offer much of a performance increase, if any. We need to balance the workload, not dump all of it in one place (which is exactly what nVidia wants, so they can come along and claim the CPU is obsolete and that their vision of a future with no CPUs is the best).
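For what it's worth, the "500 barrels" scenario everyone keeps citing is expensive mostly because of how collision checks scale, not because CPUs are inherently useless at physics. A toy sketch (this is a naive all-pairs broad phase, not code from any real engine) shows the pair count growing quadratically with object count, which is exactly the kind of work engines split between broad-phase culling on the CPU and heavier solving elsewhere:

```python
def naive_collision_checks(n):
    """Number of pairwise checks a naive broad phase does for n rigid bodies."""
    return n * (n - 1) // 2

# Cost grows quadratically: 500 barrels is ~25x the work of 100 barrels,
# which is why naive stacking tests choke a single-threaded solver.
for n in (100, 500, 1000):
    print(n, naive_collision_checks(n))
```

So the argument over CPU vs. GPU is really about where this kind of workload lands, not about one side being magically faster at everything.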
P.S. Don't call people uneducated when you say stuff like this: "Jesus Christ, how you can have not seen the enormous leaps in performance when we switch over to GPU physics processing?"
What enormous leaps in performance, where? I haven't seen any - except 3DMark Vantage scores, which mean close to nothing in real-world applications where physics and 3D rendering are being done at the same time, not separately.