
[ExtremeTech]CUDA/PhysX+ATI - Page 3

post #21 of 31
Quote:
Originally Posted by Jarhead View Post
I'm very glad that they aren't. Nvidia should not be the one to set the industry standard on this one, and neither should AMD or Intel. What I'm hoping happens is that DAAMIT and Nvidia keep dickering about this until OpenCL takes the lion's share of the GPGPU tasks, thereby setting a standard independent of any one company, which all of them have to support. That way everybody wins, especially the consumers.
I don't really care who gets it done as long as the platforms perform close to equally and it is easy to program for (CUDA, for instance, is based on C, which isn't hard at all).
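To give an idea of how C-like CUDA is, here's a minimal vector-add sketch (my own illustration, not from the article; names like vecAdd are made up, and error checking is omitted):

Code:
// Minimal CUDA vector add -- a sketch for illustration only.
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Each thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void)
{
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Plain C on the host side.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = (float)i; hb[i] = 2.0f * i; }

    // Allocate device memory and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    // Copy the result back and spot-check it.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[10] = %f (expect 30.0)\n", hc[10]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

Aside from the __global__ qualifier, the <<<blocks, threads>>> launch syntax, and the explicit host/device copies, it reads like ordinary C.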
post #22 of 31
Quote:
Originally Posted by Cheetos316 View Post
Even if CUDA were available for license, I doubt that AMD would pay the licensing fees, since AMD is all about keeping platforms open and forming alliances and partnerships with others who are also reluctant to pay licensing fees and who favor open platforms. If push came to shove and everyone and their mothers were using Havok's physics engine, with Nvidia losing market share because of it, I'm sure Nvidia would drop CUDA and figure out a way to get the Havok engine working on their GPUs.
The thing is, it is open for license, for "pennies per GPU" apparently. Nvidia has opened the standard, and anyone can write CUDA drivers; heck, they even encourage ATI to adopt it. Right now Intel is almost more their enemy on this front, and having an open standard for all graphics cards is beneficial to both ATI and Nvidia.
post #23 of 31
So basically AMD wants to keep hardcore multithreaded processing, along with physics, on CPUs as long as possible, so they're bogging down Nvidia, and consumers, by raining on their parade and playing favorites. Nice going, AMD. Now go f*** yourself.
Edited by Porthios - 7/8/08 at 5:57am
post #24 of 31
Quote:
Originally Posted by Jarhead View Post
I'm very glad that they aren't. Nvidia should not be the one to set the industry standard on this one, and neither should AMD or Intel. What I'm hoping happens is that DAAMIT and Nvidia keep dickering about this until OpenCL takes the lion's share of the GPGPU tasks, thereby setting a standard independent of any one company, which all of them have to support. That way everybody wins, especially the consumers.

AMD hasn't called Nvidia to talk about supporting CUDA, there is no doubt in my mind. That would be handing Nvidia the keys to the whole kingdom. CUDA would become the de facto standard in the industry, and then Nvidia would be able to charge any price, set requirements for its use, and basically behave like any monopoly. Far better that we have the current chaos, as it gives the open standards a chance to gain an early foothold.
That's no different than what Intel does with its intellectual property, yet we still got, at least at one point, some good competition between CPU companies. The same would apply to this situation. Once Nvidia and ATI commit to either OpenCL or CUDA, they can stop playing grabass and focus on making awesome hardware. Right now we're stuck untying a knot, and until that knot is untied, developers are going to be confused and hardware advancement won't be focused on exploiting the benefits of CUDA/OpenCL. Basically, this is slowing the industry down. How about we figure out which method is best and then just commit to the damn thing.
post #25 of 31
Quote:
Originally Posted by trueg50 View Post
The thing is, it is open for license, for "pennies per GPU" apparently. Nvidia has opened the standard, and anyone can write CUDA drivers; heck, they even encourage ATI to adopt it. Right now Intel is almost more their enemy on this front, and having an open standard for all graphics cards is beneficial to both ATI and Nvidia.
You only view it that way because Nvidia is playing dirty with their toys so things like Vantage now have PhysX.
post #26 of 31
Quote:
Originally Posted by stargate125645 View Post
You only view it that way because Nvidia is playing dirty with their toys so things like Vantage now have PhysX.


That's what they say in the article (if you believe them; I'm on the fence on whether it's true or just BS). I couldn't care less about Vantage, and PhysX is pretty much useless at the moment, but GPGPU work is very exciting and beneficial for everyone.

I just think that there has to be a standard for it (like with x86), and at the moment there is none: Nvidia uses their own, ATI uses their own. If Nvidia has the greater market share (for CUDA usage) and the better product, then they should push for it to become the standard. It's like having Intel and AMD make two completely different CPUs, say Intel making PowerPC and AMD making x86; why should users have to choose one and get the benefits of only one?

Screw Larrabee; how nice would it be if every ATI, S3, and Nvidia GPU had the ability to tap into GPGPU? Freaking sweet, I'd say. It would also give Nvidia competition on the scientific front, and maybe they'd release something other than desktop parts for those uses too.
post #27 of 31
Quote:
Originally Posted by trueg50 View Post


That's what they say in the article (if you believe them; I'm on the fence on whether it's true or just BS). I couldn't care less about Vantage, and PhysX is pretty much useless at the moment, but GPGPU work is very exciting and beneficial for everyone.

I just think that there has to be a standard for it (like with x86), and at the moment there is none: Nvidia uses their own, ATI uses their own. If Nvidia has the greater market share (for CUDA usage) and the better product, then they should push for it to become the standard. It's like having Intel and AMD make two completely different CPUs, say Intel making PowerPC and AMD making x86; why should users have to choose one and get the benefits of only one?

Screw Larrabee; how nice would it be if every ATI, S3, and Nvidia GPU had the ability to tap into GPGPU? Freaking sweet, I'd say. It would also give Nvidia competition on the scientific front, and maybe they'd release something other than desktop parts for those uses too.
There is no reason for Vantage to have gone with PhysX over anything else, let alone not have DX10.1. If Nvidia had nothing to do with it, then they were lucky enough to win the lottery ten times over. And then you have the whole Assassin's Creed bit. Something happened behind the scenes, and it'd be naive to think otherwise.

I have no problem with a standard, but Nvidia pushing PhysX is going in the wrong direction. If all you cared about were GPGPU abilities, then it shouldn't matter to you whether it's CUDA from Nvidia or whatever ATI has developed, and it only helps to have two competing forms, since each company has to make theirs better to compete. But you are pushing CUDA as the necessary standard, and it doesn't need to be.
Edited by stargate125645 - 7/8/08 at 7:38am
post #28 of 31
Quote:
Originally Posted by stargate125645 View Post
There is no reason for Vantage to have gone with PhysX over anything else, let alone not have DX10.1. If Nvidia had nothing to do with it, then they were lucky enough to win the lottery ten times over. And then you have the whole Assassin's Creed bit. Something happened behind the scenes, and it'd be naive to think otherwise.
Very true. I think it kind of proves that a lot of benchmarks really don't matter and can easily be manipulated. Personally, I think that if the performance or feel of a game is improved, the benchmark should reflect that. Maybe they shouldn't have released PhysX for Vantage, but that's another debate that could go either way, ethics-wise.
Quote:

I have no problem with a standard, but Nvidia pushing PhysX is going in the wrong direction. If all you cared about were GPGPU abilities, then it shouldn't matter to you whether it's CUDA from Nvidia or whatever ATI has developed, and it only helps to have two competing forms, since each company has to make theirs better to compete. But you are pushing CUDA as the necessary standard, and it doesn't need to be.
Well, there are two incompatible standards, with CUDA being the more widely adopted. Is there any info on who has adopted ATI's GPGPU language? I know it's in AMD's nature not to really advertise, but I haven't heard much about it, whether out of secrecy or lack of adoption; I would really like to know.

That silence leads me to think that perhaps CUDA should be pursued more.

It's just like teasing a dog with a piece of meat. Developers have a tough enough time developing for either language, much less both, yet we have Nvidia dangling PhysX in front of us one minute, then ATI dangling 1+ teraflop performance the next, then back to Nvidia with Badaboom and wider use of CUDA.

We all know what happens in format wars, and the consumer is rarely the winner.
Edited by trueg50 - 7/8/08 at 7:55am
post #29 of 31
Quote:
Originally Posted by trueg50 View Post
Very true. I think it kind of proves that a lot of benchmarks really don't matter and can easily be manipulated. Personally, I think that if the performance or feel of a game is improved, the benchmark should reflect that. Maybe they shouldn't have released PhysX for Vantage, but that's another debate that could go either way, ethics-wise.

Well, there are two incompatible standards, with CUDA being the more widely adopted. Is there any info on who has adopted ATI's GPGPU language? I know it's in AMD's nature not to really advertise, but I haven't heard much about it, whether out of secrecy or lack of adoption; I would really like to know.

That silence leads me to think that perhaps CUDA should be pursued more.

It's just like teasing a dog with a piece of meat. Developers have a tough enough time developing for either language, much less both, yet we have Nvidia dangling PhysX in front of us one minute, then ATI dangling 1+ teraflop performance the next, then back to Nvidia with Badaboom and wider use of CUDA.

We all know what happens in format wars, and the consumer is rarely the winner.
You only say CUDA is being widely adopted because it is being publicized through PhysX and because Nvidia is pushing PhysX onto everything they can. You cannot argue that it is being widely adopted by the community for any other reason yet.
post #30 of 31
Quote:
Originally Posted by stargate125645 View Post
You only say CUDA is being widely adopted because it is being publicized through PhysX and because Nvidia is pushing PhysX onto everything they can. You cannot argue that it is being widely adopted by the community for any other reason yet.
I personally have been very excited about CUDA since I got an 8600GTS last summer. Heck, I got so excited that my roommate and I started watching video lectures put on by a university; the entire course was on CUDA. I didn't watch the whole thing, but most of the lectures were pretty interesting; one of the instructors was a senior engineer in Nvidia's CUDA department.

You are right, they are pushing PhysX pretty hard (something very odd to see, considering there aren't really any games that support it!).

I do think, though, that lots of people are starting to see what CUDA can really do and are getting excited about it, even though PhysX is about the only thing they can do with it right now. Folding and Badaboom aren't really popular yet, but they do give people a reason to get excited; PhysX is just something Nvidia is trying to hype.

It also depends on who you're talking to: technology folks are going to like the GPGPU potential and actually know their stuff, while gamers will like PhysX and usually won't know what CUDA is, only what PhysX is (and usually aren't terribly brilliant).
Edited by trueg50 - 7/8/08 at 8:27am