Overclock.net › Forums › Industry News › Hardware News › [TI] AMD thinks most programmers will not use CUDA or OpenCL

[TI] AMD thinks most programmers will not use CUDA or OpenCL - Page 5

post #41 of 64
Quote:
Originally Posted by BizzareRide View Post

This is where specialized/closed-source software can really make a difference. Most developers don't use CUDA or OpenCL, but at least Nvidia is profiting from those that do use CUDA. Not so with AMD and OCL.

Huh??
AMD earns profit on sales of new GPUs to be used for OpenCL.
Nvidia earns profit on sales of new GPUs to be used for either CUDA or OpenCL.
Edited by Partol - 3/26/13 at 4:42pm
post #42 of 64
Nvidia profits either way: if this generation does poorly in OpenCL, you will see the next generation improve, or they will improve driver performance for OpenCL. Nvidia always profits here. AMD only profits from OpenCL/HSA; oddly, Nvidia still profits from HSA. Profit, profit, profit. That's their game, and they do it well.
post #43 of 64
Quote:
Originally Posted by alcal View Post

I was involved in a small project getting some CUDA code to run, and tbh, for any sort of bigger project it takes more than a little tweak to get things to work: a lot more. We were doing Conway's Game of Life, and that took a little while to figure out on its own (and that's a pretty simple program).

I just searched for "Conway's Game of Life" on Google and it shows a simulation right inside the search results!
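For readers following along, the reason Game of Life keeps coming up in GPU discussions is that each cell's next state depends only on the previous generation, so every cell can be updated independently. Here is a minimal sketch of one generation in plain Python — a toy stand-in for what a CUDA kernel would compute per cell, not actual GPU code:

```python
from collections import Counter

def step(live):
    """One Game of Life generation on an unbounded grid.

    `live` is a set of (x, y) coordinates of live cells. Each cell's
    fate is a pure function of the previous generation, which is why
    the update maps naturally onto one GPU thread per cell."""
    # Count live neighbours for every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for x, y in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Standard rules: a cell is born with 3 neighbours, survives with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row oscillate with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
```

The hard part the poster describes is not this logic; it is the CUDA boilerplate around it — memory layout, host/device transfers, and grid/block sizing.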
post #44 of 64
The point I think AMD is making is that it makes more sense for game developers, and developers in general, to use OpenCL (or, for gaming, DirectCompute) than CUDA. CUDA is a solid package, and Nvidia should get credit for really sparking the GPU computing revolution, but at the end of the day, letting one chip maker control the development platform is not going to work.
post #45 of 64
Quote:
Originally Posted by Partol View Post

huh??
AMD earns profit on sales of new gpu's to be used for OpenCL.
Nvidia earns profit on sales of new gpu's to be used for either CUDA or OpenCL.

OpenCL works on Nvidia as well, so that isn't an advantage for AMD, and before you mention AMD's newfound emphasis on compute, Fermi and Tahiti XT trade blows in several compute areas. When developers need support and services, there is no AMD/OpenCL equivalent to CUDA's, so what benefit is there for most developers to choose AMD or OpenCL?

It boils down to closed versus open source, and the former's ability to deliberately implement technology into tangible products, not merely be "compatible".
post #46 of 64
Quote:
Originally Posted by BizzareRide View Post

so what benefit is there for most developers to choose AMD or OCL?

The benefit is ... OpenCL runs on both AMD and Nvidia hardware.
CUDA does not run on AMD hardware.
It's that simple. I don't understand why you ask such a question, since you already know the answer.

Developers develop for the end user, not simply for themselves (except where the developer is the end user, or where the developer does not care about some end users).

I like how Nvidia handled PhysX. PhysX runs on Nvidia GPUs, but can also run on the CPU.
Nvidia owners get GPU PhysX; AMD owners get (lower-performance) CPU PhysX.
Maybe if Nvidia made a CPU version of CUDA, more developers would embrace it.
Of course, I am already aware that CUDA on a CPU would run much, much slower than on a GPU, but it's better than not working at all (for AMD owners).

Is there already a CPU version of CUDA?
Edited by Partol - 3/26/13 at 6:21pm
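The PhysX approach described above — one code path that prefers the GPU but still runs on the CPU — comes down to a simple dispatch. This is an illustrative Python sketch, not Nvidia's implementation; the `gpu` backend object is hypothetical:

```python
def saxpy_cpu(a, xs, ys):
    """CPU fallback: the same arithmetic the accelerated kernel
    would do, run serially (slower, but it works everywhere)."""
    return [a * x + y for x, y in zip(xs, ys)]

def saxpy(a, xs, ys, gpu=None):
    """Compute a*x + y elementwise, preferring an accelerator.

    `gpu` is a hypothetical backend handle; None means no capable
    device was found, so we take the CPU path. This is the
    PhysX-style behaviour: degraded performance for owners of the
    "wrong" hardware instead of no support at all."""
    if gpu is not None:
        return gpu.saxpy(a, xs, ys)  # hypothetical accelerated path
    return saxpy_cpu(a, xs, ys)
```

For example, `saxpy(2, [1, 2, 3], [10, 20, 30])` finds no device, falls back to the CPU, and returns `[12, 24, 36]`.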
post #47 of 64
Quote:
Originally Posted by A Bad Day View Post

I came across this interesting programming language: http://en.wikipedia.org/wiki/ParaSail_%28programming_language%29

http://www.technologyreview.com/news/424836/new-language-for-programming-in-parallel/

Essentially the compiler is designed to do the multi-threading task instead of the programmer.
A programming language will not compensate for the inability of a software engineer, or a PhD, to design a multithreaded program, if he received his university title without talent and without the unpaid experience gained outside of university.
Quote:
Originally Posted by DuckieHo View Post

...the issue is how many people have the knowledge and skill to implement it? A developer with 10+ years of experience and GPGPU skills will easily cost $100K a year.
I always wanted at most $600/month. (But of course, they would need to accept my poor health, including two-thirds of the year not being at work.) People want those who make excuses and can hide behind titles, not those with skills.
Quote:
Multi-threading is hard, especially once you get to problems that are not "embarrassingly parallel".
When you have talent, it's trivial. And when you don't, you shouldn't be doing the job.
Quote:
Developers still have to understand GPU architecture to optimize code.
Or write a prototype, do trial and error, and gather correct information.
Quote:
Design and architecture are hard, so sub-optimal programmers don't do advanced design....
It just takes three or four years during an unemployed period, in which an aspiring software architect works hard on designing, writing, prototyping and debugging things meant to teach him what he personally thinks he needs to learn. As long as he has a reasonably high IQ, all he needs is passion. Design and architecture are actually somewhat easy, but people can't apply the three things they learned in school; they need to self-educate in a lot of areas (some not even programming-related).
Quote:
Furthermore, multi-threading can be very hard for non-trivial applications. Testing multi-threading is even harder. It comes down to cost-benefit.
Actually, one of my last projects could be described as: pointer on pointer on pointer on pointer, all data-driven, where bugs and typos could only appear once it had processed several MB of data (somewhere in 2 GB of RAM, at just the right place, looking completely genuine, but not working). That was much harder to test than a multithreaded program.
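To make the "embarrassingly parallel" distinction above concrete: when tasks share no state, parallelizing really is close to trivial — a plain parallel map, with no locks or ordering. A Python sketch, where `simulate` is a made-up stand-in for an independent unit of work (threads are used for brevity; for CPU-bound Python you would use processes or native code):

```python
from concurrent.futures import ThreadPoolExecutor

def simulate(seed):
    """Hypothetical independent task: depends only on its own
    input, never on another task's result."""
    x = seed
    for _ in range(1000):
        x = (x * 1103515245 + 12345) % 2**31  # toy LCG iteration
    return x

seeds = list(range(8))

# No shared state means no locks, no ordering, no communication:
# the entire "multithreaded design" is one parallel map.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(simulate, seeds))
```

The hard problems this thread is arguing about start when tasks *do* share state — which is exactly when this one-liner stops being enough.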
post #48 of 64
Quote:
Originally Posted by ZealotKi11er View Post

Quote:
Originally Posted by DuckieHo View Post

Only on certain types of workloads....

Intel's Xeon Phi implementation, largely maintaining code compatibility, makes it look pretty good....

Can't you redesign the apps to take full advantage of the GPU, or do you still need the CPU's processing power?

In the end, the CPU still pre-renders frames to be sent to the GPU. Most basic GUI draws are so easy to compute that offloading them can actually worsen performance. As the saying goes: CPUs for a limited number of small or big tasks, GPUs for many small tasks.
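That rule of thumb comes down to fixed per-dispatch overhead (driver calls, data transfer) versus the size of the work being offloaded. A toy cost model in Python, with made-up unit costs purely for illustration:

```python
def run_offloaded(work, dispatch_overhead=1000):
    """Hypothetical cost of sending one job to an accelerator:
    a fixed dispatch overhead plus the work itself."""
    return dispatch_overhead + work

def run_local(work):
    """Cost of just doing the job on the CPU: no dispatch overhead."""
    return work

tiny_draws = [5] * 100     # many trivial GUI draws
big_kernel = 1_000_000     # one large, parallel-friendly job

# Offloading 100 tiny draws costs far more than doing them locally...
assert sum(run_offloaded(w) for w in tiny_draws) > sum(run_local(w) for w in tiny_draws)
# ...while for one big job the same overhead is noise (under 1% extra).
assert run_offloaded(big_kernel) < 1.01 * run_local(big_kernel)
```

The numbers are invented, but the shape of the trade-off is why trivial GUI draws stay on the CPU while large uniform workloads go to the GPU.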
post #49 of 64
Quote:
Originally Posted by Partol View Post

I like how Nvidia handled PhysX. PhysX runs on Nvidia GPUs, but can also run on the CPU.

You do know that Nvidia has left the CPU version basically broken, right? It does not make effective use of multicore CPUs, and the code is barely optimized. I seem to recall reading from a programmer somewhere that if Nvidia optimized the code and added the multi-core functionality it should have, there would be a performance boost on the CPU of half again, if not a little over double.
post #50 of 64
Quote:
Originally Posted by Mopar63 View Post

recall reading from a programmer somewhere that if nVidia optimized the code
Do you have a link?