1 - 9 of 61 Posts

· Premium Member · 67,312 Posts
Using CUDA probably would have been easier and faster to develop for... and much, much more powerful.

48 ATI shaders at 500MHz... HA HA...
 

· Premium Member · 67,312 Posts
Quote: Originally Posted by usapatriot
CUDA is a sham, and everybody knows it, it's been proven many times already, so please stop talking out of your rear-end. Thanks.
Do you have a better parallel-optimized API that I could use? Is OpenCL finalized yet?

Please provide your "proven many times" evidence before accusing me of talking out of my rear end.

Quote: Originally Posted by BizzareRide
I think you missed this part:


You missed a part too.... Why bother, when there are parallel programming APIs like CUDA and Stream available? Xenos is a 500 MHz, 48-shader ATI GPU. He could have used Stream and simply programmed on a better ATI GPU. An $85 HD 4850 has roughly 18 times the processing power. You know what's an even easier platform to develop for? A PC.

Quote:
"This is a highly effective way of carrying out high end parallel computing on "domestic" hardware for cardiac simulations. Although major reworking of any previous code framework is required, the Xbox 360 is a very easy platform to develop for and this cost can easily be outweighed by the benefits in gained computational power and speed, as well as the relative ease of visualization of the system." However his research does have some bad news for a particular set of cardiac researchers in that his study demonstrates that it is impossible to predict the rise of certain dangerous arrhythmias, as he has shown that cardiac cell models are affected by a specific limitation of computational systems known as the Halting problem.
 

· Premium Member · 67,312 Posts
If this was just an exercise in doing something interesting, cool... but he wasted time when there are already development tools for vastly more powerful hardware available.
 

· Premium Member · 67,312 Posts
Quote: Originally Posted by Kuntz

Ehhhh Duckie, I don't think you are following why they chose to do it this way. You claim they could buy a 4850 for $85, this is incorrect, since this card needs to be plugged into something that is also running Windows Vista. Suddenly your $85 is now $1,000 per machine.

The reason the Xbox 360 is better is because it's a complete computer for $199, which includes an Operating System and Development Library built into the cost. If you read the article, you'd note that a big part of this research was the visual output of the math being done to a Monitor/TV screen.

You are assuming that a professor at a university does not have an existing PC.... I would bet $1,000 that he has access to one. Even if it did cost $1,000, the PC would be 20-40 times more powerful for 4-5 times the cost.
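As a back-of-the-envelope check on that price/performance claim (using my own rough multipliers above, which are guesses rather than benchmarks):

```python
# Price/performance ratio: how much more work per dollar the faster
# system delivers. The multipliers are the post's own ballpark figures
# (PC said to be 20-40x faster at 4-5x the cost), not measured numbers.

def perf_per_dollar_ratio(speedup, cost_ratio):
    """Relative work-per-dollar of the faster, pricier machine."""
    return speedup / cost_ratio

worst_case = perf_per_dollar_ratio(20, 5)  # 4.0x even in the pessimistic case
best_case  = perf_per_dollar_ratio(40, 4)  # 10.0x in the optimistic case
```

Even at the pessimistic end of those guesses, the PC comes out several times ahead per dollar.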
 

· Premium Member · 67,312 Posts
Quote: Originally Posted by trueg50
'bout time the developers get some love!

Though it is odd, at my work, the developers are always given way overpowered equipment (like 3.0 ghz C2Q's w/ 4gb of RAM and 7650GT's to replace 3.2ghz Pentium D's w/ 4gb of RAM and 7650GT's).
Don't get me started.... the developers have been complaining about it for years.

Portfolio managers and traders get three LCDs and PC upgrades first. The most complex things that they ever run are things that we built for them....
 

· Premium Member · 67,312 Posts
Quote: Originally Posted by AMD+nVidia

Wait what?

I kick off my code.... and have nothing to do for 10 mins except surf the net.

Quote:
I'm pretty sure you just insulted yourself


How? Management doesn't think developers should get the best PCs.... even though we need them since we are creating the apps for everyone.
 

· Premium Member · 67,312 Posts
Quote: Originally Posted by BizzareRide

48 x 500MHz x 34,000,000 = 816,000,000,000 floating point operations per second

Developing a GPU client for distribution over Xbox Live would be cheaper and more productive than buying a thousand, 216 core nVidia GPUs.

Where does it say this is a distributed system?
 

· Premium Member · 67,312 Posts
Quote: Originally Posted by r4zr

OK, why does everyone always assume that he could have even made use of the extra power of a nVidia card?

I have two options: one system that costs $200 (a) and one that is 40x as powerful but costs $1,000 (b).

The job will take 20 minutes on a.

Why would I buy b?
Why would I rent b?

This is what this article is saying. If you don't need the power, there is a cheaper way.

A $50 9600 GT is 2-4 times more powerful than the GPU in a $200 Xbox.

cheaper + more powerful = win.
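The disagreement really comes down to what you optimize for. A small sketch of the one-off-run trade-off, using the thread's own rough prices and a hypothetical 40x speedup (the function name and figures are illustrative assumptions, not anything from the paper):

```python
# How much each minute saved actually costs when you buy the faster
# machine for a single run. Prices and speedup are the thread's rough
# numbers: a $200 Xbox doing the job in 20 minutes vs. a $1,000 PC
# assumed to be 40x faster.

def minutes_for_job(base_minutes, speedup):
    """Wall-clock time on a machine `speedup` times faster than baseline."""
    return base_minutes / speedup

xbox_price, pc_price = 200, 1000
xbox_minutes = minutes_for_job(20, 1)    # 20.0 minutes
pc_minutes   = minutes_for_job(20, 40)   # 0.5 minutes

# Extra hardware cost per minute saved, for a single one-off run:
dollars_per_minute_saved = (pc_price - xbox_price) / (xbox_minutes - pc_minutes)
# ~$41/minute: steep for one run, trivial once many runs are queued.
```

For a single 20-minute job, r4zr's cheap box is the obvious buy; the faster machine only pays off once the runs (or the models) get much bigger.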
 

· Premium Member · 67,312 Posts
Quote: Originally Posted by r4zr

Sort of. Depends on what you have. And this is what HE said. This paper was all about GPGPU computing! He even mentions CUDA. The point of the paper was that there are tons of GPUs to be had for very cheap, and you could use whatever was cost effective for you.

Before making claims about this, it's helpful to read the whole paper as published. I have. Here is the link, full text in PDF.

http://research.microsoft.com/pubs/79271/turing.pdf

Thanks, I just did a quick read-over... the research was done over a year ago and probably started 2-3 years ago. That means CUDA and other GPGPU APIs were only just becoming available. So at the time, his prior Xbox experience and the lack of other tools were the reasons he chose that platform.
 