
· B.O.W
Joined
·
28,744 Posts
Quote:

Originally Posted by voice View Post


and your point is...?

can't wait for someone to screenshot and circle my nvidia logo


and your point is?

can't wait for someone to screenshot and circle my nvidia ATi logo
 

· !
Joined
·
18,131 Posts
Quote:

Originally Posted by Kuntz View Post
Alright, a few of you took the joke way too far.
And your point is....
Kidding.

I think being able to parallel process with Xboxes is actually fine. It can be used to supplement processing power.
(Kinda like folding on a PC, and also on a PS3.)

Using Xboxes as the mainstay for parallel processing, however, isn't very cost-effective, as stated many times already.
 

· Registered
Joined
·
5,270 Posts
Quote:

Originally Posted by usapatriot View Post
CUDA is a sham, and everybody knows it, it's been proven many times already, so please stop talking out of your rear-end. Thanks.
I haven't finished reading the thread, so I'm sure you've had your butt handed to you several times already, but I'll add this: if anyone wants to tell DuckieHo he's talking out his rear, you'd better have reams of data in the same post to back up the smack talk. Now, Duckie's been wrong before, but I've never seen it....
 

· Registered
Joined
·
3,013 Posts
@OmegaNemesis: You're ATI! Don't force us to keep this going forever.


@Vtech1: CUDA has been doing just fine, as well as it should, and it enables GPU folding. Basically, it does what a programming language should do.

Now, CUDA is a sham. And your point is...?

Also, usapatriot hasn't contributed much beyond annoying, unfounded statements.
 

· Premium Member
Joined
·
6,228 Posts
Quote:


Originally Posted by DuckieHo View Post

Using CUDA probably would have been easier and faster to develop.... and much much much more powerful.

48 ATI shaders at 500MHz... HA HA...

48 x 500MHz x 34,000,000 = 816,000,000,000 floating point operations per second

Developing a GPU client for distribution over Xbox Live would be cheaper and more productive than buying a thousand, 216 core nVidia GPUs.
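
For what it's worth, peak-throughput numbers like this are usually built up as units x clock x FLOPs issued per unit per clock. Here is a minimal sketch of that arithmetic; the 10 FLOPs per ALU per clock figure (vec4 + scalar MADD) and the helper name are my own assumptions based on commonly quoted Xenos specs, not anything from the thread or the paper, and they land nowhere near 816 GFLOPS.

Code:

# Rough peak-FLOPS estimate: units * clock * FLOPs issued per unit per clock.
# The 10 FLOPs/ALU/clock value (vec4 + scalar MADD) is an assumed figure based
# on commonly quoted Xenos specs, not taken from the article.
def peak_gflops(units, clock_ghz, flops_per_unit_per_clock):
    # Theoretical ceiling only; ignores memory bandwidth and scheduling stalls.
    return units * clock_ghz * flops_per_unit_per_clock

xenos = peak_gflops(units=48, clock_ghz=0.5, flops_per_unit_per_clock=10)
print(f"Xenos peak: ~{xenos:.0f} GFLOPS")  # ~240 GFLOPS under these assumptions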
 

· Premium Member
Joined
·
67,312 Posts
Quote:


Originally Posted by BizzareRide View Post

48 x 500MHz x 34,000,000 = 816,000,000,000 floating point operations per second

Developing a GPU client for distribution over Xbox Live would be cheaper and more productive than buying a thousand, 216 core nVidia GPUs.

Where does it say this is a distributed system?
 

· Registered
Joined
·
108 Posts
OK, why does everyone always assume that he could have even made use of the extra power of an nVidia card?

I have two options: one system that costs $200 (a) and one that is 40x as powerful but costs $1000 (b).

The job will take 20 minutes on a.

Why would I buy b?
Why would I rent b?

This is what this article is saying. If you don't need the power, there is a cheaper way.
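
That argument can be put in code form. A toy sketch, using only the hypothetical numbers from this post (nothing measured, and the helper is purely illustrative): the faster system only pays off if the cheap one can't finish the work in the time you have.

Code:

# Toy version of the argument above, using the post's hypothetical numbers only.
def pick_system(job_minutes_on_a, speedup_b, price_a, price_b, deadline_minutes):
    # If the cheap box meets the deadline, b's extra speed buys nothing.
    if job_minutes_on_a <= deadline_minutes:
        return f"a (${price_a}) is fast enough; b's extra {speedup_b}x speed is wasted money"
    # Otherwise, b is worth it only if it actually gets under the deadline.
    if job_minutes_on_a / speedup_b <= deadline_minutes:
        return f"b (${price_b}) is the only option that meets the deadline"
    return "neither meets the deadline on its own"

print(pick_system(job_minutes_on_a=20, speedup_b=40,
                  price_a=200, price_b=1000, deadline_minutes=60))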
 

· Premium Member
Joined
·
9,684 Posts
Quote:


Originally Posted by r4zr View Post

OK, why does everyone always assume that he could have even made use of the extra power of a nVidia card?

I have two options: one system that costs $200 (a) and one that is 40x as powerful but costs $1000(b).

The job will take 20 minutes on a.

Why would I buy b?
Why would I rent b?

This is what this article is saying. If you don't need the power, there is a cheaper way.

There is an easier way, and it doesn't involve spending hundreds of thousands of dollars to write for an obsolete platform. Heck, for $40 you can get a card for his workstation that will beat the heck out of that Xbox.

Quote:


Originally Posted by BizzareRide View Post

48 x 500MHz x 34,000,000 = 816,000,000,000 floating point operations per second

Developing a GPU client for distribution over Xbox Live would be cheaper and more productive than buying a thousand, 216 core nVidia GPUs.

Where is that 34,000,000 coming from?
 

· Premium Member
Joined
·
67,312 Posts
Quote:


Originally Posted by r4zr View Post

OK, why does everyone always assume that he could have even made use of the extra power of a nVidia card?

I have two options: one system that costs $200 (a) and one that is 40x as powerful but costs $1000(b).

The job will take 20 minutes on a.

Why would I buy b?
Why would I rent b?

This is what this article is saying. If you don't need the power, there is a cheaper way.

A $50 USD 9600GT is 2-4 times more powerful than a $200 XBox GPU.

Cheaper + more powerful = win.
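
Taking the post's own numbers at face value (they are forum estimates, not benchmarks), the price/performance gap is even wider than the raw performance gap. A quick sketch:

Code:

# Price/performance from the figures in the post above (estimates, not benchmarks).
def perf_per_dollar(relative_perf, price_usd):
    return relative_perf / price_usd

xbox_gpu = perf_per_dollar(1.0, 200)   # baseline: $200 XBox GPU
gt_low   = perf_per_dollar(2.0, 50)    # 9600GT, conservative 2x estimate
gt_high  = perf_per_dollar(4.0, 50)    # 9600GT, optimistic 4x estimate

print(f"9600GT: {gt_low/xbox_gpu:.0f}x to {gt_high/xbox_gpu:.0f}x better per dollar")
# 8x to 16x better price/performance under the post's own numbers.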
 

· Registered
Joined
·
108 Posts
Quote:

Originally Posted by DuckieHo View Post
A $50USD 9600GT is more 2-4 times more powerful than a $200 XBox GPU.

cheaper + more powerful = win.
Sort of. Depends on what you have. And this is what HE said. This paper was all about GPGPU computing! He even mentions CUDA. The point of the paper was that there are tons of GPUs to be had for very cheap, and you could use whatever was cost effective for you.

Quote:
However, if one can restate your problem in a form usable by the GPU you can still exploit this power. In fact Graphical Technology firm nVIDIA have recently released a framework for carrying out such general calculations on its hardware, with it's Compute Unified Device Architecture.
Before making claims about this, it's helpful to read the whole paper as published. I have. Here is the link, full text in PDF.

http://research.microsoft.com/pubs/79271/turing.pdf
 

· Registered
Joined
·
1,676 Posts
Quote:


Originally Posted by BizzareRide View Post

48 x 500MHz x 34,000,000 = 816,000,000,000 floating point operations per second

Developing a GPU client for distribution over Xbox Live would be cheaper and more productive than buying a thousand, 216 core nVidia GPUs.

Your logic doesn't work; you can't measure FLOPS without an actual calculation, LINPACK for example. Plus, I doubt that single chip is getting 800 gigaflops.
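
That's the right objection: a peak number on paper isn't a measurement. Below is a rough sketch of the idea only, not the actual LINPACK benchmark: time a large matrix multiply and count 2*n^3 floating point operations to get an achieved figure.

Code:

# Rough, LINPACK-ish measurement: time an n x n matrix multiply and count
# 2*n^3 floating point operations. A sketch of the idea, not LINPACK itself.
import time
import numpy as np

n = 2048
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

gflops = (2 * n**3) / elapsed / 1e9
print(f"Measured: ~{gflops:.1f} single-precision GFLOPS")
# Whatever this prints will be well below the theoretical peak, which is the point.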
 

· Premium Member
Joined
·
67,312 Posts
Quote:


Originally Posted by r4zr View Post

Sort of. Depends on what you have. And this is what HE said. This paper was all about GPGPU computing! He even mentions CUDA. The point of the paper was that there are tons of GPUs to be had for very cheap, and you could use whatever was cost effective for you.

Before making claims about this, its helpful to read the whole paper as published. I have. Here is the link, full text in .PDF.

http://research.microsoft.com/pubs/79271/turing.pdf

Thanks, I just did a quick read-over... the research was done over a year ago, and probably started 2-3 years ago. That means CUDA and other GPGPU APIs were just beginning to become available. Therefore, at the time, his prior experience with the XBox and the lack of other tools were the reasons why he chose that platform.
 

· Registered
Joined
·
108 Posts
Quote:


Originally Posted by DuckieHo View Post

Thanks, I just did a quick read-over... the research was done over a year ago... probably started 2-3 years ago. That means CUDA and GPGPU APIs were just being to become avaliable. Therefore at the time, his prior experience with XBox and lack of other tools were the reasons why he choose that platform.

Yup. Really, it's kind of cool to read this because it's one of the early formal papers on research done with a GPGPU architecture. While not the start, it's definitely one of the pioneers of its practical use.
 