Not sure if this should be in the hardware section. Found it interesting.
Quote:
Antwerp (Belgium) - Recent advances in general purpose GPU computing are beginning to shift perceptions in supercomputing applications. Belgian researchers have assembled a relatively simple enthusiast PC with an emphasis on graphics processing capability, which beats a multi-million dollar supercomputer in its target application.
Originally Posted by coltsrock
DANG! I'm also surprised that the $5K PC only gets 12,603 in 3dmark06 with 4x 9800GX2s and a Phenom?
LOL--well, to be fair, the gfx cards are not in SLI since they're on a crossfire motherboard (also, I don't know if oct. SLI is even a glimmer in someone's eye at this point...).
Originally Posted by thedarklordjay
Damn, I want that PC in my house. 4x 9800GX2... how do you think it runs Crysis?
Probably ****ty...LOL
On the other hand, I wonder how difficult it would be to get something like video encoding to properly work on a graphics card. I'd go for a multi-GPU setup just for that sake...
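For the curious: the reason video encoding looks like a plausible GPU target is that its hot loops (e.g. the sum-of-absolute-differences used in motion estimation) repeat the same independent arithmetic over thousands of blocks. A rough sketch of that data-parallel shape in NumPy, standing in for a CUDA kernel — the block count and pixel data here are made up for illustration:

```python
# Sketch: SAD-based block matching, the data-parallel core of motion
# estimation. One SAD per 8x8 block, all computed in a single vectorized
# pass (on a GPU, each block would be one thread block's work).
import numpy as np

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (16, 8, 8))   # 16 reference 8x8 blocks
cur = rng.integers(0, 256, (16, 8, 8))   # current-frame blocks

sad = np.abs(cur - ref).sum(axis=(1, 2))  # one SAD score per block
best = int(sad.argmin())                  # block with the closest match
```

The point is only the shape of the computation: no block's result depends on any other block's, which is exactly what multi-GPU setups are good at.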
what application is that? If it's a game or 3dmark06, then yes the gaming PC should definitely win. If it's something like CnC or whatever a graphics-emphasized workstation is for, the gaming rig will lose flat out. That's the reason we have workstation video cards in the first place.
reading now to see what application they're talking about.
However, it is the CUDA application where this PC really shines. Compared to the 512-processor, $4.6 million CalcUA supercomputer purchased in 2005, the PC can be more than a match: the projection of image slices took 23.4 seconds on the supercomputer and 35.1 seconds on the PC. The reconstruction of the slices was displayed after 67.4 seconds on the supercomputer and after just 52.2 seconds on the PC. The Vision Lab crew now believes that real-time reconstruction is possible through GPUs and is building a cluster of such systems.
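For what it's worth, the quoted timings can be turned into ratios with a quick script (only the numbers from the article; no measurement of my own):

```python
# Timings quoted in the article (seconds): CalcUA vs. the 4x 9800GX2 PC.
super_proj, pc_proj = 23.4, 35.1      # projection of image slices
super_recon, pc_recon = 67.4, 52.2    # reconstruction of the slices

proj_ratio = pc_proj / super_proj     # >1 means the PC was slower here
recon_ratio = super_recon / pc_recon  # >1 means the PC was faster here

print(f"projection: PC took {proj_ratio:.2f}x the supercomputer's time")
print(f"reconstruction: supercomputer took {recon_ratio:.2f}x the PC's time")
```

So the PC loses the projection step (about 1.5x slower) but wins reconstruction by roughly 1.29x — not bad for three orders of magnitude less money.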
wow, 512-processor computer FTL!
I think it's the graphics processors doing the work though...
That would be f****** fast if it were in SLI! It would be similar performance to one GX2 though, because the other cards are doing barely anything; there would be minimal performance difference.
$5,000,000 Supercomputer: "*cry* *sniff*..."
$5,000 PC: "What's the matter, Mr. Supercomputer?"
$5,000,000 Supercomputer: "I... I... I'm only 2 years old... and already I'm outdated!!... I get beaten by a measly $5,000 computer!! How do you think that makes me feel???"
$5,000 PC: "Well, you still have more RAM than I do. Don't take it so hard on yourself; all computers get outdated eventually..."
$5,000,000 Supercomputer: "Yes... but not after 24 months!!... I remember the good old days... when they put me together, I was marveled at. I could do a Super PI score of 5 seconds!! I was the king at that!! I could run Half-Life 2 on maximum settings without blinking an eye! I was the most powerful computer in the world!!!! Now I can't even run Crysis!!
*sniff*... now... now I've... I've been dethroned by you!!... *sniff*... I think I'm catching a cold... or a virus!! *cry* Guess my end is near... goodbye, cruel world!!"
Their 3dmark06 score sucks. Bad scaling and a phenom?
QX9770 ftw.
The 3dmark06 score is useless for showing off on a comp like this; the CPU just supervises the GPUs while they do the work. A QX9770 would have raised the cost of the comp by about $1.5K for very little performance increase.
And Intel's X48 mobos have 3 PCI Express slots at most, so we'd lose a whole 9800GX2 just to gain a QX9770, which gets you nothing, on top of the added cost of the Intel mobo.
Originally Posted by Pooping^fish
Their 3dmark06 score sucks. Bad scaling and a phenom?
QX9770 ftw.
LOL@you. There is no scaling. It's not in SLI. It's simply four graphics cards working simultaneously (especially since they point out that they're using a CrossFire-based motherboard). The work distribution is governed by the CPU (likely because writing their calculation code to account for SLI would be far too difficult). Using a QX would be a waste of money/processing power, since it likely doesn't even need the full power of the Phenom they used (simple speculation though).
I'm pretty sure they had their reasons to go with the Phenom.
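That division of labor — the CPU hands out independent jobs and each card crunches its own batch, no SLI involved — can be sketched like this. Plain Python threads stand in for per-device CUDA contexts, and `project_slice` is a hypothetical placeholder, not the lab's actual code:

```python
# Sketch: a CPU keeping four independent "cards" fed with slice jobs.
from concurrent.futures import ThreadPoolExecutor

NUM_CARDS = 4  # four 9800GX2s addressed as independent devices, no SLI

def project_slice(slice_id):
    """Placeholder for launching one slice's projection on one card."""
    return slice_id, slice_id * slice_id  # dummy per-slice result

# The CPU's only job is scheduling; the workers do the arithmetic.
with ThreadPoolExecutor(max_workers=NUM_CARDS) as pool:
    results = dict(pool.map(project_slice, range(16)))
```

Since the slices never talk to each other, there is nothing for SLI to add here; four cards on a CrossFire board work just as well as four cards on anything else.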
Overclock.net