Overclock.net - An Overclocking Community

NVIDIA forum: Thinking about buying a GTX 670 for the CUDA cores for 3D Rendering, anyone know much about CUDA? (https://www.overclock.net/forum/69-nvidia/1265775-thinking-about-buying-gtx-670-cuda-cores-3d-rendering-anyone-know-much-about-cuda.html)

Shadow11377 06-04-2012 01:24 PM

GTX 670s have 1344 CUDA cores while my 550 Ti has 192. That's seven times as many.

I'd like to know whether 7x the CUDA cores would get me 7x the performance rendering scenes with Blender, and whether each CUDA core on the 670 is faster because it's a better card.

I game a lot on this computer and I've been doing fine with the 550 Ti, but I hooked up my 1050p TV, and from that I know I'm going to need a new card for when I get a 1080p monitor. At the moment I'm playing around with Blender, and the render times are pretty long if I want results to come out nice. I'm using the Cycles renderer, which renders on the GPU, and I want to know if upgrading to a card with 7x the CUDA cores will give me around 7x the performance for stuff that uses them.

I'm aware that most games won't benefit from more cores, so I don't expect anywhere near 7x the FPS in games. I do expect games with heavy PhysX to benefit, but there aren't many of those out yet.

McMogg 06-04-2012 02:36 PM

Nope.
Firstly, the CUDA cores in the 6xx range aren't the same as the CUDA cores in the 5xx range, and the compute ability of the GPU is a bit crippled in Kepler. For 3D rendering it's better to get a GTX 580 than a 680.

xxmastermindxx 06-05-2012 12:47 AM

Like McMogg said, they're not the same CUDA cores. They're two different architectures and can't really be directly compared. Fermi had great compute performance; Kepler does not. It's slower than the GTX 500 series and AMD's 7900 series in almost every compute benchmark, other than compute shader programs, I believe.

Shadow11377 06-05-2012 06:58 PM

That's a surprise!
Could this maybe be improved with a driver update, or do you think the 500 series will always be better for CUDA stuff?

If you can link me to some of the benchmarks you've seen, that'd be nice; I like to know as much as I can before making a purchase.


Edit: I did very little googling before, and I read something about the number of CUDA cores being crippled by the low VRAM on the cards out now. If I were to get a card with more RAM, would that solve it?

kz26 06-05-2012 07:00 PM

If you're serious about 3D rendering, you'll need to go and do a bit of research and tell us what apps you'll be using. In addition, you'll want to be looking at Quadro workstation-class cards for this type of work.

Shadow11377 06-05-2012 07:10 PM

I'm more serious about the gaming; I've only recently gained an interest in 3D. I'm looking to get good gaming performance, but I'd like good render times as well.

I have a spare computer I can throw my 550 Ti into and set up as a render slave to help, once I've fully learned the software and want to make some cool stuff. Right now, though, I'm looking for a card that will give me a nice rendering performance increase over this 550, and keep good FPS while gaming when I get my new monitor.


Right now I am using Blender with the Cycles renderer, using my GPU to do the work. My scenes are rendered at low resolution with few textures and low geometry to cut down on render times; I'd like to work at a faster pace with better models and textures.

I don't see myself moving away from Blender because I'm not doing any professional work, nor am I getting paid. I don't have access to a cheap copy of any of the paid software either, so I'm kind of stuck with Blender, and liking it.

When I get my monitor upgrade it will be 1080p, or preferably 1920x1200, and I want good performance at that res.

McMogg 06-06-2012 02:29 AM

Well, that screams 'GTX 580!' at me.

Get the 3GB version (heck, get it second hand and cheap!) and it will hold its own for that kind of stuff. The 500 series still has good compute performance; that's why folders haven't moved to 6xx yet.

Shadow11377 06-06-2012 10:18 PM

I've read a little bit about the GK104 chip, and it appears its double-precision rate is 1/24 of single precision while the 500 series Fermi is 1/8, and the next line of 600 series cards, based on GK110, will have a larger memory interface and the same 1/8 rate as Fermi, with 33% more CUDA cores.

To me it looks like they rushed the 680 out to compete with AMD, and the real Kepler has yet to come out.
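Those ratios translate into concrete numbers. A back-of-the-envelope sketch, assuming reference shader counts and clocks and counting a fused multiply-add as two floating-point operations per cycle (real rendering throughput depends on far more than peak FLOPS, so treat these as ballpark only):

```python
# Rough peak-FLOPS estimate from published specs (reference clocks assumed).
# Peak SP GFLOPS = cores * shader clock (GHz) * 2 (a fused multiply-add
# counts as 2 floating-point operations per cycle).

def peak_gflops(cores, shader_clock_ghz, dp_fraction):
    sp = cores * shader_clock_ghz * 2
    return sp, sp * dp_fraction

# GTX 580 (Fermi GF110): 512 cores, 1544 MHz hot clock, DP capped at 1/8 of SP.
sp_580, dp_580 = peak_gflops(512, 1.544, 1 / 8)

# GTX 680 (Kepler GK104): 1536 cores, 1006 MHz base clock, DP at 1/24 of SP.
sp_680, dp_680 = peak_gflops(1536, 1.006, 1 / 24)

print(f"GTX 580: {sp_580:.0f} SP GFLOPS, {dp_580:.0f} DP GFLOPS")
print(f"GTX 680: {sp_680:.0f} SP GFLOPS, {dp_680:.0f} DP GFLOPS")
```

So on paper the 680 has nearly twice the single-precision throughput, yet the 580 still comes out roughly 50% ahead in double precision, which is exactly the 1/8 vs 1/24 gap.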

Can someone help me find some benchmarks comparing the 550 Ti and the 580 in rendering tests, not gaming? All I can find are gaming comparisons, and although they look nice, they're not helping me :\

McMogg 06-07-2012 01:54 AM

That's exactly what they did: they were producing GK104 for the mid range (<660), then realised it could compete with the 7970, so they released it as the GTX 680 while they were still finishing GK110.

I will try to find you some rendering comparisons asap :)

Shadow11377 06-07-2012 02:27 AM

That'd be great.

I can find benches for games no problem, but GPGPU stuff is tricky for me to find. The best I found was a general statement, with no proof, stating pretty much the following: workstation cards > 580 > 680 > 570.

I'm curious to see some numbers :)


By the way, do you have any clue what AMD is up to in response to the coming GK110, other than factory overclocking their current cards?

McMogg 06-07-2012 05:05 AM

Here's something right up your street, a 'cycles' benchmark on Blender:

It compares 5xx cards to 6xx cards, and as far as I can see it's more like

580>570>680>670.

And I have no idea as to ATi's plan for the GK110 response; probably the 8xxx series ;)

Shadow11377 06-07-2012 09:44 AM

Looks like you missed the link, still not seeing any benchmark.

McMogg 06-07-2012 11:06 AM

Hah!
Typical of me!
Sorry, I'm not at my computer currently (phone browsing), but when I get to it I'll find the link again and post it xD

McMogg 06-07-2012 03:07 PM

Sorry for the double post, but here's the table :)

https://docs.google.com/spreadsheet/ccc?key=0As2oZAgjSqDCdElkM3l6VTdRQjhTRWhpVS1hZmV3OGc#gid=0

Pay specific attention to entries 331 and 340.

Same processor (i7 2600K @ 4.5GHz) and an improved build of Blender, yet the 670 is 10 minutes slower at the task.

Temile 06-07-2012 09:55 PM

I do some CUDA programming. There are three big differences between the 5xx and 6xx architectures:
1) 5xx shaders run at 2x speed, but there are fewer of them... this is good for some applications and bad for others, but the 6xx chips use less power
2) the 6xx chips have fewer double-precision arithmetic units. Games (for the most part) do not use doubles, so cutting this for gaming cards again reduces power consumption and lets other parts run faster
3) the 6xx has new compute capabilities (look up "CUDA compute capability level"), mostly around scheduling. This is definitely a good thing, but most software doesn't take advantage of it yet.

I don't know how much Blender depends on double precision, so that may or may not be an issue (if I had to guess, I'd say "not"). For future, the new compute capabilities may make a big difference...there are certainly some nice tricks I'd like to try out.

Sum: you will not get a 7x speedup, but you will probably get 2-3x or more.

I'd get the 6-series, though it may take a while for drivers and other software to stabilize.
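That 2-3x estimate lines up with a naive cores-times-clock comparison. This is a rough sketch only: Fermi shaders hot-clock at twice the core clock while Kepler's run at 1x, and it ignores memory bandwidth and Cycles kernel efficiency entirely (reference clocks assumed):

```python
# Naive throughput comparison: cores * effective shader clock (GHz).
# Fermi shaders hot-clock at 2x the core clock; Kepler shaders run at 1x.

gtx_550_ti = 192 * 1.800   # 192 cores at 1800 MHz shader (hot) clock
gtx_670    = 1344 * 0.915  # 1344 cores at 915 MHz base clock

ratio = gtx_670 / gtx_550_ti
print(f"Theoretical speedup: {ratio:.1f}x")  # far below the 7x core-count ratio
```

So the raw 7x core count overstates the realistic gap by about half, before any of Kepler's compute regressions are even counted.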

MicahMicahMicah 06-23-2012 01:59 PM

I just ran the blender benchmark from here:

http://www.blendernation.com/2012/01/24/blender-gpu-benchmark/

It resulted in a black image in 00:00.27.

I wanted to see how three EVGA 670 SC 4GB cards stack up against two EVGA 580 UC 3GB. The dual Ultra Classifieds complete the three-monkey-head render in 02:12.40.

She loved E 06-23-2012 04:13 PM

From what I understand, to optimize rendering you need a workstation card (Quadro). Even though GeForce and Quadro are architecturally similar, drivers for the GeForce cards prohibit GPU compute for rendering tasks.

MicahMicahMicah 06-24-2012 12:55 AM

Never mind, got it working. I needed to drop the file Kernel_sm_30.cubin into 2.63\scripts\addons\cycles\lib.

Dual EVGA GTX 580 UC 3GB = 2min 12sec
Single EVGA GTX 670 SC 4GB = 3min 30sec
Triple EVGA GTX 670 SC 4GB = 1min 32sec

Stock clocks for the SC 670s were not stable with three cards (I was getting red-screen buzzing crashes), but were fine with two. I stuck a 92mm Scythe fan I had lying around on the end of the three cards, pushing air between them. That resulted in about -5 degrees C and no crashes since.
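For reference, those posted times work out to sub-linear but reasonable multi-GPU scaling. A quick check, converting the mm:ss render times to seconds:

```python
# Scaling check from the posted Cycles render times.
def to_seconds(mm, ss):
    return mm * 60 + ss

single_670 = to_seconds(3, 30)   # one GTX 670 SC:   3min 30sec
triple_670 = to_seconds(1, 32)   # three GTX 670 SC: 1min 32sec
dual_580   = to_seconds(2, 12)   # two GTX 580 UC:   2min 12sec

speedup = single_670 / triple_670   # three cards vs. one
efficiency = speedup / 3            # fraction of ideal 3x scaling

print(f"3x 670 speedup: {speedup:.2f}x ({efficiency:.0%} efficiency)")
print(f"Triple 670 vs dual 580: {dual_580 / triple_670:.2f}x faster")
```

So three 670s deliver roughly 2.3x the throughput of one, about three quarters of ideal scaling, which is typical of Cycles multi-GPU setups of that era.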

Shadow11377 06-24-2012 04:00 PM

Quote:
Originally Posted by She loved E View Post

From what I understand, to optimize rendering you need a workstation card (Quadro). Even though GeForce and Quadro are architecturally similar, drivers for the GeForce cards prohibit GPU compute for rendering tasks.

I don't have the money for a workstation card; I'm not going for optimal rendering specs, just the best available to me.

