Originally Posted by Hyolyn
Well, I wouldn't say decimate; it really depends on how the application is used. If rendering is the topic here then yes, of course he will benefit from more threads, but if he expects any real noticeable difference, at best it's around 1-2 minutes faster, give or take.
ARGH! This is nothing against you, since most people just don't know, but I really, really wish reviewers would do a better job of running real-world render tests and SHOWING people just how damn good 6-core+ chips are. 1-2 minutes is a significant difference when it comes to rendering. As an example, when I moved from an E8400 to an i7 930, my test renders for a project went from a little over a minute to a hair over 30 seconds. I was lighting a scene and would constantly have to re-render, since preview lighting is a joke in Maya, and I could do this anywhere from 10 to 50 times in an hour. In a typical production workflow you could potentially save anywhere from 5 to 30 minutes per hour JUST on small test renders. That's significant.
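Quick back-of-envelope sketch of that time-saved math, if anyone wants to plug in their own numbers. The render times here are assumed from my own experience (roughly 70s on the E8400 vs. low 30s on the 930), not benchmarks:

```python
# Rough estimate of minutes saved per hour from faster iterative test renders.
# Times below are assumed/illustrative, not measured benchmarks.
old_render_s = 70   # assumed per-render time on the old quad-less chip (seconds)
new_render_s = 32   # assumed per-render time on the faster chip (seconds)

for renders_per_hour in (10, 30, 50):
    saved_min = renders_per_hour * (old_render_s - new_render_s) / 60
    print(f"{renders_per_hour} renders/hr -> ~{saved_min:.0f} min saved per hour")
```

With those assumed times you land in the ballpark of 6 to 32 minutes saved per hour, which is why iteration speed matters so much more than a single benchmark run suggests.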
Extrapolating out, a full 1080p render of a scene can take absurd amounts of time. I've had a 5-10 second 720p animation take almost 24 hours to render out of Maya on an OC'd 930. If the average performance increase in multithreaded apps and renderers going from 4 to 6 cores is around the 50% mark, that render time drops to around 16 hours. That's a HUGE difference.
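One gotcha worth spelling out: a 50% performance increase divides the render time by 1.5, it does not halve it. To halve a 24-hour render you need a 100% (2x) throughput increase. The 24-hour baseline below is just my anecdotal number from above:

```python
# Converting a throughput increase into render time: time_new = time_old / speedup.
# A "50% faster" chip (1.5x speedup) does NOT halve render time.
hours_old = 24.0  # assumed baseline: ~24h animation render on a 4-core

for pct_faster in (50, 100):
    speedup = 1 + pct_faster / 100
    print(f"+{pct_faster}% performance -> ~{hours_old / speedup:.0f} h render")
```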
For the OP: if all you're doing is rendering once in a blue moon, then no, the X79 platform is probably not for you. If you have render-heavy workflows or use software that can actually take advantage of all those threads, then these chips pay for themselves in time saved VERY quickly. I'm also a bit biased, though, since I'm a 3D artist, and for me time = productivity and money. The less time I spend waiting on checkerboards to render, the more time I have to actually work on a project and the less likely I am to miss a deadline.
I really wish reviewers would do a better job of testing these chips in the applications they're actually designed for, and that the majority of people buying them use them for. It gets really annoying seeing 2, maaaaaybe 3 synthetic 3D/rendering benchmarks that run for at most 2-3 minutes, followed by 2 pages of games tested at every setting under the sun, all to find that these chips perform significantly better in maybe 1 or 2 games. Surprise, surprise. Maybe it's just that reviewers don't know the software, but seeing the packages and renderers people actually use tested at different settings would be amazingly helpful, and would go a long way toward clearing up confusion over just which applications these chips excel at.