Call me a hater, a lamer, whatever you will, but I really do think that the new Intel Core i7 processors are but a very expensive (and very temporary) fix for all the hardware junkies of this world. In my opinion Intel could and should have milked some more greens out of the Core 2 family, for three simple reasons:
- They still kick the crap out of AMD performance-wise.
- Whether you're selling cars, toys, wood pulp, high-performance dishwashing machines or processors, it never really is a very good idea to launch a product while your biggest market is in an economic crisis.
- It's easy money!!!111!122131!!
And between you and me, does the increase in performance justify the price? Let's weigh the arguments:
- Hyper-Threading, according to what was seen in the late Pentium 4s, and what ARM has to say about processors, is a total waste of time, silicon, and electrons. It has been called inefficient, and performance gains only show up on very specific workloads (correct me if I'm wrong). In my head, it pretty much equates to useless.
- QPI, I must admit, is a great step forward from traditional FSB-based systems. Faster connections between cores and uncores obviously mean lower latencies and greater overall performance, but I am still skeptical as to whether higher FSB speeds could have done the job.
- Triple-channel DDR3 is great too, but then again, it could very well have been implemented on traditional FSB-based motherboards.
- Turbo-Boost in itself is a good effort to make processors more energy efficient, but putting this technology on 130W TDP processors is kind of funny. 130W split across four cores, two cores, or one core is still 130W, so I don't see where the power savings/performance gains are. That, and just as parallel processing is getting bigger and bigger, and as more and more apps support it, Intel is telling us that you can now run single-threaded apps faster by "shutting down" a couple of cores and diverting power to a single core, as long as all the electrical specs are respected. Why the fight for multi-core support in more apps then? Can you say conflict of interest?
- Last but not least, SSE4.2 also seems totally useless to my eyes, considering the efforts that ATI and Nvidia are deploying to create more efficient ways of accelerating the calculations that these new SIMD instructions are trying to compensate for. And if I am wrong and it really were that useful, implementing it in a C2Q would probably have been trivial.
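The Turbo-Boost point above can be put in plain arithmetic: the package is allowed the same thermal budget whether one core or four are active, so turbo only redistributes watts, it doesn't save them. A toy sketch in Python (the 130W figure is the stock Bloomfield TDP from above; the even per-core split is a simplification I'm assuming for illustration, since real turbo bins are set by electrical limits, not simple division):

```python
# Hypothetical illustration: Turbo-Boost redistributes a fixed package
# power budget (TDP) among whichever cores are still active.
TDP_WATTS = 130  # stock Core i7 (Bloomfield) thermal design power

def per_core_budget(active_cores, tdp=TDP_WATTS):
    """Watts available to each active core, assuming an even split."""
    return tdp / active_cores

for cores in (4, 2, 1):
    each = per_core_budget(cores)
    # Total package dissipation never changes -- only the distribution does.
    print(f"{cores} active core(s): {each:.1f} W each, {each * cores:.0f} W total")
```

Whatever the core count, the rightmost column prints 130W: the "savings" are really a reallocation toward single-threaded speed, which is exactly the tension with the multi-core software push noted above.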
I can hear you all preparing your excuses for your premature upgrades: "But Max, a bigger graphics card doesn't decrease my render time in Cinebench." My question to you: how many times per week does the average power-user render some 3D? As a matter of fact, how many times a week does a power-user max out his CPU usage, if you exclude benchmark runs? Probably fewer than you would think.
i7 is what I like to call a post-beta pre-release product. It's a great introduction for the newer socket and technologies with plenty of potential, but there's a reason it's only available in SUPER EXTREME OVERCLOCKER ***BBQ+ edition: it's a quick dose of silicon dope for hardware junkies, but it's not yet ready for the general public. It's just irresponsible to mass-market a 45nm processor with a 130W TDP when you can get a Kentsfield processor with only 90W of thermal design power that gives you about 80% of the real-world performance that i7 offers.
I am confident that the months to come will be filled with quality updates for Intel's new platform, but for now, the only thing migrating to i7 will do is increase your e-peen's size by a couple of nanometers.