Originally Posted by wdgann
everything is math in computer...your answer..I cannot get it.
First, that's actually a misconception. Processors do much, much more than just math calculations. Take a look at the x86 Instruction Set. Yeah, there's obviously a lot of math instructions, but there's even more of everything else.
Second, there's math and then there's math. There are many different types of math, and they are not all equal.
I'm not an expert in this stuff, so don't take what I say as gospel. But here it is to the best of my understanding:
Floating point operations are probably the most difficult thing a CPU ever does. It takes a lot of dedicated transistors on the chip to do them accurately and quickly. That's why CPUs historically weren't good at them: dedicating that much silicon to floating point would leave no room for everything else the chip needs to do. That's why in the 386 and 486 days you could buy Math Co-Processors, which offloaded those operations from the CPU onto a dedicated chip built just for that purpose.
Later on, in the Pentium era, another solution was introduced. We started to see a lot of SIMD and fixed-function instructions added to the chips (things like MMX and SSE). These are great because they can improve performance substantially without requiring as many dedicated resources on the chip as a full-blown math unit. The problem with those instructions is that they're of limited use. They work great in the situations they were designed for, but the rest of the time they don't help at all. And they generally can't be used for pure, raw floating point calculations.
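To show what "one instruction, many data" actually means, here's a conceptual sketch in Python of a 4-lane SSE-style add (something like ADDPS). This is a model of the idea, not actual machine code:

```python
def scalar_add(a, b):
    # A plain CPU loop: one add operation per element, one at a time.
    out = []
    for x, y in zip(a, b):
        out.append(x + y)
    return out

def simd_add_4(a, b):
    # Conceptual model of a single SSE-style instruction: one operation
    # that adds all four 32-bit lanes of a register at once.
    assert len(a) == len(b) == 4, "an SSE register holds exactly 4 single-precision lanes"
    return [x + y for x, y in zip(a, b)]
```

Same result either way, but on real hardware the SIMD version is one instruction instead of four. That's also why it's of limited use: it only helps when your data happens to fit the 4-at-a-time pattern.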
Starting sometime in the early 90's, 3D video games began to gain popularity. Unlike 2D games, which can run almost entirely on integer math, 3D games depend heavily on floating point numbers. So unsurprisingly, the poor floating point performance of the CPUs of the time became obvious to everyone. Enter the hardware-accelerated 3D graphics card. Early 3D cards actually consisted primarily of fixed-function units. Over time things evolved, processes were refined and perfected, and the 3D cards of today consist primarily of massively parallel floating point units with very few fixed functions. How they work now is actually very similar to the old Math Co-Processors from the 80's and 90's, just many orders of magnitude larger and more powerful.
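To make the integer-vs-floating-point point concrete, here's the kind of per-vertex math a 3D engine runs constantly. This is a generic sketch of one rotation, not any particular engine's code:

```python
import math

def rotate_y(point, angle_rad):
    # Rotating one vertex around the Y axis: two trig calls and four
    # multiplies, all in floating point. A 3D game does this (and much
    # more) for thousands of vertices, every single frame.
    x, y, z = point
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (x * c + z * s, y, -x * s + z * c)
```

Nothing in there maps cleanly onto integers, which is why software-only floating point was such a bottleneck for early 3D games.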
That brings us to today. The existence of these massively parallel floating point units opens up a whole new world of processing power for the average PC owner that never existed before. GPUs can be used for gaming, video encoding, password cracking, stock market calculations, SETI@Home, Folding@Home, bitcoin mining, and much more.
Just about every task that can be performed on a GPU has its industry revolutionized overnight. Once any given task becomes possible on a GPU, it becomes a waste of time to try to do the same thing on a CPU, due to the extreme difference in performance. Take computer security and encryption for example. These days, any password under 9 characters that doesn't contain at least 1 capital, 1 number, and 1 other character is not considered secure. The reason is that a couple of high-end graphics cards can crack anything less than that in a matter of hours, while doing the same thing on a CPU would take years. That advance in power available to the average user has forced the big security companies to up their game; they have no choice, really.
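The password numbers roughly check out. Here's a sketch of the arithmetic, assuming a couple of high-end cards manage about 10 billion guesses per second against a fast hash (that rate is my assumption for illustration, not a measured figure):

```python
def search_space(alphabet_size, length):
    # Total number of candidate passwords for a given alphabet and length.
    return alphabet_size ** length

def crack_seconds(candidates, guesses_per_second):
    # Worst-case time to exhaust the entire search space.
    return candidates / guesses_per_second

GPU_RATE = 10_000_000_000  # assumed: ~10 billion guesses/sec for 2 high-end cards

# 8 lowercase letters: 26^8, about 2.1e11 candidates -> well under a minute.
lowercase_8 = crack_seconds(search_space(26, 8), GPU_RATE)

# 9 chars from all 94 printable ASCII characters: 94^9, about 5.7e17
# candidates -> on the order of years even at GPU speed.
full_9 = crack_seconds(search_space(94, 9), GPU_RATE)
```

That gap between "seconds" and "years" is exactly why the length and character-class rules got stricter once GPU cracking became cheap.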
Pretty much everything that happens in the computer industry is dictated by the amount of processing power available to users.
Edited by wedge - 6/24/13 at 8:05am