Originally Posted by gonX
Uh... Do you know anything about MFLOPS? GFXes has about 2 TeraFlops, where I'm not fully sure where CPU is, but it's way under.
FLOPS (or flops) is an abbreviation of Floating Point Operations Per Second. This is used as a measure of a computer's performance, especially in fields of scientific calculations that make heavy use of floating point calculations. (Compare to MIPS -- million instructions per second.) One should speak in the singular of a FLOPS and not of a FLOP, although the latter is frequently encountered. The final S stands for second and does not indicate a plural.
Alternatively, the singular FLOP (or flop) is used as an abbreviation for "floating-point operation", and a flop count is a count of these operations (e.g., required by a given algorithm or computer program). In this context, "flops" is simply the plural rather than a rate.
Computing devices exhibit an enormous range of performance levels in floating-point applications, so it makes sense to introduce larger units than the FLOPS. The standard SI prefixes can be used for this purpose, resulting in such units as megaFLOPS (MFLOPS, 10^6 FLOPS), gigaFLOPS (GFLOPS, 10^9 FLOPS), teraFLOPS (TFLOPS, 10^12 FLOPS), petaFLOPS (PFLOPS, 10^15 FLOPS) and exaFLOPS (EFLOPS, 10^18 FLOPS).
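As a rough illustration, the prefix arithmetic can be sketched in a few lines of Python (the helper name is made up for this example):

```python
# Hypothetical helper: convert a raw FLOPS count into a prefixed unit string.
PREFIXES = [("EFLOPS", 1e18), ("PFLOPS", 1e15), ("TFLOPS", 1e12),
            ("GFLOPS", 1e9), ("MFLOPS", 1e6)]

def format_flops(flops):
    for name, scale in PREFIXES:
        if flops >= scale:
            return f"{flops / scale:.1f} {name}"
    return f"{flops:.1f} FLOPS"

print(format_flops(2.18e12))  # "2.2 TFLOPS" -- the PS3 figure quoted below
print(format_flops(80e6))     # "80.0 MFLOPS" -- the Cray-1
```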
A relatively cheap but modern desktop computer using, for example, a Pentium 4 or Athlon 64 CPU, typically runs at a clock frequency in excess of 2 GHz and provides computational performance in the range of a few GFLOPS. Even some video game consoles of the late 1990s and early 2000s, such as the Nintendo GameCube and Sega Dreamcast, had performance in excess of one GFLOPS (but see below).
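A back-of-envelope sketch of where "a few GFLOPS" comes from, assuming such a CPU retires two double-precision results per cycle (an assumption for illustration, not a vendor figure):

```python
# Theoretical peak = clock rate * floating-point results per cycle.
clock_hz = 2.0e9          # 2 GHz, as for the desktop CPUs above
flops_per_cycle = 2       # assumed for a Pentium 4 / Athlon 64 class core
peak_flops = clock_hz * flops_per_cycle
print(peak_flops / 1e9, "GFLOPS")  # 4.0 GFLOPS -- "a few GFLOPS"
```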
The original supercomputer, the Cray-1, was set up at Los Alamos National Laboratory in 1976. The Cray-1 was capable of 80 MFLOPS (or, according to another source, 138–250 MFLOPS). In fewer than 30 years since then, the computational speed of supercomputers has jumped a millionfold.
According to the TOP500 list, the fastest computer in the world as of June 2006 was the IBM Blue Gene/L supercomputer, measuring a peak of 207.3 TFLOPS. This was more than twice the previous Blue Gene/L record of 136.8 TFLOPS, set when only half the machine was installed. Blue Gene (unveiled October 27th, 2005) contains 131,072 processor cores, yet each of these cores is quite similar to those found in many mid-performance computers (PowerPC 440). Blue Gene/L is a joint project of the Lawrence Livermore National Laboratory and IBM.
Cray Inc. has announced that it will be upgrading the Oak Ridge supercomputer. The machine will be capable of a petaFLOPS and is being advertised as three times more powerful than any other computer in the world. The upgrades will be completed by 2007, at a cost of $200 million.
In June 2006, a new computer was announced by the Japanese research institute RIKEN: the MDGRAPE-3. The computer's performance tops out at one petaFLOPS, over three times faster than the Blue Gene/L. MDGRAPE-3 is not a general-purpose computer, which is why it does not appear in the TOP500 list; instead it has special-purpose pipelines for simulating molecular dynamics. MDGRAPE-3 houses 4,808 custom processors, 64 servers each with 256 dual-core processors, and 37 servers each containing 74 processors, for a total of 40,314 processor cores, compared to the 131,072 needed for the Blue Gene/L. MDGRAPE-3 is able to do many more computations with fewer chips because of its specialized architecture. The computer is a joint project between RIKEN, Hitachi, Intel, and NEC subsidiary SGI Japan.
Distributed computing uses the Internet to link personal computers to achieve a similar effect: Folding@home, the most powerful distributed computing project, has been able to sustain over 150 TFLOPS. SETI@home computes data at more than 100 TFLOPS. As of June 2005, GIMPS is sustaining 17 TFLOPS, while Einstein@home sustains more than 50 TFLOPS out of a theoretical 167 TFLOPS.
Pocket calculators are at the other end of the performance spectrum. Each calculation request to a typical calculator requires only a single operation, so there is rarely any need for its response to be faster than the operator can use it. Any response time below 0.1 second is perceived as instantaneous by a human operator, so a simple calculator could be said to operate at about 10 FLOPS.
Humans are even slower floating-point processors. If it takes a person a quarter of an hour to carry out a pencil-and-paper long division with 10 significant digits, that person is calculating in the milliFLOPS range. Bear in mind, however, that a purely mathematical test may not truly measure a human's FLOPS: a human is simultaneously processing smells, sounds, touch, sight and motor coordination.
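The two estimates above are just rate calculations (FLOPS = operations / seconds), which can be written out explicitly:

```python
# One operation answered within the ~0.1 s a human perceives as instantaneous:
calculator_flops = 1 / 0.1
# One long division carried out in a quarter of an hour (15 * 60 seconds):
human_flops = 1 / (15 * 60)

print(calculator_flops)  # 10.0 -> about 10 FLOPS
print(human_flops)       # ~0.0011 -> roughly a milliFLOPS
```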
FLOPS as a measure of performance
In order for FLOPS to be useful as a measure of floating-point performance, a standard benchmark must be available on all computers of interest. One example is the LINPACK benchmark.
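A toy LINPACK-style measurement can be sketched with NumPy: time a dense solve and credit the conventional 2/3 * n^3 operation count for the LU factorization. This is a simplified sketch for illustration, not the official benchmark.

```python
import time
import numpy as np

# Build a random dense n-by-n system.
n = 1000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

# Time the solve and convert the standard flop count into a GFLOPS rate.
start = time.perf_counter()
x = np.linalg.solve(A, b)
elapsed = time.perf_counter() - start

flop_count = (2 / 3) * n**3  # conventional count for LU factorization
print(f"{flop_count / elapsed / 1e9:.2f} GFLOPS")
```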
FLOPS in isolation are arguably not very useful as a benchmark for modern computers. There are many factors in computer performance other than raw floating-point computation speed, such as I/O performance, interprocessor communication, cache coherence, and the memory hierarchy. This means that supercomputers are in general only capable of a small fraction of their "theoretical peak" FLOPS throughput (obtained by adding together the theoretical peak FLOPS of every element of the system). Even when operating on large, highly parallel problems, their performance will be bursty, mostly due to the residual effects of Amdahl's law. Real benchmarks therefore measure both peak and sustained FLOPS performance.
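Amdahl's law itself is easy to demonstrate: if a fraction p of a program parallelizes perfectly across n processors, the best-case speedup is 1 / ((1 - p) + p/n). A short sketch using Blue Gene/L's core count:

```python
# Amdahl's law: best-case speedup on n processors when only a fraction p
# of the work is parallelizable.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallel, 131,072 cores (Blue Gene/L's count)
# sustain only a tiny fraction of their theoretical peak.
for p in (0.5, 0.95, 0.999):
    efficiency = amdahl_speedup(p, 131072) / 131072
    print(f"p={p}: {efficiency:.2%} of peak")
```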
For ordinary (non-scientific) applications, integer operations (measured in MIPS) are far more common. Measuring floating-point operation speed therefore does not accurately predict how a processor will perform on arbitrary workloads. However, for many scientific jobs, such as data analysis, a FLOPS rating is effective.
Historically, the earliest reliably documented serious use of the floating-point operation as a metric appears to be the U.S. Atomic Energy Commission's (AEC) justification to Congress for purchasing a Control Data CDC 6600 in the mid-1960s.
The terminology is currently so confusing that until April 24, 2006 U.S. export control was based upon measurement of "Composite Theoretical Performance" (CTP) in millions of "Theoretical Operations Per Second" or MTOPS. On that date, however, the U.S. Department of Commerce's Bureau of Industry and Security amended the Export Administration Regulations to base controls on Adjusted Peak Performance (APP) in Weighted TeraFLOPS (WT).
FLOPS, GPUs, and game consoles
Very high FLOPS figures are often quoted for inexpensive computer video cards and game consoles.
For example, the Xbox 360 has been announced as having CPU floating-point performance of around one TFLOPS, while the PS3 has been announced as having a theoretical 2.18 TFLOPS. By comparison, a high-end general-purpose PC would rate around ten GFLOPS if the performance of its CPU alone were considered. Taken at face value, the 1 TFLOPS and 2 TFLOPS figures would class these consoles as supercomputers. Such figures should be treated with caution, as they are often the product of marketing. The console figures are typically based on total system performance (CPU + GPU), and in the extreme case the TFLOPS figure is derived primarily from the single-purpose texture filtering unit of the GPU. That piece of logic computes a weighted average of sometimes hundreds of texels during a look-up (particularly when performing a quadrilinear, anisotropically filtered fetch from a 3D texture), but single-purpose hardware can never be included in an honest FLOPS figure.
Still, the programmable pixel pipelines of modern GPUs are capable of a theoretical peak performance an order of magnitude higher than that of a CPU. An NVIDIA 7800 GTX 512 is capable of around 200 GFLOPS, and ATI's latest X1900 architecture (2/06) has a claimed performance of 554 GFLOPS. This is possible because 3D graphics is a classic example of a highly parallelizable problem: the work can easily be split between different execution units and pipelines, so a large speed gain can be obtained by scaling the number of logic gates while exploiting the fact that the cost-efficiency sweet spot of (number of transistors) * frequency lies at around 500 MHz. This has to do with the defect rate in the manufacturing process, which rises exponentially with frequency.
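The peak-FLOPS arithmetic behind such claims is simple multiplication; the pipeline and per-clock operation counts below are assumptions chosen for illustration, not vendor specifications:

```python
# Illustrative GPU peak: pipelines * floating-point ops per pipeline per
# clock * clock rate. All three numbers here are assumed for the example.
pixel_pipes = 24        # assumed number of programmable pixel pipelines
flops_per_pipe = 16     # assumed ops issued per pipeline per clock
clock_hz = 550e6        # near the ~500 MHz sweet spot mentioned above
peak = pixel_pipes * flops_per_pipe * clock_hz
print(peak / 1e9, "GFLOPS")  # 211.2 GFLOPS -- the 7800 GTX ballpark
```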
While a CPU dedicates its transistors to running a single thread of execution very quickly at high frequency, a GPU packs in a great many more transistors running at a lower speed, because it is designed to process a large number of pixels simultaneously with no requirement that any individual pixel be completed quickly. Moreover, GPUs are not designed to perform branch operations (IF statements, which determine what will be executed based on the value of a piece of data) well. The circuits for this, in particular the circuits for predicting how a program will branch in order to ready data for it, consume an inordinate number of transistors on a CPU that could otherwise be spent on floating-point hardware. Lastly, CPUs access data more unpredictably, which requires them to include a block of on-chip memory called a cache for quick random access; this cache accounts for the majority of a CPU's transistors.
General-purpose computing on GPUs is an emerging field that hopes to exploit the vast advantage in raw FLOPS, as well as memory bandwidth, of modern video cards. As an example, occlusion testing in games is often done by rasterizing a piece of geometry and counting the number of pixels changed in the z-buffer, a technique that is highly suboptimal in terms of floating-point operations. A few applications can even take advantage of the texture fetch unit to compute averages over (1-, 2-, or 3-dimensional) sorted data for a further boost in performance.
In January 2006, ATI Technologies launched a graphics sub-system that put in excess of 1 TFLOPS within the reach of most home users. To put this achievement in perspective, consider that less than nine years earlier the US Department of Energy commissioned the world's first TFLOPS supercomputer, ASCI Red, consisting of more than 9,200 Pentium II chips. The original incarnation of this machine used Intel Pentium Pro processors, each clocked at 200 MHz; these were later upgraded to Pentium II OverDrive processors. http://en.wikipedia.org/wiki/FLOPS