A supercomputer is a computer that leads the world in terms of processing capacity, particularly speed of calculation, at the time of its introduction. (The term "Super Computing" was first used by the New York World newspaper in 1929 to refer to the large custom-built tabulators IBM had made for Columbia University.)
Supercomputers introduced in the 1960s were designed primarily by Seymour Cray at Control Data Corporation (CDC), and led the market into the 1970s until Cray left to form his own company, Cray Research. He then took over the supercomputer market with his new designs, holding the top spot in supercomputing for five years (1985–1990). In the 1980s a large number of smaller competitors entered the market, in a parallel to the creation of the minicomputer market a decade earlier, but many of these disappeared in the mid-1990s "supercomputer market crash". Today, supercomputers are typically one-of-a-kind custom designs produced by "traditional" companies such as IBM and HP, which purchased many of the 1980s companies to gain their experience, although Cray Inc. still specializes in building supercomputers.
The term supercomputer itself is rather fluid, and today's supercomputer tends to become tomorrow's also-ran. CDC's early machines were simply very fast scalar processors, some ten times the speed of the fastest machines offered by other companies. In the 1970s most supercomputers were built around a vector processor, and many of the newer players developed their own such processors at a lower price to enter the market. In the early and mid-1980s, machines with a modest number of vector processors working in parallel became the standard, with typical processor counts in the range 4–16. In the later 1980s and 1990s, attention turned from vector processors to massively parallel processing systems with thousands of "ordinary" CPUs, some of them off-the-shelf units and others custom designs. Today, parallel designs are based on "off the shelf" RISC microprocessors, such as the PowerPC or PA-RISC, and most modern supercomputers are now highly tuned computer clusters using commodity processors combined with custom interconnects.
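To make the vector-processing idea concrete, here is a minimal illustrative kernel (a sketch, not code from any particular machine): the classic SAXPY loop, whose iterations are fully independent, which is what allowed vector machines such as the Cray-1 to issue the whole loop as a handful of vector instructions.

 /* SAXPY: y = a*x + y. Every iteration is independent of the others,
    so a vector processor can execute the loop as vector loads, a
    vector multiply-add, and vector stores instead of one scalar
    operation at a time. */
 #include <stddef.h>

 void saxpy(size_t n, float a, const float *x, float *y)
 {
     for (size_t i = 0; i < n; i++)
         y[i] = a * x[i] + y[i];
 }

The same independence between iterations is what later massively parallel and cluster designs exploit, splitting the index range across many processors rather than across vector lanes.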
Current fastest supercomputer system
The IBM Blue Gene/L is the fastest supercomputer in the world. On March 25, 2005, IBM's Blue Gene/L prototype became the fastest supercomputer in a single installation, using its 65,536 processors to run at 135.5 TFLOPS (1 TFLOPS = 10^12 FLOPS). The Blue Gene/L prototype is a customized version of IBM's PowerPC architecture. The prototype was developed at IBM's Rochester, Minnesota facility, but production versions were rolled out to various sites, including Lawrence Livermore National Laboratory (LLNL). On October 28, 2005 the machine reached 280.6 TFLOPS with 131,072 processors; the LLNL system is expected to achieve at least 360 TFLOPS, and a future update will take it to 0.5 PFLOPS. Before this, a Blue Gene/L fitted with 32,768 processors managed seven hours of sustained calculation at 70.7 TFLOPS, another first. [1] In November 2005 the IBM Blue Gene/L became number one on the TOP500 list of the most powerful supercomputers. [2]
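As a rough sanity check on these figures (a back-of-the-envelope sketch, not IBM's benchmark methodology), the two reported results imply a little over 2 GFLOPS per processor at both scales, i.e. near-linear scaling as the machine doubled in size:

 /* Back-of-the-envelope check of the Blue Gene/L figures quoted above. */
 #include <stdio.h>

 int main(void)
 {
     double tflops_mar = 135.5, procs_mar = 65536;    /* March 2005 prototype */
     double tflops_oct = 280.6, procs_oct = 131072;   /* October 2005 system  */

     /* 1 TFLOPS = 1e12 FLOPS; report per-processor throughput in GFLOPS */
     printf("%.2f GFLOPS/CPU vs %.2f GFLOPS/CPU\n",
            tflops_mar * 1e12 / procs_mar / 1e9,
            tflops_oct * 1e12 / procs_oct / 1e9);
     return 0;
 }

This prints roughly 2.07 versus 2.14 GFLOPS per processor.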
References:
http://news.com.com/IBM+set+to+take+supercomputing+crown/2100-1010_3-5439523.html?tag=nl
http://news.com.com/Blue+GeneL+tops+own+supercomputing+record/2100-1008_3-5632045.html?tag=nl
http://news.com.com/Blue+GeneL+tops+its+own+supercomputer+record/2100-1006_3-5918025.html
Previous fastest supercomputer system
Prior to Blue Gene/L, the fastest supercomputer was the NEC Earth Simulator at the Yokohama Institute for Earth Sciences, Japan. It is a cluster of 640 custom-designed 8-way vector processor computers based on the NEC SX-6 architecture (a total of 5,120 processors), running a customized version of the UNIX operating system.
At the time of its introduction, the Earth Simulator's performance was over five times that of the previous fastest supercomputer, the cluster computer ASCI White at Lawrence Livermore National Laboratory. The Earth Simulator held the #1 position for 2½ years. Because it was largely unanticipated by the top-performing vendors of the time, its introduction spawned the term "computnik", a reference to the Soviet Union's upstaging of the Western space program with the 1957 launch of Sputnik.
A list of the 500 fastest supercomputer installations, the TOP500, is maintained at http://www.top500.org/.
Quasi-supercomputing
Some types of large-scale distributed computing for embarrassingly parallel problems take the clustered supercomputing concept to an extreme. One such example is the BOINC platform, a host for a number of distributed computing projects, which on April 17, 2006 recorded a processing power of over 418.6 TFLOPS spread across more than one million computers on the network [3]. On the same date, BOINC's largest project, SETI@home, reported a processing power of 250.1 TFLOPS across more than 900,000 computers [4].
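What makes such problems "embarrassingly parallel" is that the work units have no dependencies on one another, so participating machines never need to communicate while computing. A minimal single-machine sketch of the same pattern, using POSIX threads (the crunch() function is an illustrative placeholder, not part of BOINC's actual API):

 /* Embarrassingly parallel pattern: independent work units, with no
    communication between workers until the final collection step. */
 #include <pthread.h>
 #include <stdio.h>

 #define UNITS   8
 #define WORKERS 4

 static double results[UNITS];

 static double crunch(int unit)   /* placeholder for real science code */
 {
     double acc = 0.0;
     for (int i = 1; i <= 1000000; i++)
         acc += (double)unit / i;
     return acc;
 }

 static void *worker(void *arg)
 {
     long id = (long)arg;
     for (long u = id; u < UNITS; u += WORKERS)  /* units are independent */
         results[u] = crunch((int)u);
     return NULL;
 }

 int main(void)
 {
     pthread_t t[WORKERS];
     for (long w = 0; w < WORKERS; w++)
         pthread_create(&t[w], NULL, worker, (void *)w);
     for (int w = 0; w < WORKERS; w++)
         pthread_join(t[w], NULL);
     for (int u = 0; u < UNITS; u++)
         printf("unit %d -> %f\n", u, results[u]);
     return 0;
 }

In a real volunteer-computing project the same independence is what lets a server hand work units to untrusted, intermittently connected machines and simply collect (and cross-check) the results.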
On May 16, 2005, the distributed computing project Folding@home reported a processing power of 195 TFLOPS on its CPU statistics page [5]. Still higher figures have occasionally been recorded: on February 2, 2005, 207 TFLOPS were noted as coming from Windows, Mac, and Linux clients [6].
GIMPS, the distributed Mersenne prime search, currently achieves about 20 TFLOPS.
Google's search engine system may be faster still, with an estimated total processing power of between 126 and 316 TFLOPS. Tristan Louis estimates the system to be composed of between 32,000 and 79,000 dual 2 GHz Xeon machines. [7] Since it would be logistically difficult to cool so many servers at one site, Google's system would presumably be another form of distributed computing project: grid computing.
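That range is consistent with assuming each 2 GHz Xeon retires roughly one floating-point operation per clock cycle (that per-cycle figure is an assumption of this sketch, not something the cited estimate spells out):

 /* One plausible reconstruction of the 126-316 TFLOPS estimate; the
    flop-per-cycle value is an assumption, not from the cited source. */
 #include <stdio.h>

 int main(void)
 {
     double ghz = 2.0, cpus_per_machine = 2.0, flop_per_cycle = 1.0;
     double machines_low = 32000, machines_high = 79000;
     double per_machine = ghz * 1e9 * cpus_per_machine * flop_per_cycle;

     printf("%.0f to %.0f TFLOPS\n",
            machines_low  * per_machine / 1e12,
            machines_high * per_machine / 1e12);
     return 0;
 }

This prints "128 to 316 TFLOPS", close to the quoted range.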