The US government is planning to spend hundreds of millions of dollars over the next several years to develop huge supercomputers with power beyond anything available today. The aim is to address the most challenging problems facing science, as well as national security and industry.
Once completed, these systems will be capable of sustained petascale computing speeds, equal to quadrillions of calculations per second. For scale, the fastest machines on the current Top500 supercomputer list reach only hundreds of TFLOPS (trillions of floating-point operations per second). The latest Top500 list, updated twice a year, is due out tomorrow.
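The unit arithmetic behind those prefixes is simple: each step from giga to tera to peta is a factor of 1,000. A quick sketch, using the 207 TFLOPS sustained run reported later in this article (round numbers only, not any machine's peak benchmark):

```python
# FLOPS unit scale: each metric prefix step is a factor of 1,000.
TERA = 10**12  # 1 TFLOPS = one trillion floating-point ops/sec
PETA = 10**15  # 1 PFLOPS = one quadrillion floating-point ops/sec

sustained_tflops = 207  # BlueGene/L's sustained Qbox run, per the article
sustained_flops = sustained_tflops * TERA

# What fraction of a full petaflop that sustained run represents
fraction_of_petaflop = sustained_flops / PETA
print(f"{fraction_of_petaflop:.3f} PFLOPS")  # prints "0.207 PFLOPS"
```

In other words, even the record-setting sustained run described below is about a fifth of the petaflop target these new programs are chasing.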
But PFLOPS (or "petaflop") systems are coming. Earlier this month, Seattle-based Cray Inc. said it had signed a contract worth US$200 million to deliver a PFLOPS-capable system to the U.S. Department of Energy's (DOE) Oak Ridge National Laboratory. That system, based on Advanced Micro Devices processors, will be built in phases of increasing speed and is due to be completed in 2008.
The National Science Foundation (NSF) this month began seeking proposals for a supercomputer that could cost as much as US$200 million. And in July, the Defense Advanced Research Projects Agency (DARPA), which was responsible for creating the Internet, will award two supercomputer development projects expected to cost several hundred million dollars.
The scale of the computing power on its way will be so enormous that "we have to change the way we do computational science to really take advantage of these machines," said Dimitri Kusnezov, head of the DOE's advanced simulation and computing program, which operates the world's most powerful supercomputer, the IBM BlueGene/L. That supercomputer, with more than 131,000 IBM Power processors, was the No. 1 system on the Top500 list when those rankings were last updated in November.
This DOE BlueGene system broke a record this month when it ran a scientific code, called Qbox, at a sustained 207 TFLOPS. While the system benchmarks higher on test codes, achieving that level of performance with a real-world application is a more difficult task because of the complexity and size of the code, according to those involved with the project.
But Kusnezov said that when he considers the performance of future systems, including an IBM system built from 250,000 processors, their capabilities will challenge scientists.
"The question is what they would do with an infinite amount of computing speed," said Kusnezov, referring to scientists. "What would they calculate? And I'll wager that they don't have an answer for you. Because people think about their problems within the constraints of what they think they can calculate, and once you remove that constraint, people are lost."
Kusnezov said petascale computing levels will force researchers to assemble multidisciplinary teams that can task these systems with solving fundamental scientific problems.
DARPA has been running a multiyear program to build the next generation of computer systems, and next month, it is expected to pick two of the three vendors it has been working with -- Sun Microsystems, IBM and Cray -- for the next phase of the project. DARPA's goal is to build an "economically viable" petascale supercomputer, according to Jan Walker, a spokesperson for the agency. DARPA is planning on four and a half years of development.
The NSF also wants a petascale system and is seeking proposals -- due next February -- for a supercomputer that can answer fundamental questions about what kinds of abrupt transitions can occur in Earth's climate and ecosystem structure. It wants the system ready by 2011.
Stephen Meacham, IT research program director at the NSF, said the intent is to attack "frontier problems" in science, such as modeling the interaction of viruses with various components in a cell and looking for ways to block those interactions.
Although much of the focus on supercomputers is on the number of processors being strung together, more vexing problems involve their memory and storage subsystems, which "begin to take up a good chunk of the overall cost of the system," said Dave Turek, vice president of deep computing at IBM.
Energy consumption is another issue for systems that can consume power by the megawatt. Turek said the next-generation BlueGene system, BlueGene/P, will be targeted at delivering petaflop performance but will require only a 10 to 15 percent increase in power. IBM isn't disclosing other details, or a time frame, for the system.
Turek said the most important goal is building low-cost, high-performance systems that businesses can use. Indeed, there are efforts to encourage businesses to incorporate supercomputing systems into their IT infrastructure for use in product design.
To help further such use, legislation was introduced in the US Senate this month to set aside US$25 million to fund up to five supercomputing centers for assisting businesses and manufacturers. The bill is sponsored by Sen. Mike DeWine (R-Ohio) and Sen. Herb Kohl (D-Wis.).
The legislation, called the Blue Collar Computing and Business Assistance Act of 2006, gets its name in part from the Ohio Supercomputing Center (OSC) in Columbus, whose Blue Collar Computing initiative is intended to promote the mainstreaming of supercomputing in IT.
Stan Ahalt, the OSC's executive director, said one of the things these centers could do is help adapt serial code to parallel systems, as well as take some of the codes used in the national labs and modify them "so they can be used for industrial purposes."
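The kind of adaptation Ahalt describes, turning a serial loop into work spread across processors, can be sketched in miniature with Python's standard multiprocessing module. This is a toy illustration of the general idea, not OSC's actual tooling, and `simulate_cell` is a hypothetical stand-in for one independent unit of scientific work:

```python
from multiprocessing import Pool

def simulate_cell(params):
    """Hypothetical stand-in for one independent unit of work."""
    return params * params

if __name__ == "__main__":
    inputs = list(range(8))

    # Serial version: one processor walks the whole loop.
    serial_results = [simulate_cell(p) for p in inputs]

    # Parallel version: the same loop fanned out across worker processes.
    # This only works because the iterations are independent; real
    # scientific codes often need restructuring first (shared state,
    # data decomposition, communication between workers).
    with Pool(processes=2) as pool:
        parallel_results = pool.map(simulate_cell, inputs)

    assert serial_results == parallel_results
```

The hard part in practice is rarely the fan-out itself but restructuring code so its iterations become independent, which is exactly the sort of help such centers could offer.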