Supercomputers with 100 million cores coming by 2018

The push is on to build exascale systems that can solve the planet's biggest problems

IBM is in competition with Cray and other supercomputer makers, and cutting power demand for users is among the top problems. The vendors still have to decide how to build these systems, but increasingly they are likely to use hybrid approaches that combine co-processors or accelerators with CPUs in an effort to cut power.

The Roadrunner, which uses 3.9 megawatts, achieved just over one petaflop when it was announced. It uses a hybrid architecture that mixes AMD processors with Cell processors; each Cell chip contains nine cores: one PowerPC core and eight smaller co-processing units called synergistic processing elements. The use of co-processors, including graphics processing units and field-programmable gate arrays, is intended to help cut power demand by moving some of the work off CPUs to processors that handle more specialized tasks.
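The pattern is the same across these hybrid designs: the CPU orchestrates, and the bulk arithmetic is shipped to the accelerator. A minimal CUDA sketch of that offload pattern, purely illustrative and a generic GPU example rather than Roadrunner's actual Cell code, looks like this:

// Minimal CUDA sketch of the CPU-to-accelerator offload pattern.
// Illustrative only: a generic GPU example, not Roadrunner's Cell code.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// The accelerator handles the specialized bulk arithmetic.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *host = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) host[i] = 1.0f;

    float *dev;
    cudaMalloc(&dev, bytes);
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice); // move work off the CPU
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);        // accelerator does the flops
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost); // bring results back

    printf("host[0] = %f\n", host[0]); // prints 2.0
    cudaFree(dev);
    free(host);
    return 0;
}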

Estimates on the size of exascale systems range from 10 million to 100 million cores. Turek believes the latter number is more likely.
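A back-of-the-envelope calculation shows where those figures come from. The per-core rates below are assumed, illustrative numbers, not vendor projections:

// Back-of-the-envelope: core counts needed for one exaflop (10^18 flops).
// The per-core rates are assumed, illustrative figures.
#include <cstdio>

int main() {
    const double exaflop = 1e18;
    const double rates[] = {1e11, 1e10}; // 100 and 10 gigaflops per core
    for (double rate : rates)
        printf("%3.0f GFLOPS/core -> %3.0f million cores\n",
               rate / 1e9, exaflop / rate / 1e6);
    // 100 GFLOPS/core -> 10 million cores (the low end of the estimates)
    //  10 GFLOPS/core -> 100 million cores (Turek's figure)
    return 0;
}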

"We think exascale is a 100 million-core kind of enterprise, and there doesn't seem any real pathway around it, said Turek. "Where the players in pursuit of exascale are today is [at] a state of investigation to see what the right model is. So if hybridization is the key, then what is the ratio of special-purpose cores to conventional cores?" he said.

These future systems will have to use less memory per core and will need more memory bandwidth. Systems running 100 million cores will continually see core failures, and the tools for dealing with them will have to be rethought "in a dramatic kind of way," said Turek.
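A rough illustration of why: assume, hypothetically, that a single core fails on average once every ten years. At 100 million cores, the machine as a whole never goes long without a failure:

// Rough failure arithmetic for a 100-million-core machine.
// The ten-year per-core MTBF is an assumed, illustrative number.
#include <cstdio>

int main() {
    const double cores = 1e8;
    const double core_mtbf_hours = 10.0 * 365.25 * 24.0;  // ~87,660 hours
    const double failures_per_hour = cores / core_mtbf_hours;
    printf("~%.0f core failures per hour\n", failures_per_hour);            // ~1141
    printf("~%.1f seconds between failures\n", 3600.0 / failures_per_hour); // ~3.2
    return 0;
}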

IBM's design goal for an exascale system is to limit it to 20 megawatts of power and keep it at a size of between 70 and 80 racks. Jaguar is built entirely of CPUs, but Bland also sees future systems as hybrids, pointing to chips in development at both Intel and AMD that combine CPUs and co-processors.

"We believe that using accelerators is going to be absolutely critical to any strategy to getting to exaflop computers," he said.

Addison Snell, CEO of Intersect360 Research, an HPC research firm in Sunnyvale, Calif., said accelerators can provide vast computational capability for specific applications, and the applications that can take advantage of them can move toward exascale first. Eventually, a general-purpose exascale system will arrive, "but special-purpose will probably come first."

Before exascale arrives, petaflop systems will continue to grow in size, and government-funded efforts to build massive systems seem to be on the rise. Fujitsu is planning a 10-petaflop computer in 2011 for Japan's Institute of Physical and Chemical Research, and China has now reached petaflop scale. Governments appear to be more willing to fund large systems, and an international race may be starting to build systems capable of solving some of the world's most pressing problems.
