Countdown to Power Down
David Lowenthal, associate department head for computer science at the UA, has received a grant to develop a system that would help reduce inefficiencies in supercomputers.

By La Monica Everett-Haynes, University Communications
Dec. 6, 2012


The clock is running out: The U.S. Department of Energy has charged the scientific community with helping to reduce the energy consumed by high-performance computing, even as a global race is under way to reach exascale computing, the next generation of supercomputers.

But reaching high performance with lower energy consumption requires critical adjustments to how supercomputers operate.

"What we want is to be able to have machines that can compute 10 to the 18th floating point operations per second by the end of the decade," said David Lowenthal, associate department head for computer science at the University of Arizona.

The current peak is 10 to the 16th. And under the federal energy department's standards, an exascale machine should use no more than 20 megawatts.

"The problem today is that we're a factor of two from the power limit. It doesn't really seem like a happy situation that we have to increase performance 50 fold, but power by only two," said Lowenthal, who has just received a $400,000 National Science Foundation grant funded through August 2015.

Hardware improvements can only help so much.

"Surely, hardware will become more power efficient, but it won't account for a factor of 50 versus two," Lowenthal said, adding that the team's work ultimately will inform government agencies, national laboratories and companies that are building large-scale supercomputers.

Under the grant, Lowenthal and his team of researchers intend to develop a software system that will automatically allow applications to achieve high performance via intelligent allocation of the available power.

Think this doesn't apply to you?

In addition to issues of power efficiency, advances in supercomputing and exascale computing have important, tangible applications.

Consider that supercomputers are used to help scientists understand climate change and weather patterns, improve medical diagnoses, study how supernova explosions work, investigate cellular behavior and design advanced aircraft, among other things.

In fact, in a recent interview with Forbes, U.S. Department of Energy Secretary Steven Chu explained that the federal agency is emphasizing the further development of supercomputers.

"Previously, scientists had two pillars of understanding: theory and experiment. Now there is a third pillar: simulation. Scientists can now simulate live situations with all of their complexity and begin to get answers that can be verified in the real world," Chu noted in the interview. "This experimentation in a computer is the third leg of technological development."

Also, the Department of Energy announced in November that it is aiming to build an exascale system by 2022.

Lowenthal, in collaboration with UA graduate students Tapasya Patki and Peter Bailey, will build software systems and investigate how to execute programs as quickly as possible while training systems to budget power use.

Lowenthal calls this power-constrained computing.

"Before, we just wasted power. You had enough power in the room that you didn't care if your program was running inefficiently. Now we care," he said.

So, his team's effort is not to minimize the work the machines do, but to cap the amount of power they are able to use. A software system serving as a power monitor, of sorts, will enable the overall machine to allocate power on the fly.

The anticipated outcome is improved performance while honoring the specified power cap.

"What I want to figure out is how do we allocate power to different machines? But how do we do that so the program that needs to be executed also runs as fast as possible? This proposal is 100 percent targeted at answering those questions," Lowenthal said.

The team also will consult with applied scientists to determine how, through better coding practices, systems can be made even more power efficient.

"A machine with 100,000 processors is not uncommon, but the goal is to solve larger and larger computational problems – but there is a horsepower issue," said Lowenthal, who had earned other NSF grants in the past for comparable research projects.

"There is infinite demand from the applied science side, and they can always use more computing power," Lowenthal said. "Generally speaking, over the last 20 to 30 years, the way we have gotten more and more computing power is by building larger and larger machines with more processors."

That cannot be a common practice in the future, he said.

"Aside from the environmental aspect, which many people care about, power and performance are tied," Lowenthal said.

"So if we take today's technology and build something 50 times larger, it's going to take more power than is realistic. You can't get that many machines into a room," he also said. "Today, we pretty much have enough power to power our machines but, going forward, it is going to be between unlikely and impossible that we will have enough power."

Extra info

David Lowenthal 

UA Department of Computer Science

dkl1@email.arizona.edu
