Nvidia aims for fastest computer

15 May 2012 Last updated at 21:10 GMT

By Leo Kelion, Technology reporter, BBC News

[Image: Titan supercomputer. The US government plans to install nearly 19,000 of Nvidia's new GPUs into its Titan supercomputer later this year.]

Chip maker Nvidia has revealed details of a new graphics processing unit (GPU) which it says will create the world's most powerful computer.

Thousands of the firm's Tesla K20 modules will be fitted to an existing supercomputer at Oak Ridge National Laboratory in Tennessee, US.

Nvidia says the machine will run nearly eight times faster than at present, carrying out up to 25,000 trillion floating point operations per second (25 petaflops).

The upgrade marks a shift towards hybrid computing.

Traditionally supercomputers relied on central processing units (CPUs) to carry out most of their calculations.

While CPUs tend to outpace GPUs at carrying out a single set of instructions, GPUs have the advantage that they can carry out hundreds of tasks at the same time.

This makes GPUs particularly suited to what are termed "parallelisable" jobs - processes that can be broken down into several parts and run simultaneously, because the outcome of any one calculation does not determine the input of another.

Hybrid computing involves combining CPUs and GPUs in a single system and then writing software that divides up the work to take best advantage of each type of processor's strengths.
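As a rough illustration of that split - not code from Oak Ridge or Nvidia, just a minimal CUDA sketch with made-up names such as scale_add - the host CPU below prepares the data and manages the run, while the GPU kernel hands each element of a large array to its own lightweight thread:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPU kernel (hypothetical example): each thread handles one array element,
// so the whole array is processed in parallel.
__global__ void scale_add(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = 2.0f * a[i] + b[i];   // each result is independent of the others
    }
}

int main() {
    const int n = 1 << 20;             // one million elements
    const size_t bytes = n * sizeof(float);

    // Host (CPU) side: set up the input data serially.
    float *h_a = (float*)malloc(bytes), *h_b = (float*)malloc(bytes), *h_out = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device (GPU) side: copy data over, run the parallel kernel, copy the result back.
    float *d_a, *d_b, *d_out;
    cudaMalloc((void**)&d_a, bytes);
    cudaMalloc((void**)&d_b, bytes);
    cudaMalloc((void**)&d_out, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale_add<<<blocks, threads>>>(d_a, d_b, d_out, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[0] = %f\n", h_out[0]);  // expect 4.0

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_out);
    free(h_a); free(h_b); free(h_out);
    return 0;
}
```

Because no element's result depends on any other, the job is parallelisable in the sense described above; anything inherently serial stays on the CPU.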

Parallel power

Oak Ridge is the US Department of Energy's biggest science laboratory and the souped-up computer is expected to be used to help develop more energy-efficient engines for vehicles, improved biofuels and to model climate change.

Time on the computer will also be rented to third parties.


The concept of a special card to accelerate images drawn on screen dates back to the 1980s, although Nvidia coined the term "GPU" to market one of its products in 1999.

As the name suggests, the original focus of the chips was to improve graphics performance, whether by offering gamers more detailed animations or by helping computers play video files.

But increasingly their makers are focusing on their suitability for other tasks.

The oil and gas industry is probably the biggest market for high-end GPUs. It uses them to help analyse seismic surveys to work out where best to drill to maximise the amount of fossil fuel that can be extracted.

Other popular uses include cryptanalysis, molecular modelling and biochemistry simulations.

In 2007, none of the world's most powerful 500 supercomputers made use of GPU-accelerated systems.

But last year the list included 35 systems and that number is expected to keep growing.

"If you take a look at scientific applications, 99% of the operations can be done in a highly parallel manner, and that can be done much more efficiently by large numbers of very simple GPU processors than on a traditional CPU burning a lot of power trying to make a single thread go fast," Steve Scott, Nvidia's chief technology officer, told the BBC.

"I liken CPUs to a Tour de France where a whole team of trucks and support staff are built around one athlete to help them win the race - a lot of energy making one thing go fast - as opposed to a parallel throughput approach where you make thousands of things in aggregate go fast."

Investment

Nvidia says the addition of its chips should allow Oak Ridge's Titan system to leapfrog from the world's third fastest supercomputer to the top spot.

But the extra speed comes at a cost.

The upgrade is expected to involve the addition of almost 19,000 Tesla K20s. Each is set to have a list price of between $1,500 and $2,000 (£930-£1,245), although the laboratory will get a discount for buying in bulk.

However, the investment will be partly offset by the fact that the machine should consume less energy.

Cutting clock speeds

A focus on maximising performance per watt led Nvidia to take the unusual step of making the cores in its new Kepler architecture run about a third slower than their equivalents in its previous generation of chips.

[Image: Nvidia. There will be more than 2,000 cores in the Kepler GPU that powers the Tesla K20.]

But because the cores use smaller transistors, more cores can be crammed on to each GPU - in this case more than 2,000 per processor.

Nvidia says that its technology will allow Titan to be more than twice as powerful as the current record holder - Fujitsu's K Computer in Japan - and also more than three times as energy efficient.

"A machine like Titan has a budget of around 10 megawatts, and that costs roughly $10m per year just for the electricity, so people are concerned about the electrical bills," said Mr Scott.

"They are also concerned about how much power they can provide to their facility as there is a limited amount of power you can get from the utilities.

"Oak Ridge is probably the best site in the world at providing additional power, but a lot of other centres are limited in their power and cooling infrastructure and so for them their facilities do constrain the amount of performance that they can get."

