
Scientists put graphics card to work.

Posted: 2005-06-10 01:08pm
by GrandMasterTerwynn
In running simulations.
New Scientist . . . yeah, yeah, yeah. wrote:
Computer graphics card simulates supernova collapse

* 16:57 10 June 2005
* NewScientist.com news service
* Will Knight

Web Links

* Advanced Computing Lab, Los Alamos National Laboratory
* Terascale Supernova Initiative
* Computer Science, California Institute of Technology

New software enabling scientists to perform mind-boggling mathematical calculations and see the results rendered almost instantly on their screens has been released by US researchers.

The Scout programming language, developed at Los Alamos National Laboratory (LANL) in New Mexico, US, lets scientists run complex calculations on a computer's graphics processing unit (GPU) instead of its central processing unit (CPU).

In tests, the graphics processor was able to perform certain types of calculation 12 times faster than a single CPU.

Graphics processors generate smooth and realistic three-dimensional imagery by performing rapid calculations on visual data. And the latest graphics chips rival CPUs for raw processing power, thanks to consumer demand for hardware powerful enough to support the latest 3D computer games.

"These chips normally sit idle when scientists work," says Patrick McCormick, a LANL researcher. "They have all this processing power but it's just not being used."

McCormick says researchers could use the Scout programming language to simulate various phenomena, such as ocean currents and the formation of galaxies. He adds that performing these calculations on a graphics processor makes it simple to render simulations visually at the same time.

Super-giant star

Researchers at LANL have already tested Scout by modelling a critical moment during a particularly spectacular astronomical event: a “core-collapse supernova”. The simulations ran 12 times faster than they do on a single CPU, McCormick says, primarily because the problem is so well suited to a graphics processor's capabilities.

The researchers simulated the shockwave produced after the core of a super-giant star collapses upon itself. The collapse occurs when a gravitationally unstable iron core has been generated by fusion reactions inside the star. A video produced by LANL shows the Scout code used to model the shockwave, alongside a graphical representation of the process.

To make the technology much more powerful, McCormick is working on a version of Scout that will work when several computers are linked together.

Floating point

Peter Schröder, a computer simulation expert at the California Institute of Technology, believes graphics processors have great potential for scientific research. "There is a real market driving this hardware that we can use for scientific computation," he told New Scientist.

Schröder adds that the approach is particularly well suited to "anything that has high floating-point needs with low communication needs" – in other words intensive mathematical calculations that can be easily split up into individual portions. This is because graphics chips contain many individual processing cores that are suited to performing intensive calculations on their own.

But some experts say graphics chips’ design means they will not perform as well as CPUs on less specialised tasks. "For general-purpose scientific computing, GPUs have not proven themselves yet," says Jack Dongarra, a supercomputing expert at the University of Tennessee, US.

Computer Science, University of Tennessee
Really neat shit there. Makes sense, since a GPU is really nothing more than an enormous highly parallel math processor which specializes in intensive matrix and vector mathematics.
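
To make that concrete (the article's "intensive mathematical calculations that can be easily split up into individual portions"), here is a minimal sketch in plain C — not Scout, and not actual GPU code — of the kind of matrix and vector math a GPU eats for breakfast. Every output element is an independent dot product, so each one could be handed to a different processing core.

Code:
#include <stdio.h>

#define N 4

/* y = A * x. Each y[i] depends only on row i of A and on x, never on
 * any other y[j], so all N dot products could run simultaneously on
 * separate GPU processing elements. */
void matvec(float A[N][N], float x[N], float y[N])
{
    for (int i = 0; i < N; i++) {       /* on a GPU: one worker per i */
        float sum = 0.0f;
        for (int j = 0; j < N; j++)
            sum += A[i][j] * x[j];
        y[i] = sum;
    }
}

int main(void)
{
    float A[N][N] = {{1, 0, 0, 0}, {0, 2, 0, 0}, {0, 0, 3, 0}, {0, 0, 0, 4}};
    float x[N] = {1, 1, 1, 1};
    float y[N];

    matvec(A, x, y);
    for (int i = 0; i < N; i++)
        printf("y[%d] = %g\n", i, y[i]);
    return 0;
}

The outer loop carries no dependencies between iterations, which is exactly the "high floating-point needs with low communication needs" shape Schröder describes.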

Posted: 2005-06-10 01:11pm
by Mr Bean
Just a case of scientists waking up to the untapped computing power available to them. This really drives it home, though:
But some experts say graphics chips’ design means they will not perform as well as CPUs on less specialised tasks. "For general-purpose scientific computing, GPUs have not proven themselves yet," says Jack Dongarra, a supercomputing expert at the University of Tennessee, US.
Mostly because, Mr Dongarra, GPUs are not MEANT for general-purpose computing; they are MEANT for highly specialized tasks dealing with very pretty and physically correct light, shading and whatnot.

Posted: 2005-06-10 01:18pm
by Xon
There is also the small matter of the low precision of GPUs' floating-point operations.

GPUs are designed to crunch numbers where it doesn't matter if the lower (let's say) 5 significant digits are off. No one cares if a pixel is 0.5 pixels out of alignment.

But most industries interested in high-performance vector computing can't afford that level of error.
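
A quick way to see what that error looks like — a hedged illustration in plain C, nothing GPU-specific — is to accumulate 0.1 a million times in single precision (the best most 2005-era GPUs offer) and in double precision, then compare against the exact answer of 100,000.

Code:
#include <stdio.h>

int main(void)
{
    float  f = 0.0f;   /* single precision: ~7 significant decimal digits  */
    double d = 0.0;    /* double precision: ~16 significant decimal digits */

    for (int i = 0; i < 1000000; i++) {
        f += 0.1f;
        d += 0.1;
    }

    /* Exact answer is 100000. The single-precision total drifts visibly
     * because each addition keeps only about 7 digits and the rounding
     * errors pile up; the double-precision total stays very close. */
    printf("float  sum: %f\n", f);
    printf("double sum: %f\n", d);
    return 0;
}

For a long-running simulation those per-operation errors compound, which is exactly the problem with doing serious numerical work in the precision a graphics chip provides.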

Posted: 2005-06-10 03:06pm
by Durandal
GPUs are parallel processing beasts, so the potential for scientific calculation is great. But they don't support double-precision FP at the moment. However, WGF (Windows Graphics Foundation) will require integer and DPFP implemented in hardware on the GPU for acceleration, so this situation will change once Longhorn comes around.

At that point, the days of SIMD implementations on the CPU will be numbered. Why waste real estate on the die when you've got a dedicated GPU that sits relatively idle 90% of the time and can do a ton of parallel calculations much faster than any CPU SIMD?
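
For comparison, this is roughly what "SIMD implementations on the CPU" means in practice — a small sketch using x86 SSE intrinsics (my own illustration, not anything from the article), where one instruction operates on four single-precision floats at once:

Code:
#include <stdio.h>
#include <xmmintrin.h>   /* x86 SSE intrinsics */

int main(void)
{
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    /* One SSE instruction adds four single-precision floats at once.
     * A GPU applies the same idea across far more elements in parallel. */
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb);
    _mm_storeu_ps(c, vc);

    printf("%g %g %g %g\n", c[0], c[1], c[2], c[3]);
    return 0;
}

Four floats per instruction is about as wide as 2005-era CPU SIMD gets, which is the point about a GPU's far greater parallel throughput.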

Posted: 2005-06-10 03:28pm
by The Grim Squeaker
Wait, let me get this straight:
Until now, for intensive graphical simulations, they haven't used graphics cards? :?:

Posted: 2005-06-10 03:46pm
by GrandMasterTerwynn
the .303 bookworm wrote: Wait, let me get this straight:
Until now, for intensive graphical simulations, they haven't used graphics cards? :?:
Powerful GPUs of the scale needed for this sort of work have really only come about in the past few years.

Before, you had ordinary video cards, which didn't have much processing power to speak of. Then you had 2D accelerators: ordinary video cards that took over more of the drawing work the CPU would otherwise have handled. Then you had 3D accelerators, but they were constrained by small memory sizes and narrow, slow data pipes.

Then you had GPUs sitting on their own dedicated bus (AGP), which still has quite a bit of a data-moving bottleneck. But the GPU on the card got wider and wider internal data pipes (32 bits became 64, then 128, and is currently 256 bits wide), coupled with blisteringly fast memory: 64 MB, then 128 MB, up to 256 MB.

Now you have PCI Express cards, which don't have the bottleneck issues that AGP does. And future video cards will have greater processing capability (more precision, and so on).

So, again, really, it's only been within the last few years that the GPU had enough kick to it to make something like LANL's Scout package worthwhile.

Posted: 2005-06-10 04:02pm
by The Grim Squeaker
Huh, thanks.

Researchers at LANL have already tested Scout by modelling a critical moment during a particularly spectacular astronomical event: a “core-collapse supernova”. The simulations ran 12 times faster
How much time would it have taken without the GPU, then? Months? Days?