AMD questions current multi-core trend [AMD misc]

GEC: Discuss gaming, computers and electronics and venture into the bizarre world of STGODs.

Moderator: Thanas

User avatar
Ace Pace
Hardware Lover
Posts: 8456
Joined: 2002-07-07 03:04am
Location: Wasting time instead of money
Contact:

AMD questions current multi-core trend [AMD misc]

Post by Ace Pace »

Tom's Hardware.
AMD chief technology officer Phil Hester voiced concerns over the implications of the trend to integrate an increasing number of similar cores into one package. The company indicated that it could part ways with a strategy of using dozens of cores in one CPU and would instead turn to developing "Accelerated Processing Units," or APUs for short.

We just learned about the benefits of having multiple cores in one processor package. If you are using multithreaded software, you could see not only substantial performance jumps with dual- or quad-core processors, but also power consumption that is level with or below what we have seen in recent years. At its annual meeting with financial analysts today, however, the company warned that the excitement for multi-core may be somewhat short-lived.

"In the early 2000s, we became side-tracked with the Gigahertz race, which has been a mistake." The same could happen with multi-core processors, Hester stated: "The industry can be side-tracked by the number of cores in CPUs." Claiming that "homogenous multi-core" will become "increasingly inadequate" over the next few years, AMD is putting its bets on a new type of processor, which it calls APU. An APU is based on a building-block idea that was first brought to live with the integrated memory controller: Similarly, AMD plans to leverage ATI's graphics technology as an integral part of future (heterogeneous) multi-core processors that are believe to bring more performance and increased power efficiency.


AMD currently refers to this concept as "Fusion". Roadmaps presented at the event show that the company will introduce such a chip in the mobile computing space in the 2009 time frame. However, the company plans to deploy this concept in what Hester called a "collection of special purpose hardware" ranging from PDAs to petaflop-capable devices. The key idea is that the integration of a GPU can scale from entry-level to high-end devices because of the different capabilities of a graphics chip. On one side, a GPU is just that - a graphics processing unit. But because of its enormous number-crunching capability, you could also see a (different) Fusion processor whose GPU would primarily act as a "stream processor" aimed at floating-point-heavy applications in deskside supercomputers.
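
To make the "stream processor" idea concrete: the workloads in question are data-parallel loops over large arrays of floating-point values, where every element can be computed independently. A minimal C++ sketch of such a kernel (SAXPY, a textbook example; purely illustrative, not any AMD API):

    #include <cstddef>
    #include <vector>

    // SAXPY: y = a*x + y. Each element is independent of the others,
    // so a stream processor (or GPU) can work on thousands at once.
    void saxpy(float a, const std::vector<float>& x, std::vector<float>& y) {
        for (std::size_t i = 0; i < x.size(); ++i)
            y[i] = a * x[i] + y[i];
    }

On a CPU this runs one element at a time; a stream processor applies the same operation across the whole array in parallel, which is why the same silicon can serve both graphics and deskside supercomputing.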

If AMD has its way, then we could be going from a universal computing device today to very specialized and targeted hardware by 2009 and beyond. Fusion is one of the first applications that will leverage AMD's Torrenza platform, and it marks a new trend that the company describes as the "accelerated processing era." Details of specific products have not been announced.

2007/2008 roadmap: DDR3 coming to the desktop in 2008

AMD also provided a few more details about upcoming products. On the server side, the company said it will introduce the "Shanghai" processor in early 2008, as an update of the "Barcelona" quad-core, which is scheduled for a mid-2007 release. Barcelona will be the first CPU to use Hypertransport 3, while it will keep using DDR2 memory and maintain a power envelope between 68 and 120 watts.

On the desktop, there were several new announcements for 2008: the company will transition to socket AM3, which will use DDR3 memory. Single-, dual- and quad-core processors will cover the company's product portfolio from entry-level machines to enthusiast computers. For 2007, we will see the introduction of Hypertransport 3, a continued offering of (and "investment" in, according to AMD) the dual-socket QuadFX technology, as well as new chipsets and graphics: the graphics unit is set to introduce its first DirectX 10 graphics chip, codenamed "R600," which will go head-to-head with Nvidia's GeForce 8800 series. DirectX 10 will be supported by AMD chipsets later in 2007, the company said.

There were some interesting mobile announcements as well. AMD will be introducing new power-optimized graphics solutions for notebooks that adapt to the power source the notebook is running on. A technology it calls "dynamic graphics mode" will allow a notebook to use a discrete graphics card when connected to a power outlet and shift to integrated graphics when running on battery. This technology, of course, requires that a mobile computer carry both integrated and discrete graphics in one case.
Brotherhood of the Bear | HAB | Mess | SDnet archivist |
User avatar
MKSheppard
Ruthless Genocidal Warmonger
Posts: 29842
Joined: 2002-07-06 06:34pm

Post by MKSheppard »

See them lose this round to Intel.
"If scientists and inventors who develop disease cures and useful technologies don't get lifetime royalties, I'd like to know what fucking rationale you have for some guy getting lifetime royalties for writing an episode of Full House." - Mike Wong

"The present air situation in the Pacific is entirely the result of fighting a fifth rate air power." - U.S. Navy Memo - 24 July 1944
User avatar
Ace Pace
Hardware Lover
Posts: 8456
Joined: 2002-07-07 03:04am
Location: Wasting time instead of money
Contact:

Post by Ace Pace »

MKSheppard wrote:See them lose this round to Intel.
:roll: Sure, because that's why they're losing, and not because in practice they ARE a small company competing against the largest chip company in the world, one with more design teams than AMD has staff. 8)

Intel is currently winning mostly due to one thing, and that's not spectacular long-term planning, but Intel Haifa saving the entire company's consumer ass.
Brotherhood of the Bear | HAB | Mess | SDnet archivist |
User avatar
Covenant
Sith Marauder
Posts: 4451
Joined: 2006-04-11 07:43am

Post by Covenant »

Do these dual and multicore machines actually go any faster? If I get a dual 2.66, is it ACTUALLY going any faster than my goofy hyperthreaded 3.2?
User avatar
Ace Pace
Hardware Lover
Posts: 8456
Joined: 2002-07-07 03:04am
Location: Wasting time instead of money
Contact:

Post by Ace Pace »

Covenant wrote:Do these dual and multicore machines actually go any faster? If I get a dual 2.66, is it ACTUALLY going any faster than my goofy hyperthreaded 3.2?
Not with current single-threaded games, but it will run smoother. No more single task locking up the PC.
Soon, more multitasking programs will arrive (I'm ignoring the already-multitasking media creation stuff) and then you'll see performance boosts in large numbers.
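
For illustration, a minimal C++ sketch of what a multithreaded game loop can do that a single-threaded one can't (the task names are made up):

    #include <thread>

    void update_physics() { /* heavy per-frame work */ }
    void update_ai()      { /* heavy per-frame work */ }

    int main() {
        // On a dual core these two tasks genuinely run at the same time;
        // on a single core they just take turns on the one core.
        std::thread physics(update_physics);
        std::thread ai(update_ai);
        physics.join();
        ai.join();
    }

A single-threaded game is one big loop on one core, so the second core sits idle.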
Brotherhood of the Bear | HAB | Mess | SDnet archivist |
Ypoknons
Jedi Knight
Posts: 999
Joined: 2003-05-13 06:02am
Location: Manhattan (school year), Hong Kong (vacations)
Contact:

Post by Ypoknons »

Multi-tasking is a big factor. Virus scanning while playing games, for example.
User avatar
Arrow
Jedi Council Member
Posts: 2283
Joined: 2003-01-12 09:14pm

Post by Arrow »

Ace Pace wrote:
Covenant wrote:Do these dual and multicore machines actually go any faster? If I get a dual 2.66, is it ACTUALLY going any faster than my goofy hyperthreaded 3.2?
Not with current single-threaded games, but it will run smoother. No more single task locking up the PC.
Soon, more multitasking programs will arrive (I'm ignoring the already-multitasking media creation stuff) and then you'll see performance boosts in large numbers.
<slaps Ace with wet towel> multiTHREADED!!!!!
User avatar
MKSheppard
Ruthless Genocidal Warmonger
Posts: 29842
Joined: 2002-07-06 06:34pm

Post by MKSheppard »

Ace Pace wrote:Sure, because that's why they're losing, and not because in practice they ARE a small company competing against the largest chip company in the world, one with more design teams than AMD has staff. 8)
They kicked Intel's ass around the block for a couple of years; do I have to point out the waffle irons that Intel's chips were?

However, they're being really stupid in going away from multicoring, as it makes a lot of things feasible for games and applications: being able to render a 3D image in 2 hours as opposed to 8, or making Silent Hunter actually run smoothly at 4096x time compression.
"If scientists and inventors who develop disease cures and useful technologies don't get lifetime royalties, I'd like to know what fucking rationale you have for some guy getting lifetime royalties for writing an episode of Full House." - Mike Wong

"The present air situation in the Pacific is entirely the result of fighting a fifth rate air power." - U.S. Navy Memo - 24 July 1944
User avatar
Beowulf
The Patrician
Posts: 10621
Joined: 2002-07-04 01:18am
Location: 32ULV

Post by Beowulf »

Ace Pace wrote:
Covenant wrote:Do these dual and multicore machines actually go any faster? If I get a dual 2.66, is it ACTUALLY going any faster than my goofy hyperthreaded 3.2?
Not with current single-threaded games, but it will run smoother. No more single task locking up the PC.
Soon, more multitasking programs will arrive (I'm ignoring the already-multitasking media creation stuff) and then you'll see performance boosts in large numbers.
Well, actually, the answer is maybe. Are you comparing a Pentium D to a P4, or are you comparing a C2D to a P4? A 2.66 C2D is actually faster in single-tasking than a 3.2 P4.
"preemptive killing of cops might not be such a bad idea from a personal saftey[sic] standpoint..." --Keevan Colton
"There's a word for bias you can't see: Yours." -- William Saletan
User avatar
phongn
Rebel Leader
Posts: 18487
Joined: 2002-07-03 11:11pm

Post by phongn »

MKSheppard wrote:However, they're being really stupid in going away from multicoring, as it makes a lot of things feasible for games and applications: being able to render a 3D image in 2 hours as opposed to 8, or making Silent Hunter actually run smoothly at 4096x time compression.
Making SH run smoothly in time compression doesn't sound like a task that multicore will help much with. That sounds like a serial performance bottleneck to me.
User avatar
Admiral Valdemar
Outside Context Problem
Posts: 31572
Joined: 2002-07-04 07:17pm
Location: UK

Post by Admiral Valdemar »

What exactly is the difference between hyperthreading and dual/multiple core chips?
User avatar
Arrow
Jedi Council Member
Posts: 2283
Joined: 2003-01-12 09:14pm

Post by Arrow »

Admiral Valdemar wrote:What exactly is the difference between hyperthreading and dual/multiple core chips?
As I understand it, some of the resources of a single core are duplicated, allowing the single core to act as two cores and giving some speed gains in a multithreaded environment. It's basically faking it. Dual/multi core puts more full-up cores on the chip. Obviously, dual/multi core is superior.
Last edited by Arrow on 2006-12-14 05:55pm, edited 1 time in total.
User avatar
atg
Jedi Master
Posts: 1418
Joined: 2005-04-20 09:23pm
Location: Adelaide, Australia

Post by atg »

Admiral Valdemar wrote:What exactly is the difference between hyperthreading and dual/multiple core chips?
A quick summary.

Hyperthreading is a trick that makes the OS think there are two or more cores. The Intel chips that used it could supposedly run quicker as a result, but there were notable problems with some major programs performing remarkably slower with hyperthreading enabled. I think at least one motherboard manufacturer ended up having it turned off by default in the motherboard settings as a result.

Dual/multicore chips are literally two or more physical processors in one package.
Marcus Aurelius: ...the Swedish S-tank; the exception is made mostly because the Swedes insisted really hard that it is a tank rather than a tank destroyer or assault gun
Ilya Muromets: And now I have this image of a massive, stern-looking Swede staring down a bunch of military nerds. "It's a tank." "Uh, yes Sir. Please don't hurt us."
User avatar
Covenant
Sith Marauder
Posts: 4451
Joined: 2006-04-11 07:43am

Post by Covenant »

atg wrote:
Admiral Valdemar wrote:What exactly is the difference between hyperthreading and dual/multiple core chips?
A quick summary.

Hyperthreading is a trick that makes the OS think there are two or more cores. The Intel chips that used it could supposedly run quicker as a result, but there were notable problems with some major programs performing remarkably slower with hyperthreading enabled. I think at least one motherboard manufacturer ended up having it turned off by default in the motherboard settings as a result.

Dual/multicore chips are literally two or more physical processors in one package.
Specifically, hyperthreading at times forces my computer to top out at 50 percent processor usage. This is never an issue for games, and is often a boon for what I do in graphical stuff where I can specify how many processors I want it to use (and thus allow 50 percent of my resources to remain free and not lock up my box) but on software that doesn't do it right, it won't work right and may just be stupid.
User avatar
phongn
Rebel Leader
Posts: 18487
Joined: 2002-07-03 11:11pm

Post by phongn »

atg wrote:Hyperthreading is a trick that makes the OS think there are two or more cores. The Intel chips that used it could supposedly run quicker as a result, but there were notable problems with some major programs performing remarkably slower with hyperthreading enabled. I think at least one motherboard manufacturer ended up having it turned off by default in the motherboard settings as a result.
That was mostly due to the Replay Engine in the Pentium 4 (and a problem that was resolved in Prescott).

Simultaneous Multithreading (HT is Intel's trade name) permits a processor to execute instructions from multiple threads at once. This improves efficiency in multithreaded workloads, since most processors have tons of idle resources in most workloads. The OS sees multiple logical (but not physical) processors for scheduling purposes.
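
For illustration, a minimal C++ sketch of how software sees those logical processors (assuming a C++11 compiler; the count reported is what the OS scheduler sees, not physical cores):

    #include <iostream>
    #include <thread>

    int main() {
        // On a single-core Pentium 4 with HyperThreading this prints 2:
        // two logical processors backed by one physical core.
        std::cout << std::thread::hardware_concurrency()
                  << " logical processors\n";
    }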
Covenant wrote:Specifically, hyperthreading at times forces my computer to top out at 50 percent processor usage.
What?
This is never an issue for games, and is often a boon for what I do in graphical stuff where I can specify how many processors I want it to use (and thus allow 50 percent of my resources to remain free and not lock up my box) but on software that doesn't do it right, it won't work right and may just be stupid.
That sounds totally nonsensical.
User avatar
Arthur_Tuxedo
Sith Acolyte
Posts: 5637
Joined: 2002-07-23 03:28am
Location: San Francisco, California

Post by Arthur_Tuxedo »

I think what AMD is saying is not that they will stop adding cores, but that the cores will be more specialized. Instead of 8 identical cores, you could have 4 for general processing and 4 for different types of heavy number crunching. It seems like GPUs and CPUs are becoming more like each other if this idea pans out. Earlier I pooh-poohed CPU/GPU hybrids for performance systems because you couldn't stack enough GPU power on something that will fit in a CPU socket, but eventually card-based CPU/GPU hybrids might make sense.
"I'm so fast that last night I turned off the light switch in my hotel room and was in bed before the room was dark." - Muhammad Ali

"Dating is not supposed to be easy. It's supposed to be a heart-pounding, stomach-wrenching, gut-churning exercise in pitting your fear of rejection and public humiliation against your desire to find a mate. Enjoy." - Darth Wong
User avatar
Covenant
Sith Marauder
Posts: 4451
Joined: 2006-04-11 07:43am

Post by Covenant »

It's true. I can tell my application to use one processor when it's rendering or whatnot, and it'll top out at 50% processor power and not use all of it. I know it for a fact, since when I set this it goes to 50 and when I do not it goes to around 98 percent.

And in other circumstances, my processor usage will also top out at 50 percent and hover right there as a program chugs along, and I need to go in and tell it to use both. I think it's a nice feature; it's like giving me a segment of my processor that's never in use unless I let it.
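
What's being described matches processor affinity. A minimal Win32 sketch of the mechanism, assuming the application uses SetProcessAffinityMask (how any given program actually implements its "use one processor" setting is a guess):

    #include <windows.h>

    int main() {
        // Pin this process to logical CPU 0 only. On a HyperThreaded
        // machine with two logical CPUs, Task Manager will now show
        // total usage topping out at 50%.
        SetProcessAffinityMask(GetCurrentProcess(), 0x1);
        // ... run the render/encode workload here ...
        return 0;
    }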
User avatar
Ace Pace
Hardware Lover
Posts: 8456
Joined: 2002-07-07 03:04am
Location: Wasting time instead of money
Contact:

Post by Ace Pace »

Arrow wrote:
<slaps Ace with wet towel> multiTHREADED!!!!!
Bah, I was tired!
MKSheppard wrote:
They kicked Intel's ass around the block for a couple of years; do I have to point out the waffle irons that Intel's chips were?
As I said, Intel Haifa saved Intel's ass.
However, they're being really stupid in going away from multicoring, as it makes a lot of things feasible for games and applications: being able to render a 3D image in 2 hours as opposed to 8, or making Silent Hunter actually run smoothly at 4096x time compression.
Read it properly: no one is saying no to multicore, only that they don't think it's the holy grail, the same attitude they took toward clock speed.
Making SH run smoothly in time compression doesn't sound like a task that multicore will help much with. That sounds like a serial performance bottleneck to me.
Depends on how they do it. Technically, the problem with Silent Hunter running smoothly should be solvable with dual core (split up the calcs for each ship into threads?).
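
A minimal C++ sketch of that idea, splitting the fleet across two threads; entirely hypothetical, not Silent Hunter's actual code:

    #include <cstddef>
    #include <functional>
    #include <thread>
    #include <vector>

    struct Ship { /* position, heading, speed, ... */ };

    // Advance one contiguous block of ships by a single simulation tick.
    void simulate(std::vector<Ship>& ships, std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i) {
            // update ships[i] for this time-compressed tick
        }
    }

    void simulate_all(std::vector<Ship>& ships) {
        const std::size_t mid = ships.size() / 2;
        // One half of the fleet per core.
        std::thread lower(simulate, std::ref(ships), std::size_t{0}, mid);
        std::thread upper(simulate, std::ref(ships), mid, ships.size());
        lower.join();
        upper.join();
    }

The catch phongn points out still applies: if the ships interact, or one calculation dominates, the work doesn't split cleanly and the serial part limits the speedup.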
Brotherhood of the Bear | HAB | Mess | SDnet archivist |
User avatar
atg
Jedi Master
Posts: 1418
Joined: 2005-04-20 09:23pm
Location: Adelaide, Australia

Post by atg »

Arthur_Tuxedo wrote:I think what AMD is saying is not that they will stop adding cores, but that the cores will be more specialized. Instead of 8 identical cores, you could have 4 for general processing and 4 for different types of heavy number crunching. Seems like GPU's and CPU's are becoming more like each other if this idea pans out. Earlier I pooh-poohed CPU/GPU hybrids for performance systems because you couldn't stack enough GPU power on something that will fit in a CPU socket, but eventually card-based CPU/GPU hybrids might make sense.
I posted about this a while back after meeting with an AMD rep. What he specifically told me was that they were aiming for a chip with 4 Athlon64 cores and four FPGA cores that can be programmed by software to specialise in certain areas, e.g. AI in a game.
Marcus Aurelius: ...the Swedish S-tank; the exception is made mostly because the Swedes insisted really hard that it is a tank rather than a tank destroyer or assault gun
Ilya Muromets: And now I have this image of a massive, stern-looking Swede staring down a bunch of military nerds. "It's a tank." "Uh, yes Sir. Please don't hurt us."
User avatar
Arrow
Jedi Council Member
Posts: 2283
Joined: 2003-01-12 09:14pm

Post by Arrow »

atg wrote:I posted about this a while back after meeting with an AMD rep. What he specifically told me was that they were aiming for a chip with 4 Athlon64 cores, and four FPGA cores, that can be programmed by software to specialise in certain ares, ie AI in a game.
WTF?!?! FPGA cores? Any idea how big? And did AMD figure out some technical way to drive down the price, or are they going purely with economies of scale? Because given the cost of higher-end Xilinx chips, I can easily see such a beast costing as much as or more than today's latest, greatest gaming rig.

Did he also say anything about programming tools for these FPGAs? Because VHDL, Verilog and tools like System Generator aren't going to cut it with most game developers, just because of the specialized knowledge and massive amounts of time needed to use those tools and test the resulting images. Would it be something like Impulse C (which is C modified to support true parallel execution)?
Tiger Ace
Jedi Knight
Posts: 627
Joined: 2005-04-07 02:03am
Location: AWAY

Post by Tiger Ace »

Arrow wrote:
WTF?!?! FPGA cores? Any idea how big? And did AMD figure out some technical way to drive down the price, or are they going purely with economies of scale? Because given the cost of higher-end Xilinx chips, I can easily see such a beast costing as much as or more than today's latest, greatest gaming rig.

Did he also say anything about programming tools for these FPGAs? Because VHDL, Verilog and tools like System Generator aren't going to cut it with most game developers, just because of the specialized knowledge and massive amounts of time needed to use those tools and test the resulting images. Would it be something like Impulse C (which is C modified to support true parallel execution)?
Thank SDN for getting me out of family get-togethers.

On topic, I don't expect game developers to be the ones programming these chips; rather, you'd find new middleware developers who will license their FPGA designs to game developers, who will use them as is.

However, I very much doubt this pie-in-the-sky plan of 4 FPGAs will happen before 2010. AMD doesn't have enough capacity for its current chips, and the die size will go up if they start adding stuff to it.

Never mind what Fusion die sizes will be like; it'll either be less cache for the CPUs with the GPUs bundled in, or higher prices.
Useless geek posting above.

It's Ace Pace.
User avatar
Arrow
Jedi Council Member
Posts: 2283
Joined: 2003-01-12 09:14pm

Post by Arrow »

Tiger Ace wrote:Thank SDN for getting me out of family get-togethers.

On topic, I don't expect game developers to be the ones programming these chips; rather, you'd find new middleware developers who will license their FPGA designs to game developers, who will use them as is.
Then what's the point of using FPGAs, when you can use dedicated silicon instead? You already have Ageia's physics chip, and another company (I forget the name) is working on an AI chip. The whole point of using FPGAs is that they are reconfigurable and let you use custom logic without custom hardware. Not exposing that to game developers would make the FPGAs a waste.
However, I very much doubt this pie-in-the-sky plan of 4 FPGAs will happen before 2010. AMD doesn't have enough capacity for its current chips, and the die size will go up if they start adding stuff to it.

Never mind what Fusion die sizes will be like; it'll either be less cache for the CPUs with the GPUs bundled in, or higher prices.
You realize you're talking about the company that made 4x4. If they want to push the issue, they can do it. I just don't want to see the heatsinks and PSU you'll need. Edit: Not that you'll see it this year, but if they're already working on it, such a chip could be out in 2008.
Tiger Ace
Jedi Knight
Posts: 627
Joined: 2005-04-07 02:03am
Location: AWAY

Post by Tiger Ace »

Arrow wrote: Then what's the point of using FPGAs, when you can use dedicated silicon instead? You already have Ageia's physics chip, and another company (I forget the name) is working on an AI chip. The whole point of using FPGAs is that they are reconfigurable and let you use custom logic without custom hardware. Not exposing that to game developers would make the FPGAs a waste.
I never said devs wouldn't use it, but I expect they'd prefer to get stuff from other companies who would program those FPGAs as AI, sound or whatever processors, and game developers could configure something as needed. As you said, it's quite complicated to program, and I doubt there are many game developers with experience in this.
Hell, it'll happen faster than game engine middleware happened.
You realize you're talking about the company that made 4x4. If they want to push the issue, they can do it. I just don't want to see the heatsinks and PSU you'll need. Edit: Not that you'll see it this year, but if they're already working on it, such a chip could be out in 2008.
If they do it, they'll be wasting more of their fab capacity when they could make a killing just by actually delivering more to retail.
AMD's primary stumbling block is capacity; even with far more efficient fabs, they can't keep up with demand, and that's hurting them. If they start creating chips which tie up more capacity, I'm not seeing how that improves the bottom line of anyone except Intel.
Useless geek posting above.

It's Ace Pace.
User avatar
Arrow
Jedi Council Member
Posts: 2283
Joined: 2003-01-12 09:14pm

Post by Arrow »

Tiger Ace wrote:I never said devs wouldn't use it, but I expect they'd prefer to get stuff from other companies who would program those FPGAs as AI, sound or whatever processors, and game developers could configure something as needed. As you said, it's quite complicated to program, and I doubt there are many game developers with experience in this.
Hell, it'll happen faster than game engine middleware happened.
Since I can see and make arguments on both sides of the issue, I'm going to table this for now. I want to learn more about how AMD plans on integrating FPGAs into their chips, before I make more comments.
If they do it, they'll be wasting more of their fab capacity when they could make a killing just by actually delivering more to retail.
AMD's primary stumbling block is capacity; even with far more efficient fabs, they can't keep up with demand, and that's hurting them. If they start creating chips which tie up more capacity, I'm not seeing how that improves the bottom line of anyone except Intel.
Just meeting your existing demand doesn't make you a leader and innovator; AMD needs to keep pushing new technologies if they want to be seen as an alternative to Intel, and honestly, it's the only way they can compete with Intel.
User avatar
atg
Jedi Master
Posts: 1418
Joined: 2005-04-20 09:23pm
Location: Adelaide, Australia

Post by atg »

Arrow wrote:
atg wrote:I posted about this a while back after meeting with an AMD rep. What he specifically told me was that they were aiming for a chip with 4 Athlon64 cores and four FPGA cores that can be programmed by software to specialise in certain areas, e.g. AI in a game.
WTF?!?! FPGA cores? Any idea how big? And did AMD figure out some technical way to drive down the price, or are they going purely with economies of scale? Because given the cost of higher-end Xilinx chips, I can easily see such a beast costing as much as or more than today's latest, greatest gaming rig.

Did he also say anything about programming tools for these FPGAs? Because VHDL, Verilog and tools like System Generator aren't going to cut it with most game developers, just because of the specialized knowledge and massive amounts of time needed to use those tools and test the resulting images. Would it be something like Impulse C (which is C modified to support true parallel execution)?
Details on the chips were a bit scarce, but he did say they were looking at a release in late '07. With the 4x4 system, I would think that initially their plan would be for two physical chips, with two Athlon64 cores and two FPGAs in each.

I don't know much about the programming tools needed, and the rep didn't give any detail regarding this.
Marcus Aurelius: ...the Swedish S-tank; the exception is made mostly because the Swedes insisted really hard that it is a tank rather than a tank destroyer or assault gun
Ilya Muromets: And now I have this image of a massive, stern-looking Swede staring down a bunch of military nerds. "It's a tank." "Uh, yes Sir. Please don't hurt us."