Question about ATI and Nvidia.
What is their status currently? Who is the market leader? Are they stable or is one of them looking to collapse?
The reason I ask is that depending who you talk to, either Nvidia or ATI is about to bite the dust.
I'm planning to get a decent vid card in the $100 to $150 range but I don't want to get one from a company that will go bust.
ASVS('97)/SDN('03)
"Whilst human alchemists refer to the combustion triangle, some of their orcish counterparts see it as more of a hexagon: heat, fuel, air, laughter, screaming, fun." Dawn of the Dragons
ASSCRAVATS!
"Whilst human alchemists refer to the combustion triangle, some of their orcish counterparts see it as more of a hexagon: heat, fuel, air, laughter, screaming, fun." Dawn of the Dragons
ASSCRAVATS!
- Fingolfin_Noldor
- Emperor's Hand
- Posts: 11834
- Joined: 2006-05-15 10:36am
- Location: At the Helm of the HAB Star Dreadnaught Star Fist
Re: Question about ATI and Nvidia.
AMD and Nvidia are doing fine at the moment, and I have yet to hear of AMD combusting, although AMD has lost quite a bit of money over the last year. AMD seems to be more bang for the buck at the moment.
STGOD: Byzantine Empire
Your spirit, diseased as it is, refuses to allow you to give up, no matter what threats you face... and whatever wreckage you leave behind you.
Kreia
- Chris OFarrell
- Durandal's Bitch
- Posts: 5724
- Joined: 2002-08-02 07:57pm
- Contact:
Re: Question about ATI and Nvidia.
Intel's 'Core' line has *really* hammered AMD... though ATI are making something of a strong comeback against Nvidia right now, which may help them a bit.
I also think AMD are still dragging their heels on going to 45nm, but I could be wrong there.
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Question about ATI and Nvidia.
The new ATI products are arguably better than the Nvidia ones (they win on most performance metrics at most price points). The new Phenom II processors are finally competitive with Intel at low to medium price points, though of course they're still significantly behind in the high end and laptop space. AMD's finances are in a relatively good state considering, so I think they'll be ok for now.
Nvidia OTOH is screwed, because they don't have a response to the inevitable CPU-GPU integration taking place, their products have massive die sizes (and hence high production costs) for their performance, they just took a major financial and goodwill hit from the rash of defective chips (underfill issue), and they're even losing market share in the chipset business. They may look superficially healthy, but I'd judge Nvidia as more likely to go down in flames in the next five years.
Enigma wrote:I'm planning to get a decent vid card in the $100 to $150 range but I don't want to get one from a company that will go bust.

Neither is going to go bust in the next year, probably two. That's as much as you can ask for in IT - and frankly it doesn't matter much once your card is out of warranty.
- Fingolfin_Noldor
- Emperor's Hand
- Posts: 11834
- Joined: 2006-05-15 10:36am
- Location: At the Helm of the HAB Star Dreadnaught Star Fist
Re: Question about ATI and Nvidia.
Chris OFarrell wrote:I also think AMD are still dragging their heels on going to 45nm, but I could be wrong there.

AMD has already launched 45nm Opterons, and Phenom II was recently launched.
STGOD: Byzantine Empire
Your spirit, diseased as it is, refuses to allow you to give up, no matter what threats you face... and whatever wreckage you leave behind you.
Kreia
Re: Question about ATI and Nvidia.
Starglider wrote:The new ATI products are arguably better than the Nvidia ones (they win on most performance metrics at most price points). The new Phenom II processors are finally competitive with Intel at low to medium price points, though of course they're still significantly behind in the high end and laptop space. AMD's finances are in a relatively good state considering, so I think they'll be ok for now.

ATI's cards are better at most price points, true. Nvidia are doing slightly better up at the absolute top end, but it's not a place that most users would be thinking about. While Phenom II is somewhat decent, they're going to be in a really vulnerable position if Intel axes Core 2 prices, and we'll also be seeing affordable versions of Core i7 later this year.
Starglider wrote:Nvidia OTOH is screwed, because they don't have a response to the inevitable CPU-GPU integration taking place, their products have massive die sizes (and hence high production costs) for their performance, they just took a major financial and goodwill hit from the rash of defective chips (underfill issue), and they're even losing market share in the chipset business. They may look superficially healthy, but I'd judge Nvidia as more likely to go down in flames in the next five years.

Whoa, whoa, back up a bit there. The first CPU-GPU chips are going to be low-end products, used in cheaper systems and laptops. They're going to be the successors to integrated graphics chipsets on motherboards today, not high-end GPUs. There will still be a high-end GPU market for many years to come, especially when you consider that both Nvidia and ATI's top-end GPUs have roughly twice the transistor count of a Core i7, and hugely complex memory subsystems. With present and near-future manufacturing techniques, there's no way that CPU-GPU combo chips are going to sweep away add-on cards anytime soon.
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Question about ATI and Nvidia.
DaveJB wrote:Whoa, whoa, back up a bit there. The first CPU-GPU chips are going to be low-end products, used in cheaper systems and laptops.

Gamers may consider them 'low end' but the fact of the matter is that 'low end' hardware (both chipset-integrated and low-cost OEM cards) accounts for something like ten times the volume of $100+ cards and two thirds the gross revenue. The price and performance advantage of package level integration is pretty much insurmountable at the low end. Meanwhile games are steadily approaching photorealism and the fraction of gamers who actually feel compelled to buy the latest shiny every year is slowly shrinking. There will always be a few but not enough to base a company on given how high chip development costs have gotten. Finally there is the question of whether Intel's Larrabee technology causes much of the developer effort to shift over to x86 based rendering engines. Frankly I hope it does (the architecture is far more suited to real time ray-tracing and non-graphics use). AMD can match that, albeit lagging a bit, but Nvidia don't have an instruction set license and would be screwed.
DaveJB wrote:They're going to be the successors to integrated graphics chipsets on motherboards today, not high-end GPUs. There will still be a high-end GPU market for many years to come,

Unless Larrabee bombs, the successor will be a highly parallel x86 chip that you can plug into a CPU socket and that sits on the QuickPath or HyperTransport grid with the other CPUs. The chipset will only be required to handle frame buffer output into the video interface. AMD have been talking about this for years, Intel is finally making it happen, Nvidia are effectively locked out of the market. Unless Intel quite literally take pity on them and sell them the relevant licenses, it's going to stay that way.
DaveJB wrote:and hugely complex memory subsystems.

Specialised cache hierarchies and data pipelines will remain but the concept of specialised graphics memory (e.g. the GDDR series) is starting to die now and frankly, good riddance.
DaveJB wrote:With present and near-future manufacturing techniques, there's no way that CPU-GPU combo chips are going to sweep away add-on cards anytime soon.

True but GPUs in CPU sockets can and probably will.
Re: Question about ATI and Nvidia.
Starglider wrote:Gamers may consider them 'low end' but the fact of the matter is that 'low end' hardware (both chipset-integrated and low-cost OEM cards) accounts for something like ten times the volume of $100+ cards and two thirds the gross revenue. The price and performance advantage of package level integration is pretty much insurmountable at the low end.

We've had integrated graphics chipsets since the Intel 810 in 1999 (or even earlier - Cyrix were offering their MediaGX CPU with on-board graphics back in 1996). We've had basic gaming level integrated chipsets since nVidia introduced the nForce series back in 2001 or 2002. And yet the market for graphics boards, even the ones at the low end, still exists.
Starglider wrote:Meanwhile games are steadily approaching photorealism and the fraction of gamers who actually feel compelled to buy the latest shiny every year is slowly shrinking. There will always be a few but not enough to base a company on given how high chip development costs have gotten. Finally there is the question of whether Intel's Larrabee technology causes much of the developer effort to shift over to x86 based rendering engines. Frankly I hope it does (the architecture is far more suited to real time ray-tracing and non-graphics use). AMD can match that, albeit lagging a bit, but Nvidia don't have an instruction set license and would be screwed.

So your entire argument is really predicated on the fact that Intel could turn the graphics industry upside-down with Larrabee. That's really a pretty bold prediction, considering that it took the industry years to go from having no 3D graphics at all in the pre-3DFX era until the point where a 3D card was mandatory (about 1999 for most games, IIRC).
Starglider wrote:Unless Larrabee bombs, the successor will be a highly parallel x86 chip that you can plug into a CPU socket and that sits on the QuickPath or HyperTransport grid with the other CPUs. The chipset will only be required to handle frame buffer output into the video interface. AMD have been talking about this for years, Intel is finally making it happen, Nvidia are effectively locked out of the market. Unless Intel quite literally take pity on them and sell them the relevant licenses, it's going to stay that way.

Seriously, take a look at what you're proposing. Since 3D graphics took off in a big way in 1996, about the biggest changes we've seen in basic graphics card design are the introduction of GPU processing, the changes in expansion slot design (PCI-AGP-PCIe), and maybe the use of add-on power connectors. You're predicting that within the next few years, the entire methodology of both graphics processor design and game engine design is suddenly going to completely change because... Intel wants that to happen?
Your entire argument smacks of technological determinism. Just because some new whizz-bang technology becomes available doesn't immediately mean everything that went before is suddenly going to become obsolete and get thrown in the bin. That was Intel's thinking when they brought out the Itanium, which hasn't been a serious success even in the supercomputer industry it was intended for - to say nothing of desktop computing.
Starglider wrote:Specialised cache hierarchies and data pipelines will remain but the concept of specialised graphics memory (e.g. the GDDR series) is starting to die now and frankly, good riddance.

Prove it. As I stated above, we've had memory-sharing graphics chipsets for a decade now, and there is no way any motherboard-integrated solution will match the memory performance of a dedicated card in the foreseeable future.
Why will it "probably" happen? Explain the benefit of this approach, and why it's likely to own current add-on board designs.True but GPUs in CPU sockets can and probably will.
Re: Question about ATI and Nvidia.
DaveJB wrote:We've had integrated graphics chipsets since the Intel 810 in 1999 (or even earlier - Cyrix were offering their MediaGX CPU with on-board graphics back in 1996). We've had basic gaming level integrated chipsets since nVidia introduced the nForce series back in 2001 or 2002. And yet the market for graphics boards, even the ones at the low end, still exists.

That market is slowly being strangled by integrated graphics processors which are getting better and better every generation. It's essentially a matter of time until integrated solutions become good enough for the vast majority of the market (including most game players) as more transistors become available and the line blurs between CPU and GPGPU. Ever notice what happened to the sound card or network card market? Outside of the professional and high-end gamer market for the former and the server market for the latter, they have essentially disappeared. Integrated is good enough, and mark my words, it'll happen to the GPU market too.
This is not a new observation; it has been made as far back as 1968.
DaveJB wrote:So your entire argument is really predicated on the fact that Intel could turn the graphics industry upside-down with Larrabee. That's really a pretty bold prediction, considering that it took the industry years to go from having no 3D graphics at all in the pre-3DFX era until the point where a 3D card was mandatory (about 1999 for most games, IIRC).

It probably will. Software architects will no longer have to deal with strange and esoteric instruction sets (the current GPGPU ones) and frameworks; they'll simply have x86. There are a huge number of tools for x86, and while legacy projects won't transition, it'll be incredibly attractive for new ones. After all, billions of dollars and decades of work have gone into making the (horrid) x86 ISA work very, very well. This is the same gambit Intel is making with their Atom processor and their hopes of displacing ARM and PowerPC in the embedded world.
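To make that concrete, here's a toy sketch (purely illustrative - the function and names are invented, not any real vendor API) of what "just x86" buys you: the hot loop is ordinary C++ that existing compilers, debuggers and profilers already understand, rather than a kernel written in a vendor-specific GPU language and launched through a separate framework.

```cpp
// Toy sketch only: a data-parallel "kernel" expressed as plain x86 C++.
// On a many-core x86 part this loop gets spread across cores and SIMD lanes
// by the compiler or a threading runtime - no vendor shader ISA involved.
#include <cstddef>
#include <vector>

void scale_add(std::vector<float>& y, const std::vector<float>& x, float a) {
    for (std::size_t i = 0; i < y.size(); ++i)
        y[i] = a * x[i] + y[i];  // the same arithmetic a GPGPU kernel would do
}
```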
DaveJB wrote:Seriously, take a look at what you're proposing. Since 3D graphics took off in a big way in 1996, about the biggest changes we've seen in basic graphics card design are the introduction of GPU processing, the changes in expansion slot design (PCI-AGP-PCIe), and maybe the use of add-on power connectors. You're predicting that within the next few years, the entire methodology of both graphics processor design and game engine design is suddenly going to completely change because... Intel wants that to happen?

No, he's saying that Intel's proposed x86-GPU will be "good enough," easier to program for, and come with enough of an installed base to get its start, and thus it will doom anything else.
DaveJB wrote:Your entire argument smacks of technological determinism. Just because some new whizz-bang technology becomes available doesn't immediately mean everything that went before is suddenly going to become obsolete and get thrown in the bin. That was Intel's thinking when they brought out the Itanium, which hasn't been a serious success even in the supercomputer industry it was intended for - to say nothing of desktop computing.

IA64/VLIW/EPIC didn't offer anything particularly compelling for the market. x86-GPU will: the whole infrastructure around the x86 ISA.
DaveJB wrote:Prove it. As I stated above, we've had memory-sharing graphics chipsets for a decade now, and there is no way any motherboard-integrated solution will match the memory performance of a dedicated card in the foreseeable future.

Why not? CPUs have finally received access to very high performance memory and interconnects, the last real barrier. AGP's asymmetrical nature meant it could never be used for GPGPU; PCIe's bidirectional bandwidth made it possible, and integration on-die or via a separate socket seems the next reasonable step.
DaveJB wrote:Why will it "probably" happen? Explain the benefit of this approach, and why it's likely to own current add-on board designs.

Direct access to the CPU, for one, via Coherent HyperTransport or QuickPath Interconnect. Greater availability of system memory over high-speed links. Tighter integration.
- Starglider
- Miles Dyson
- Posts: 8709
- Joined: 2007-04-05 09:44pm
- Location: Isle of Dogs
- Contact:
Re: Question about ATI and Nvidia.
DaveJB wrote:We've had integrated graphics chipsets since the Intel 810 in 1999 (or even earlier - Cyrix were offering their MediaGX CPU with on-board graphics back in 1996).

Good example actually. Originally, all PC graphics (other than character mode) required huge plug-in cards (I remember the original VGA cards being monstrous double-stacked PCBs - something we didn't see again until very recently). It steadily migrated onto the chipset, to the point that by 2000 almost no one bothered buying a graphics card just for 2D graphics. The MediaGX even started to migrate graphics onto the CPU package for cost saving on low end systems. The only thing that saved discrete graphics was the sudden introduction of hardware 3D acceleration (formerly the domain of expensive workstations) into the PC and particularly gaming space. Initially the rate of progress was extreme, leaving cheap integrated solutions in the dust. CPUs didn't have the transistor budget to do it on chip, and even chipsets had problems because standard DRAM sucked compared to the fast specialist DRAM on the graphics cards. It was harder to keep business users buying discrete graphics, but the threat of Vista and its 'need' for 3D acceleration kept it going a little longer.
Now, however, 3D graphics are following the same trend 2D graphics did. Transistor budgets for CPUs are now so ample that we're facing core inflation - and it's hard to make a credible argument that consumers need more than four cores (it's hard enough to argue for four right now). DDR3 is so fast, particularly in triple-channel i7 configuration, that specialised memories are only marginally useful. New applications of GPU-style hardware, e.g. physics, are exposing the limitations of hanging the co-processor off a PCI bus on the other side of a bridge, instead of directly connecting it to the processor.
DaveJB wrote:We've had basic gaming level integrated chipsets since nVidia introduced the nForce series back in 2001 or 2002. And yet the market for graphics boards, even the ones at the low end, still exists.

But it has been growing at a rate considerably slower than the growth of PC hardware sales as a whole, and may soon start to decline.
DaveJB wrote:So your entire argument is really predicated on the fact that Intel could turn the graphics industry upside-down with Larrabee.

Nope. Nvidia is going to be squeezed at the low end regardless; in fact it's already happening, as shown by their rapidly declining profits. However if Larrabee fails then they have a chance to survive at the high end, if they can keep product design costs under control. If it succeeds they're in real trouble.
DaveJB wrote:That's really a pretty bold prediction, considering that it took the industry years to go from having no 3D graphics at all in the pre-3DFX era until the point where a 3D card was mandatory (about 1999 for most games, IIRC).

So what? Voodoo was a speciality card for gamers. Software rendering was nearly as good to start with, and workstation stuff was its own world. It took years to build demand because people had to be sold on the whole concept of a 3D card. Larrabee isn't being pushed by some tiny startup; it's being rolled out by the giant that virtually dictates the shape of the PC industry. Intel can certainly fail, e.g. RDRAM and Itanium, but those were products that fundamentally sucked. By contrast Larrabee is looking good so far, lots of 3D engine designers are excited about it, it's selling into an established market and Intel has plenty of pricing power to make it attractive at launch.
DaveJB wrote:Seriously, take a look at what you're proposing.

I'm not proposing it. Intel (and to a lesser extent AMD) are proposing it, and quite a few expert programmers are saying 'wow, just what we always wanted'. Understandably.
DaveJB wrote:Since 3D graphics took off in a big way in 1996, about the biggest changes we've seen in basic graphics card design are the introduction of GPU processing, the changes in expansion slot design (PCI-AGP-PCIe), and maybe the use of add-on power connectors.

No, not even close. The slot interface hardly matters for software and isn't that big a deal even for motherboard designers - adding a power connector is an insignificant detail. However the internal design of GPUs has changed massively over that time, far faster than the rate at which CPU design has changed. The early cards were simple rasterisers. The addition of progressively more geometry processing (the famous 'transform and lighting' of the first GeForce), then progressively more parameters in the rendering functions (and options in the pipeline), then full programmability in all its various incarnations... frankly it's been a whirlwind of change. Switching to something like Larrabee at this point is no more of a dislocation than the original switch from fixed to programmable pipelines.
DaveJB wrote:You're predicting that within the next few years, the entire methodology of both graphics processor design and game engine design is suddenly going to completely change because... Intel wants that to happen?

Rendering is a relatively small part of a game engine these days (significant, but developers can and do swap out rendering paths without affecting the majority of the engine code, never mind the game code). But yes, it's going to change, for two reasons: squeezing more quality out of conventional triangle-based rasterisation is starting to reach diminishing returns, and doing things like physics processing on a conventional GPU is essentially a nasty hack. The engines developers want to use, with ray-traced environments and powerful full-environment physics models, should be much easier to build on Larrabee and the AMD equivalent.
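Just to illustrate the ray-tracing point (a toy sketch, not taken from any real engine): the core of a ray tracer is per-ray arithmetic like the sphere test below - ordinary branchy floating-point code that maps naturally onto a general-purpose core and not at all onto a fixed rasterisation pipeline.

```cpp
// Toy example: the kind of per-ray test a ray-traced engine runs millions of
// times per frame. Plain scalar maths - nothing here wants a rasteriser.
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Does a ray (origin o, unit direction d) hit a sphere of radius r centred at c?
// If so, returns true and writes the distance t to the nearest intersection.
bool raySphere(const Vec3& o, const Vec3& d, const Vec3& c, float r, float& t) {
    Vec3 oc = { o.x - c.x, o.y - c.y, o.z - c.z };
    float b = dot(oc, d);                        // projection of oc onto the ray
    float disc = b * b - (dot(oc, oc) - r * r);  // quadratic discriminant (over 4)
    if (disc < 0.0f) return false;               // ray misses the sphere
    t = -b - std::sqrt(disc);                    // nearest root
    return t > 0.0f;                             // only count hits in front of the origin
}
```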
DaveJB wrote:Your entire argument smacks of technological determinism. Just because some new whizz-bang technology becomes available doesn't immediately mean everything that went before is suddenly going to become obsolete and get thrown in the bin.

Ah, but Nvidia is already losing the low end. If they are rendered irrelevant in the high end as well, what do they have left? A rapidly shrinking niche.
Incidentally, your argument sounds exactly like what a Matrox executive would have said in 1997, when asked how they were going to match the 3DFX Voodoo. '3D graphics cards in every PC? Ridiculous technological determinism'. Of course by the time they woke up their own 3D efforts were too little too late, and now the company is a shadow of its former self.
DaveJB wrote:That was Intel's thinking when they brought out the Itanium, which hasn't been a serious success even in the supercomputer industry it was intended for - to say nothing of desktop computing.

That's true. However there were three basic reasons for the Itanium's failure that hopefully don't apply here. Firstly, the thing just sucked: it arrived late and never ran at a reasonable clock speed. Secondly, the x86 compatibility didn't work very well, nor did the compilers - programmers had to learn a whole new platform and port over all their application code and libraries. Thirdly, it was very expensive. By comparison Larrabee uses a well-known instruction set with very mature tools (x86), 3D engine code is already rewritten from scratch every two or three years anyway - and it may well be cheaper than the discrete competition.
Starglider wrote:Specialised cache hierarchies and data pipelines will remain but the concept of specialised graphics memory (e.g. the GDDR series) is starting to die now and frankly, good riddance.

DaveJB wrote:Prove it. As I stated above, we've had memory-sharing graphics chipsets for a decade now, and there is no way any motherboard-integrated solution will match the memory performance of a dedicated card in the foreseeable future.

I should note that the launch version of Larrabee will be on a discrete card for this reason, plus the fact that mass market motherboard manufacturers will take a while to switch to DP style boards. However three channels of DDR3-1333 on a Nehalem system already deliver a bandwidth of 32 GB/s. By the time Larrabee is out there will be dual socket i7 enthusiast platforms with six channels of DDR3-1600; that's 77 GB/s, nearly the bandwidth of a GTX 260 (to far more actual memory of course, and with lower latency) even before overclocking.
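(For anyone checking those figures: theoretical peak is just transfers per second times 8 bytes per 64-bit channel times the number of channels - sustained bandwidth is of course lower. A quick sanity check:)

```cpp
// Sanity check of the peak-bandwidth figures quoted above.
// DDR3-xxxx = million transfers per second; each 64-bit channel moves 8 bytes per transfer.
#include <cstdio>

static double peakGBs(double megatransfers, int channels) {
    return megatransfers * 1e6 * 8.0 * channels / 1e9;
}

int main() {
    std::printf("3 x DDR3-1333: %.0f GB/s\n", peakGBs(1333, 3));  // ~32 GB/s
    std::printf("6 x DDR3-1600: %.0f GB/s\n", peakGBs(1600, 6));  // ~77 GB/s
    return 0;
}
```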
DaveJB wrote:Why will it "probably" happen? Explain the benefit of this approach, and why it's likely to own current add-on board designs.

CPU-socket designs have three basic advantages. Firstly the latency of communication with the main CPU cores is much lower, which is nearly irrelevant for rendering but helps a lot with physics, AI and similar tasks that are currently being pushed toward GPUs. Secondly it allows direct access to system memory (and not a fixed partition of it the way current solutions work), which removes the nasty problem of managing content caching and streaming to the GPU and again helps with performance for many of the GP-GPU tasks. Thirdly it potentially allows for the elimination of PCI Express card slots altogether, for a cheaper and smaller PC. That's definitely going to happen at the low-end - to some extent with the transition from desktops to laptops it already has - but with so much functionality integrated onto the motherboard these days it's going to be a possibility even for mainstream 'gaming' machines.
- Joviwan
- Jedi Knight
- Posts: 580
- Joined: 2007-09-09 11:02pm
- Location: Orange frapping county, Californeea
Re: Question about ATI and Nvidia.
Enigma wrote:What is their status currently? Who is the market leader? Are they stable or is one of them looking to collapse?
The reason I ask is that depending who you talk to, either Nvidia or ATI is about to bite the dust.
I'm planning to get a decent vid card in the $100 to $150 range but I don't want to get one from a company that will go bust.
As said earlier, both are relatively stable.
If you want the best bang for your buck, you'd probably be doing pretty darn well if you pick up a high-end Radeon 3870; very low power consumption, enough juice for most games, and they're about a hundred bucks on Newegg for half a gig of video memory.
Drooling Iguana: No, John. You are the liberals.
Phantasee: So extortion is cooler and it promotes job creation!
Ford Prefect: Maybe there can be a twist ending where Vlad shows up for the one on one duel, only to discover that Sun Tzu ignored it and burnt all his crops.
Re: Question about ATI and Nvidia.
That market is slowly being strangled by integrated graphics processors which are getting better and better every generation. It's essentially a matter of time until integrated solutions become good enough for the vast majority of the market

Integrated chips have been "good enough" for most games for a while now; it's only really Intel's sloppy efforts that have poisoned the well for IGPs. Nvidia's newest IGP that's used in the new MacBook can even play Crysis, albeit just barely.
Starglider wrote:Now, however, 3D graphics are following the same trend 2D graphics did. Transistor budgets for CPUs are now so ample that we're facing core inflation - and it's hard to make a credible argument that consumers need more than four cores (it's hard enough to argue for four right now). DDR3 is so fast, particularly in triple-channel i7 configuration, that specialised memories are only marginally useful.

Isn't latency going to be the killer in that scenario though? I suppose they could optimize the CPU and GPU's memory access, get them timed better. I know that the shared memory model does work well in the Xbox 360, but that's a bit more of a controlled environment than a PC.
Starglider wrote:New applications of GPU-style hardware, e.g. physics, are exposing the limitations of hanging the co-processor off a PCI bus on the other side of a bridge, instead of directly connecting it to the processor.

I thought the big limitation there was actually that GPU physics are relatively new, and the designers haven't had time to properly accommodate it? It'll be interesting to see what happens when the mid-range Core i7 shows up, since that'll have a direct PCIe link from the GPU to the CPU, which could provide a good preview of the future direction of graphics processing.
Starglider wrote:Nope. Nvidia is going to be squeezed at the low end regardless; in fact it's already happening, as shown by their rapidly declining profits. However if Larrabee fails then they have a chance to survive at the high end, if they can keep product design costs under control. If it succeeds they're in real trouble.

Suppose it depends how well they adapt - even if Larrabee is a major success, it'll take a few years for the industry to change around it.
Starglider wrote:No, not even close. The slot interface hardly matters for software and isn't that big a deal even for motherboard designers - adding a power connector is an insignificant detail. However the internal design of GPUs has changed massively over that time, far faster than the rate at which CPU design has changed. The early cards were simple rasterisers. The addition of progressively more geometry processing (the famous 'transform and lighting' of the first GeForce), then progressively more parameters in the rendering functions (and options in the pipeline), then full programmability in all its various incarnations... frankly it's been a whirlwind of change. Switching to something like Larrabee at this point is no more of a dislocation than the original switch from fixed to programmable pipelines.

I was actually talking about 3D board design, not 3D chip design; I should probably have made that distinction clear. But having said that, Larrabee's internal architecture is from what I understand a much bigger jump than any of those things you mention.
Starglider wrote:Rendering is a relatively small part of a game engine these days (significant, but developers can and do swap out rendering paths without affecting the majority of the engine code, never mind the game code). But yes, it's going to change, for two reasons: squeezing more quality out of conventional triangle-based rasterisation is starting to reach diminishing returns, and doing things like physics processing on a conventional GPU is essentially a nasty hack. The engines developers want to use, with ray-traced environments and powerful full-environment physics models, should be much easier to build on Larrabee and the AMD equivalent.

Fair enough if we're talking about ray-tracing; I know that current GPUs fail most epically when it comes to that. It's gonna depend on whether Larrabee's performance is up to the job I suppose, which is something we can't really currently predict, even if it is easy to develop for. Its performance in current games will also be a big factor, since I can't really see Intel pulling off 3DFX's dual-board trick if Larrabee turns out to either suck in or not be compatible with current games.
Starglider wrote:Ah, but Nvidia is already losing the low end. If they are rendered irrelevant in the high end as well, what do they have left? A rapidly shrinking niche.

I wouldn't say they're completely impotent yet, considering they managed a pretty big design win with the new MacBook range. Their x86 licence problems could also be solved if their much-rumoured takeover of Via ever goes ahead.
Starglider wrote:Incidentally, your argument sounds exactly like what a Matrox executive would have said in 1997, when asked how they were going to match the 3DFX Voodoo. '3D graphics cards in every PC? Ridiculous technological determinism'. Of course by the time they woke up their own 3D efforts were too little too late, and now the company is a shadow of its former self.

Matrox were never that dumb - from what I remember they pretty much marketed the G200 as the perfect 2D companion to a Voodoo 2, and put all their efforts into their next chip, the G400, which was about the fastest pre-GeForce graphics chip (not counting that ridiculous dual Rage 128 board ATI came up with). Why they gave up the ghost after that is a different matter, but I don't think they were ever that seriously ignorant of the future of 3D graphics.
Starglider wrote:That's true. However there were three basic reasons for the Itanium's failure that hopefully don't apply here. Firstly, the thing just sucked: it arrived late and never ran at a reasonable clock speed. Secondly, the x86 compatibility didn't work very well

It was only the original release that was a really bad performer - the second was actually one of the fastest supercomputer chips you could get on its general release, it just wasn't a huge success. The whole x86 compatibility thing was kind of a joke anyway; I think the only reason it ever got included was because people expected it from a chip made by Intel.
Starglider wrote:nor did the compilers - programmers had to learn a whole new platform and port over all their application code and libraries. Thirdly, it was very expensive. By comparison Larrabee uses a well-known instruction set with very mature tools (x86), 3D engine code is already rewritten from scratch every two or three years anyway - and it may well be cheaper than the discrete competition.

May well be... still, it's a pretty big shift from the designs that people have been used to for the last decade. I don't think we'll see immediate benefits overnight.
Re: Question about ATI and Nvidia.
As for x86 graphics, MS has already started working on it. The first example is WARP10, available in Win7. It targets the same DX10 code that already exists; it's just ridiculously slow (as in, about as fast on an 8-core i7 as on integrated graphics). However, toss more, faster cores at it and it'll improve a lot.
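If anyone wants to try it, switching to WARP shouldn't need new rendering code - as far as I know you just ask for the WARP driver type when creating the device (the sketch below uses the Direct3D 11 entry point that ships alongside it; I'm going from memory, so check the SDK docs):

```cpp
// Sketch from memory - verify against the DirectX SDK before relying on it.
// Requests the WARP software rasteriser instead of the hardware driver; the
// shaders and draw calls stay the same DX10/DX11 code either way.
#include <d3d11.h>
#pragma comment(lib, "d3d11.lib")

bool createWarpDevice(ID3D11Device** device, ID3D11DeviceContext** context) {
    D3D_FEATURE_LEVEL level;
    HRESULT hr = D3D11CreateDevice(
        NULL,                  // default adapter
        D3D_DRIVER_TYPE_WARP,  // run the rasteriser on the CPU cores
        NULL, 0,               // no software module, no creation flags
        NULL, 0,               // accept the default feature levels
        D3D11_SDK_VERSION,
        device, &level, context);
    return SUCCEEDED(hr);
}
```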
"preemptive killing of cops might not be such a bad idea from a personal saftey[sic] standpoint..." --Keevan Colton
"There's a word for bias you can't see: Yours." -- William Saletan
"There's a word for bias you can't see: Yours." -- William Saletan