DaveJB wrote: We've had integrated graphics chipsets since the Intel 810 in 1999 (or even earlier - Cyrix were offering their MediaGX CPU with on-board graphics back in 1996).
Good example, actually. Originally, all PC graphics (other than character mode) required huge plug-in cards (I remember the original VGA cards being monstrous double-stacked PCBs - something we didn't see again until very recently). Graphics steadily migrated onto the chipset, to the point that by 2000 almost no one bothered buying a graphics card just for 2D. The MediaGX even started moving graphics onto the CPU package to save cost on low-end systems. The only thing that saved discrete graphics was the sudden arrival of hardware 3D acceleration (formerly the domain of expensive workstations) in the PC and particularly the gaming space. Initially the rate of progress was extreme, leaving cheap integrated solutions in the dust. CPUs didn't have the transistor budget to do it on chip, and even chipsets had problems because standard DRAM sucked compared to the fast specialist DRAM on the graphics cards. It was harder to keep business users buying discrete graphics, but the threat of Vista and its 'need' for 3D acceleration kept that segment going a little longer.
Now, however, 3D graphics are following the same trend 2D graphics did. Transistor budgets for CPUs are now so ample that we're facing core inflation - and it's hard to make a credible argument that consumers need more than four cores (it's hard enough to argue for four right now). DDR3 is so fast, particularly in the triple-channel i7 configuration, that specialised memories are only marginally useful. New applications of GPU-style hardware, e.g. physics, are exposing the limitations of hanging the co-processor off a PCI Express bus on the other side of a bridge, instead of connecting it directly to the processor.
We've had basic gaming-level integrated chipsets since nVidia introduced the nForce series back in 2001 or 2002. And yet the market for graphics boards, even the ones at the low end, still exists.
But that market has been growing considerably more slowly than PC hardware sales as a whole, and may soon start to decline.
So your entire argument is really predicated on the fact that Intel could turn the graphics industry upside-down with Larrabee.
Nope. Nvidia is going to be squeezed at the low end regardless; in fact it's already happening, as shown by their rapidly declining profits. However, if Larrabee fails, they have a chance to survive at the high end, provided they can keep product design costs under control. If it succeeds, they're in real trouble.
That's really a pretty bold prediction, considering that it took the industry years to go from having no 3D graphics at all in the pre-3DFX era to the point where a 3D card was mandatory (about 1999 for most games, IIRC).
So what? Voodoo was a speciality card for gamers, software rendering was nearly as good to start with, and workstation stuff was its own world. It took years to build demand because people had to be sold on the whole concept of a 3D card. Larrabee isn't being pushed by some tiny startup; it's being rolled out by the giant that virtually dictates the shape of the PC industry. Intel can certainly fail - see RDRAM and Itanium - but those were products that fundamentally sucked. By contrast Larrabee is looking good so far: lots of 3D engine designers are excited about it, it's selling into an established market, and Intel has plenty of pricing power to make it attractive at launch.
Seriously, take a look at what you're proposing.
I'm not proposing it. Intel (and to a lesser extent AMD) are proposing it, and quite a few expert programmers are saying 'wow, just what we always wanted'. Understandably.
Since 3D graphics took off in a big way in 1996, about the biggest changes we've seen in the basic graphics card designs are the introduction of GPU processing, the changes in expansion slot design (PCI-AGP-PCIe), and maybe the use of add-on power connectors.
No, not even close. The slot interface hardly matters for software and isn't that big a deal even for motherboard designers - adding a power connector is an insignificant detail. However the internal design of GPUs has changed massively over that time, far faster than the rate at which CPU design has changed. The early cards were simple rasterisers. The addition of progressively more geometry processing (the famous 'transform and lighting' of the first GeForce), then progressively more parameters in the rendering functions (and options in the pipeline), then full programmability in all its various incarnations... frankly it's been a whirlwind of change. Switching to something like Larrabee at this point is no more of a dislocation than the original switch from fixed to programmable pipelines.
You're predicting that within the next few years, the entire methodology of both graphics processor design and game engine design is suddenly going to completely change because... Intel wants that to happen?
Rendering is a relatively small part of a games engine these days (significant, but developers can and do swap out rendering paths without affecting the majority of the engine code, never mind the game code). But yes, it's going to change, for two reasons: squeezing more quality out of conventional triangle-based rasterisation is starting to hit diminishing returns, and doing things like physics processing on a conventional GPU is essentially a nasty hack. The engines developers want to use, with ray-traced environments and powerful full-environment physics models, should be much easier to build on Larrabee and the AMD equivalent.
Your entire argument smacks of technological determinism. Just because some new whizz-bang technology becomes available doesn't immediately mean everything that went before is suddenly going to become obsolete and get thrown in the bin.
Ah, but Nvidia is already losing the low end. If they are rendered irrelevant in the high end as well, what do they have left? A rapidly shrinking niche.
Incidentally, your argument sounds exactly like what a Matrox executive would have said in 1997 when asked how they were going to match the 3DFX Voodoo: '3D graphics cards in every PC? Ridiculous technological determinism.' Of course, by the time they woke up their own 3D efforts were too little, too late, and now the company is a shadow of its former self.
That was Intel's thinking when they brought out the Itanium, which hasn't been a serious success even in the supercomputer market it was intended for - to say nothing of desktop computing.
That's true. However, there were three basic reasons for the Itanium's failure that hopefully don't apply here. Firstly, the thing just sucked: it arrived late and never ran at a reasonable clock speed. Secondly, the x86 compatibility didn't work very well and neither did the compilers, so programmers had to learn a whole new platform and port over all their application code and libraries. Thirdly, it was very expensive. By comparison, Larrabee uses a well-known instruction set (x86) with very mature tools, 3D engine code is already rewritten from scratch every two or three years anyway - and it may well be cheaper than the discrete competition.
Specialised cache hierarchies and data pipelines will remain but the concept of specialised graphics memory (e.g. the GDDR series) is starting to die now and frankly, good riddance.
Prove it. As I stated above, we've had memory-sharing graphics chipsets for a decade now, and there is no way any motherboard-integrated solution will match the memory performance of a dedicated card in the foreseeable future.
I should note that the launch version of Larrabee will be on a discrete card for this reason, plus the fact that mass-market motherboard manufacturers will take a while to switch to dual-socket (DP) style boards. However, three channels of DDR3-1333 on a Nehalem system already deliver a bandwidth of about 32 GB/s. By the time Larrabee is out there will be dual-socket i7 enthusiast platforms with six channels of DDR3-1600; that's roughly 77 GB/s, approaching the bandwidth of a GeForce GTX 260 (to far more actual memory of course, and with lower latency) even before overclocking.
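For what it's worth, here's the back-of-the-envelope arithmetic behind those figures as a tiny C snippet - assuming the usual 64-bit (8-byte) DDR3 channel width and ignoring real-world overheads like refresh and command scheduling, so these are theoretical peaks rather than sustained numbers:

```c
#include <stdio.h>

/* Peak bandwidth = channels x transfer rate (MT/s) x 8 bytes per 64-bit channel.
   Ignores refresh, command overhead and everything else that keeps real
   systems below peak. */
static double peak_gb_per_s(int channels, double mega_transfers_per_s)
{
    return channels * mega_transfers_per_s * 1e6 * 8.0 / 1e9;
}

int main(void)
{
    printf("Nehalem, 3 x DDR3-1333:     %.1f GB/s\n", peak_gb_per_s(3, 1333.0)); /* ~32 GB/s */
    printf("Dual socket, 6 x DDR3-1600: %.1f GB/s\n", peak_gb_per_s(6, 1600.0)); /* ~77 GB/s */
    return 0;
}
```

Sustained bandwidth will obviously be lower on both sides of the comparison, but the gap to a high-end discrete card is clearly closing.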
Why will it "probably" happen? Explain the benefit of this approach, and why it's likely to own current add-on board designs.
CPU-socket designs have three basic advantages. Firstly, the latency of communication with the main CPU cores is much lower, which is nearly irrelevant for rendering but helps a lot with physics, AI and similar tasks that are currently being pushed toward GPUs. Secondly, the co-processor gets direct access to system memory (not a fixed partition of it, the way current solutions work), which removes the nasty problem of managing content caching and streaming to the GPU and again helps performance for many of the GP-GPU tasks. Thirdly, it potentially allows for the elimination of PCI Express card slots altogether, for a cheaper and smaller PC. That's definitely going to happen at the low end - to some extent, with the transition from desktops to laptops, it already has - but with so much functionality integrated onto the motherboard these days it's going to be a possibility even for mainstream 'gaming' machines.
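To make the second point concrete, here's a deliberately simplified sketch in plain C (not any real GPU API, and certainly not Larrabee's actual programming model) contrasting the two memory models; simulate() and the buffer names are just placeholders I've made up for illustration:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define N (1 << 20)

/* Stand-in for the kind of physics/AI kernel discussed above. */
static void simulate(float *data, size_t n)
{
    for (size_t i = 0; i < n; i++)
        data[i] *= 0.99f;
}

int main(void)
{
    float *world = malloc(N * sizeof *world);   /* game state lives in system RAM */
    for (size_t i = 0; i < N; i++)
        world[i] = 1.0f;

    /* Discrete-card model: the co-processor only sees its own fixed memory
       partition, so each frame the relevant data is staged across the
       expansion bus, processed, then copied back. */
    float *card_partition = malloc(N * sizeof *card_partition);
    memcpy(card_partition, world, N * sizeof *world);  /* system RAM -> card memory */
    simulate(card_partition, N);
    memcpy(world, card_partition, N * sizeof *world);  /* card memory -> system RAM */
    free(card_partition);

    /* CPU-socket / shared-memory model: the co-processor addresses the same
       system RAM as the CPU cores, so it works on the data in place. */
    simulate(world, N);

    printf("world[0] = %f\n", world[0]);
    free(world);
    return 0;
}
```

In the shared-memory case the two copies - and all the bookkeeping that decides what to copy and when - simply disappear, which is exactly the content caching and streaming headache I mentioned.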