Ziggy Stardust wrote:Why exactly is the hardware so horrible on the PS3? I mean, do they just use shitty components or what?
The GPU in the Xbox 360 was bleeding edge when it was designed (in 2004). It was one of the very first mass-market unified shader designs, based on the first iteration of the very successful ATI VLIW5 architecture (the one used in the Radeon 2xxx, 3xxx, 4xxx, 5xxx and 6xxx series). Unified shaders are easier to program and ensure that all of the GPU's power is available for any workload. Kinect was only possible because the forward-looking GPU design left room for GPGPU gesture processing. The unified memory likewise makes programming much easier: there is no static partition between GPU-accessible and CPU-accessible RAM, and (for non-XNA games) no slow transfers between the two. The embedded DRAM daughter die was a very smart way to greatly increase effective GPU power (for typical late-2000s game engines) at low cost, by offloading framebuffer traffic from the main RAM interface and z-cull/blending from the main GPU. The GPU even has a tessellator, six years ahead of the general adoption of tessellation in PC games, although only a few exclusives ever used it.

The CPU was somewhat less innovative but still pretty decent: triple core at a time when dual core was just starting to appear in consumer PCs, and the high-clocked PowerPC cores had a better MIPS/transistor ratio than x86, were still relatively easy to program, and had a fast but straightforward vector instruction set.
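To make the unified-vs-split memory point concrete, here's a toy sketch (not real console code — all the names like `render_split` and `gpu_sum` are invented for illustration): in a split model the CPU has to explicitly upload data into a GPU-visible pool before GPU-side work can touch it, while in a unified model both sides just use the same buffer.

```python
def gpu_sum(vram):
    """Stands in for GPU work: may only read the GPU-visible buffer."""
    return sum(vram)

def render_split(sysram):
    """Split memory model: an explicit CPU -> VRAM copy is required."""
    vram = list(sysram)          # the "slow transfer" step
    return gpu_sum(vram)

def render_unified(shared):
    """Unified memory model: one pool, both CPU and GPU see it."""
    return gpu_sum(shared)       # no copy step at all

data = list(range(8))
assert render_split(data) == render_unified(data) == 28
```

The results are identical; the difference is that the split path spends time and memory on the copy, which is exactly the overhead the 360's unified design avoids.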
The Cell, on the other hand, is worthless crap: one decent general-purpose core (the PPE) and eight extremely limited, hard-to-program SPUs, of which games could only use six (one was disabled for manufacturing yield and one reserved for the OS). The whole SPU concept was a stupid compromise between a CPU and a GPU that did neither job well, and it was completely eclipsed by fully programmable GPUs. The PS3 was originally supposed to have two Cells, but when it became obvious that performance would be laughable, Sony grabbed whatever they could get cheap and integrate quickly from Nvidia and shoved it in as a GPU. That turned out to be a pre-obsolete G70, the tail end of the classic partially programmable split vertex/pixel shader architecture (about to be replaced by the all-new unified design of the GeForce 8xxx series). Sony were too rushed and/or incompetent to implement a unified memory architecture, so the console was left with a PC-style split between system RAM and VRAM. That's fine on PCs, which have several times more total memory than a console; on the PS3 it severely restricted texture sizes and deferred rendering techniques.
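A big part of why the SPUs were so hard to program: each one could only compute on its small local store (256 KB on the real chip), so anything larger had to be streamed in and out with explicit DMA transfers. Here's a toy Python sketch of that programming model — `LOCAL_STORE_WORDS` and the `dma_in` helper are invented stand-ins, not real Cell SDK calls:

```python
LOCAL_STORE_WORDS = 4          # pretend the local store holds 4 values

def dma_in(main_ram, offset, count):
    """Explicit transfer, main RAM -> local store (hypothetical helper)."""
    return main_ram[offset:offset + count]

def spu_process(main_ram):
    """Double every element, but only via local-store-sized chunks."""
    out = []
    for off in range(0, len(main_ram), LOCAL_STORE_WORDS):
        local = dma_in(main_ram, off, LOCAL_STORE_WORDS)  # DMA in
        local = [2 * x for x in local]                    # compute locally
        out.extend(local)                                 # "DMA" back out
    return out

def cpu_process(main_ram):
    """An ordinary core just addresses all of RAM directly."""
    return [2 * x for x in main_ram]

ram = list(range(10))
assert spu_process(ram) == cpu_process(ram)
```

Same result either way, but on the SPU the programmer has to hand-manage the chunking and the transfers (and, on real hardware, overlap the DMA with compute to get any performance) — work a conventional core or a modern GPU does for you.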
So in short, using bleeding-edge tech and making custom hardware can be very beneficial if you do it well (e.g. the X360 GPU choice and eDRAM setup) and very costly if you screw up (the Cell, the PS3 GPU choice).