
What the fuck is wrong with videocard names?

Posted: 2006-10-11 04:39pm
by Darth Wong
Seriously, shouldn't it be reasonable to expect that if the number is higher, then the performance should be higher? If I'm going through a computer store and I see a Radeon X1300 Pro next to a Radeon X800 GTO, wouldn't it be perfectly reasonable of me to assume that the X1300 Pro is significantly faster, rather than being significantly slower?

What the fuck is up with these model numbers? NVidia does the same thing. A 6200 might be slower than a 5500, for example.

Posted: 2006-10-11 04:43pm
by Fingolfin_Noldor
The two companies follow a silly convention where the first digit signifies the series number. Then they make things worse by tacking on a variety of letter suffixes to signify how fast each card is.

The Radeon 9000 series was particularly bad. They had a bizarre lineup of 9100, 9200, 9500, 9600 (the 9500 is actually faster), 9700, 9800. They have improved somewhat, but now there's a shitload of XTX, XT, Pro, GT, GTX and what bloody not. Typically, now, the two companies maintain three categories of performance per series: x1300 and 7300 for entry level (ATI and Nvidia respectively), x1600 and 7600 for midrange, and x1800/x1900 and 7800/7900 for high end. Note though that there was a huge jump in performance between the x1800 and x1900, and similarly between the 7800 and 7900.

It takes time to work one's way around it. Try the online computer stores... they might be easier to navigate.
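
A rough way to picture the convention described above: the leading digit marks the generation, the hundreds digit marks the tier, and the suffix letters shuffle cards within a tier. Here is a minimal Python sketch of that reading (the cutoffs and the parsing are illustrative assumptions, not any official scheme):

```python
import re

# Illustrative only: rough tier cutoffs for the X1000 / GeForce 7 era described
# above (x300/7300 = entry level, x600/7600 = midrange, x800+/7800+ = high end).
TIER = {3: "entry level", 6: "midrange", 8: "high end", 9: "high end"}

def decode(name):
    """Rough split of e.g. 'X1300 Pro' or '7900 GTX' into generation, tier, suffix."""
    m = re.match(r"X?(\d{3,4})\s*([A-Za-z]*)", name.strip(), re.IGNORECASE)
    digits, suffix = m.group(1), m.group(2).upper()
    generation = digits[:-3] or "(previous gen)"   # leading digit(s), if any
    tier = TIER.get(int(digits[-3]), "unknown")    # the hundreds digit sets the tier
    return generation, tier, suffix

for card in ("X1300 Pro", "X800 GTO", "7600 GT", "7900 GTX"):
    print(card, "->", decode(card))
```

Run on the cards from the opening post, it shows why an X1300 Pro sits below an X800 GTO: only the tier digit and the suffix say anything about speed, while the bigger leading number just marks a newer generation.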

Posted: 2006-10-11 05:21pm
by Nephtys
For the Radeons, I think the new set's been like so...

Last Generation Low/Med/High End: X300/X600/X800.
This Gen Low/Med/High End: X1300, X1600, X1800.

As with NVidia...

Last Gen Med/High End: 6600GT/6800GT
New set: 7600GT/7800GT.

The first number is now a generation designator, it seems. I'm relatively new to this whole deal too. :P

Posted: 2006-10-11 05:24pm
by FSTargetDrone
It's like names given to automobiles. Fancy looking labels that are meaningless.

Posted: 2006-10-11 05:26pm
by Elessar
From my brief stint at ATI, I noticed that the internal naming conventions for GPUs actually made sense, in contrast to the marketing nonsense. Part of it is probably because there are a number of different cards aimed at different markets, but apparently, for business reasons, they decided against differentiating them in any meaningful way.

Whereas the 9800 was a full-featured gaming card for the PC, stuff like the 9200 was aimed at small size, low power consumption and everything else that was desired in a mobile chipset. It was ridiculous; fellow employees had trouble recalling the relative differences between model lines.

Now that the megahertz myth is pretty much debunked, aren't CPUs starting to adopt a confusing nomenclature as well? I recall some laughter about Pentium Extremes or whatnot.

Posted: 2006-10-11 05:27pm
by Mr Bean
They fucking had it with the Geforce 4 and pissed it away.

It was fucking perfect. Want the expensive performance beast? Fine, it's a Geforce X GTX. Want the durable performer? Get the Geforce X GT. Want the poor man's card? Get that $99.99 Geforce MX card. And of course the rich man's card, which cost twice as much as the GTX for 6% more performance? Why, get the Geforce X GTX ULTRA, it's even got ULTRA in its name!

It was @#$@ perfect! And then two generations later we had GLs, GFTXs, super duper fucking MX2s and all sorts of other shit.

Posted: 2006-10-11 05:39pm
by Fingolfin_Noldor
Part of the problem was that they had a propensity for designing two sets of graphics cores at the same time. So after the x1800/7800 came out, they threw in the x1900/7900.

Then there's the fact that they sort the cores by quality, deactivating the pipelines/shaders and whatever other features didn't come out well on the lower-quality chips.

Next, they make adjustments to the core clock and the memory speed. Then they divide the whole lot up and give the different bins different names.

Then they made things worse with further changes to the existing cores, etc., and now you even have the 7950 GTX (which is actually two boards smacked together) and the x1950 GTX.
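
For illustration only, here's a toy Python version of the binning process described above; every pipe count, clock and cutoff below is invented to show the idea and is not a real spec:

```python
import random

random.seed(0)  # repeatable demo

def bin_core(defective_pipes, total_pipes=16):
    """Fuse off bad pipelines, derate the clock, then hang a marketing suffix on it."""
    working = total_pipes - defective_pipes
    if working == total_pipes:
        return working, 650, "XTX"   # flawless die: every pipe enabled, top clock
    if working >= 12:
        return working, 600, "XT"    # a few pipes fused off, slightly lower clock
    if working >= 8:
        return working, 500, "GT"    # half-salvaged die, clocked down further
    return working, 0, "scrap"       # too damaged to sell at all

# Simulate a handful of dies coming off the line with random defect counts.
for _ in range(6):
    print(bin_core(random.choice([0, 0, 1, 3, 5, 9])))
```

One core design, three or four price points, and a different name stuck on each bin; that's where most of the suffix soup comes from.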

Posted: 2006-10-11 05:43pm
by Ace Pace
You also have plenty of board makers inventing their own letter designations to 'separate' their cards from the masses. Never mind that brand name alone is apparently enough to put makers like BFG ahead of the pack, regardless of whether they use OCed cores or not.

Posted: 2006-10-11 05:46pm
by Beowulf
The two-GPU card is the 7950GX2, not to be confused with the 7950GT, which is actually slower than the 7900GTX.

Posted: 2006-10-11 05:47pm
by Netko
Fingolfin_Noldor wrote:... 9200, 9500, 9600 (9500 is actually faster), 9700, 9800.
While I detest all the marketing bullshit surrounding graphics cards (the below-9500 cards and the Geforce 4 MX cause particular ire, since they are technically part of older generations while carrying names that put them in current generations -> cue howling from uninformed customers on various games' forums, where they feel betrayed by the developer), the 9600 is faster than the 9500 in today's games, but at the time of its introduction it was slower. This is because the 9500 is, IIRC, slightly faster clocked and thus had a performance advantage in the DX7 games that were the standard of the day. Today, however, DX9 games that use shaders are the standard, and the 9500's shader unit is somewhat crippled by a very low instructions-per-pass ability (obscured by the drivers, but it's there), while the 9600's is much better. I can't recall the exact figures, but I believe the difference was 20-30% in favour of the 9600 in shader-intensive games, while the 9500 enjoyed a 10% lead in non-shader games.
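
A back-of-envelope Python sketch of the trade-off Netko describes, using the rough percentages from his post (a simple linear blend, purely illustrative, not a real benchmark model):

```python
# Normalise the 9600 to 1.0 in both kinds of work; the 9500 figures follow the
# rough numbers above (about 10% ahead in non-shader work, roughly 20-30%
# behind in shader-heavy work).
r9600 = {"fixed_function": 1.00, "shader": 1.00}
r9500 = {"fixed_function": 1.10, "shader": 0.75}

def effective_speed(card, shader_share):
    """Blend the two throughputs by how much of the frame is shader work."""
    return (1 - shader_share) * card["fixed_function"] + shader_share * card["shader"]

for share in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"shader share {share:.0%}: 9500 = {effective_speed(r9500, share):.2f}, "
          f"9600 = {effective_speed(r9600, share):.2f}")
```

The crossover is the whole point: as games leaned harder on shaders over time, the same two cards swapped places, which is why the "which one is faster" answer changed.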

Posted: 2006-10-11 05:50pm
by Ypoknons
I've always loved Anandtech's video card guides. The video card branding would probably get an F in marketing these days...

Posted: 2006-10-11 05:52pm
by Ace Pace
The reason for the 9500 being faster was both a faster clock and the fact that it was basically a cut-down 9700 core: it could use all of its features and be VERY easily soft-modded into a 9700 Pro.

Therefore, the 9500 gained a reputation as a very, very good card, while in fact, unmodded, it was roughly equivalent to the 9600.

After the 9500, neither nVidia nor ATi ever let a soft-moddable card go to market. Until the same thing happened to ATi with the X800 GTO² (don't ask), which could apparently be modded into an X800 XT without touching the hardware.

Posted: 2006-10-11 05:59pm
by Arthur_Tuxedo
It's really quite ridiculous, and I'm willing to bet that if someone bothered to do a study, they'd find that it hurts sales. Any intelligent person can come up with much better naming systems than the drivel we've seen in the last few years.

Posted: 2006-10-11 06:28pm
by General Zod
It's about as bad as the latest processor naming conventions. You'll get chips rated at 3000, 4000, 2400, etc. that give you virtually no frakking clue as to their actual clock speed. A 3000 can be anywhere from 2.5 GHz to 3+ GHz, and so on, which makes it infuriating to tell what you're looking at at a glance. It would be nice if there were a universal naming convention for chips in general (graphics, CPUs, etc.) that was easy to decipher.

Posted: 2006-10-11 07:07pm
by Count Dooku
I bought my GPU and CPU based on several benchmark scores... To be completely honest, I don't have one company that I favor over another.

Posted: 2006-10-11 08:10pm
by Uraniun235
General Zod wrote: It's about as bad as the latest processor naming conventions. You'll get chips rated at 3000, 4000, 2400, etc. that give you virtually no frakking clue as to their actual clock speed. A 3000 can be anywhere from 2.5 GHz to 3+ GHz, and so on, which makes it infuriating to tell what you're looking at at a glance. It would be nice if there were a universal naming convention for chips in general (graphics, CPUs, etc.) that was easy to decipher.
Why do you need to know the clockspeed? I'm pretty sure the AMD system is actually one of the sanest; a 3200+ is pretty well definitely going to be superior to a 3000+.

Things get a bit dodgier when you try and compare gaming performance between A64 and X2 chips, but even within the X2 lineup bigger = better.

Posted: 2006-10-11 08:26pm
by General Zod
Uraniun235 wrote:
General Zod wrote: It's about as bad as the latest processor naming conventions. You'll get chips rated at 3000, 4000, 2400, etc. that give you virtually no frakking clue as to their actual clock speed. A 3000 can be anywhere from 2.5 GHz to 3+ GHz, and so on, which makes it infuriating to tell what you're looking at at a glance. It would be nice if there were a universal naming convention for chips in general (graphics, CPUs, etc.) that was easy to decipher.
Why do you need to know the clockspeed? I'm pretty sure the AMD system is actually one of the sanest; a 3200+ is pretty well definitely going to be superior to a 3000+.

Things get a bit dodgier when you try and compare gaming performance between A64 and X2 chips, but even within the X2 lineup bigger = better.
It's an old habit. I'm used to rating computer performance, and deciding whether or not I want to go for a chip, based on that. Ever since they switched to this new system, it's been confusing the hell out of me.

Posted: 2006-10-11 08:51pm
by RedImperator
FSTargetDrone wrote:It's like names given to automobiles. Fancy looking labels that are meaningless.
Yeah, but at least car names are memorable. "Corvette" and "Camry" are meaningless, but I have a general idea what the difference is between them, and the technical data is easier to remember when it's attached to a name instead of a series of letters and numbers.

Posted: 2006-10-11 09:42pm
by AniThyng
General Zod wrote: It's an old habit. I'm used to rating computer performance, and deciding whether or not I want to go for a chip, based on that. Ever since they switched to this new system, it's been confusing the hell out of me.
But you realise, of course, that it's equally meaningless to pit a 1.66 GHz Athlon XP against a 1.66 GHz P4, or against a 1.66 GHz Core 2 Duo...

Edit to save a post: It goes beyond obscure video card series number/letter conventions - take the 512MB x1300. The uninformed will see 512MB of video RAM and think it's great, when of course an x1300 is barely capable of using 64MB properly, never mind 512.

Posted: 2006-10-11 10:00pm
by Arthur_Tuxedo
Not equally meaningless, much more meaningless. CPU naming may have its foibles, but it's a shining beacon of righteousness compared to GPU naming.

Posted: 2006-10-11 10:20pm
by General Zod
AniThyng wrote:
But you realise, of course, that it's equally meaningless to pit a 1.66 GHz Athlon XP against a 1.66 GHz P4, or against a 1.66 GHz Core 2 Duo...
Obviously. But when you're familiar with how the chips work, knowing the clockspeed and bus, along with the core design (single core, dual, etc.), gives you a fairly good indication of how they'll stack up against other chips. But some meaningless number that simply suggests a range is worthless.

Posted: 2006-10-11 10:23pm
by Stark
For a moment I thought this thread would be about the retarded third-party named cards, or the whole 'poorly rendered titties' boxart. :)

I mean, who doesn't want a Sapphire Xtreme Masochistic with some circa-2001 CG tits on the box?

Posted: 2006-10-11 10:37pm
by Uraniun235
General Zod wrote:
AniThyng wrote:
But you realise, of course, that it's equally meaningless to pit a 1.66 GHz Athlon XP against a 1.66 GHz P4, or against a 1.66 GHz Core 2 Duo...
Obviously. But when you're familiar with how the chips work, knowing the clockspeed and bus, along with the core design (single core, dual, etc.), gives you a fairly good indication of how they'll stack up against other chips. But some meaningless number that simply suggests a range is worthless.
I don't think the product name necessarily needs to include the clockspeed, though, because that can invite things like someone seeing a 2.4 GHz P4 and a 2.0 GHz A64 and thinking the P4 is the superior chip, when in fact the A64 will blow it away.

Honestly, if you know enough to be able to recognize that comparing clockspeeds between two different architectures is next to meaningless, you ought to be looking up benchmarks for the chips on the internet so you can get an even better sense of differences in chip performance beyond "well this chip is 200MHz faster".

I do think Intel really stumbled with the whole Core Duo -> Core 2 Duo thing. That's just crazy.

Posted: 2006-10-11 10:52pm
by Neko_Oni
Stark wrote:For a moment I thought this thread would be about the retarded third-party named cards, or the whole 'poorly rendered titties' boxart.
I'm glad someone else brought this up. What the hell is with the "cover art" of graphics cards? Most of it looks worse than the graphics the card inside could render in real-time.

Posted: 2006-10-11 11:17pm
by Darth Quorthon
Uraniun235 wrote:Why do you need to know the clockspeed? I'm pretty sure the AMD system is actually one of the sanest; a 3200+ is pretty well definitely going to be superior to a 3000+.
That reminds me of a conversation I had with a guy at work. He was looking to buy a new computer, and his choices were between a single-core P4 at 3.6 GHz and a dual-core at 3.2 GHz. I told him to go for the dual-core, and he said, "but it's 400 MHz slower". It's fun explaining to a not-so-computer-savvy fellow that the clockspeed wars are over. I've often privately wondered whether, in the past, system builders were touting clockspeed over actual performance to take buyers for a ride. I remember a couple of years ago, when the gulf between Intel and AMD clockspeeds was nearly 1 GHz, Intel said that AMD couldn't "keep up", and a guy from AMD said, "I'm not interested in clockspeeds, I'm interested in performance".

How did we ever make decisions on what hardware to buy before all of these benchmarking sites appeared? :)