3D Graphics Cards: The Essentials
Aug 19, 2000 (Updated Jan 8, 2001)
There have been some great articles on upgrading to a new graphics card (AKA "3D card" or "video card"), but many of them ignore the fundamental question: what do you really have to gain by upgrading? Or, an even more fundamental question: how exactly do graphics cards "increase performance", and what can you expect by having a more powerful card?
First thing to do is toss out the old cliché that "a PC is only as fast as its weakest component". Not only is it a gross oversimplification of how computers work, it's such a misleading statement that it merits an epinion all by itself. It's definitely flat-out wrong when it comes to graphics cards, since even the slowest cards on the market these days are fast enough to keep your PC running just fine.
The next semi-cliché to toss out: that a newer, faster graphics card will make your PC "run faster" or "increase performance". It bothers me to hear so many people toss around these terms without clarifying faster at what? Better performance at which tasks? It's then not all that surprising to hear people claim that faster video cards "increase internet speed" or "improve program load times"; in other words, tasks that have nothing to do with the power of your video card. Or I'll often hear people complain that their games are running too slow because of their 3D card, when it's obviously something else causing the problem (e.g., a slow CPU or too little RAM).
And then there's perhaps the most important cliché to throw out the window (for your pocketbook, at least), which is also the single most misleading and idiotic cliché in PC hardware history (perpetuated ad infinitum by computer salespeople and hardware company PR departments, no doubt):
"You should buy the fastest, most powerful graphics card (or any other PC component) you can possibly afford. Since PC parts become obsolete so quickly, it will protect your investment in the long run."
Let's just say that this is the absolute antithesis of the truth, and I'll leave it at that. Besides, I've written another article dealing with this very subject ;-) (if you're interested, it's my newest one, called "Throw that Conventional Wisdom out the window!!").
So what exactly DOES a faster graphics card do to increase performance? It helps in two ways: 2D acceleration, and 3D acceleration.
Anytime you're using Windows, your graphics card is handling the task of drawing images to the screen. Let's say you're downloading a web page. Think of it like an assembly line, or a pipeline: first the data comes through the modem, then into system RAM, then from RAM into your graphics card, which sends the image to the monitor. Not only is the graphics card the final step in the "pipeline", it's also the quickest; you can wait ten seconds for a web page to download, but it only takes a few milliseconds for the graphics card to do its part and draw the image to the screen. A good way to see the graphics card at work is to scroll up and down a big web page: the smoothness of the scrolling is determined by how fast your graphics card's 2D acceleration is. Or you can minimize and maximize windows (minimize by clicking the "-" button in the upper-right corner of a window, then bring it back by clicking its button on the taskbar at the bottom of the screen). The faster your graphics card, the faster the images are drawn to the screen. Of course, like the previous example, it's such a fast process anyway that the speed difference is hardly noticeable.
Which brings me to my next point: any modern graphics card ("modern" meaning it was produced in the last year or so) is already near the limits of how fast 2D acceleration can possibly be. If you have a TNT, TNT2, Banshee, Voodoo 3, Matrox G200, or anything faster, the limits have pretty much already been reached, and no matter how fast graphics cards become they will never be noticeably faster at 2D tasks. Will the differences show up on benchmark tests? Sure. But it almost makes me laugh when I see web sites do benchmark tests, and the 1-2% difference from one card to the next is somehow supposed to be significant. In reality, it's like the difference between 10 milliseconds and 11 milliseconds.
The real difference from one card to the next, as far as 2D is concerned, is image quality. Again, the differences between *most* cards are very slight, and would only be noticeable in a side-by-side comparison. However, certain cards--most notably the Matrox G200 and G400 lines--are known for really exceptional image quality, especially among people who use their computers for graphic design and image editing. nVidia's Geforce line of cards is also known for excellent image quality.
Is memory size significant from one card to the next? Sure, but only to a point. Even 4 meg video cards are capable of 32-bit (true color) graphics at 1152x864 resolution, and to reach insane resolutions like 1600x1200 at 32-bit color you would only need an 8 meg card (and such resolutions are only practical with a high-end 20+ inch monitor anyway). Note that these figures refer to desktop (2D) resolution only; 3D games need considerably more memory to run at those same resolutions (more on that below). As far as 2D is concerned, however, anything more than 8 megs is overkill.
(Note: if you're curious how to calculate memory requirements for any desktop resolution/color-depth, just multiply the two numbers, which gives you the total pixels on screen, then multiply that by the color depth. For example, 640x480 resolution has 307,200 pixels on screen at all times. At 32-bit color depth, 307,200 times 32 is about 9.8 million bits of memory required. To convert that to megabytes, divide by 8 to get the number of bytes, then divide by 1024 twice: once for kilobytes, once for megabytes. In this example, the memory requirement for 640x480 at 32-bit comes out to about 1.2 megs.)
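If you'd rather let the computer do the arithmetic, here's a minimal sketch of that same calculation in Python (the function name and the sample resolutions are just my own illustration):

```python
# Minimal sketch of the 2D memory calculation described above.

def desktop_vram_bytes(width, height, bits_per_pixel):
    """Memory needed to hold one full screen at the given color depth."""
    total_pixels = width * height                # pixels on screen
    total_bits = total_pixels * bits_per_pixel   # bits of video memory
    return total_bits // 8                       # 8 bits per byte

for width, height in [(640, 480), (1152, 864), (1600, 1200)]:
    megs = desktop_vram_bytes(width, height, 32) / (1024 * 1024)
    print(f"{width}x{height} at 32-bit: {megs:.2f} megs")
```

Run it and you'll see that 1152x864 lands just under 4 megs and 1600x1200 just over 7, which is exactly why 4 and 8 meg cards cover those desktop resolutions.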
3D is also a commonly misunderstood aspect of graphics cards, since many people think the 3D card is the only factor responsible for 3D graphics. In reality, your CPU is in many ways even more important. "But weren't 3D graphics cards supposed to take the strain off the CPU?", you may ask. Not exactly. In the days of 2D-only acceleration, 100% of the workload for 3D games like Doom or Duke Nukem fell on the CPU. These days, with 3D acceleration, the split is more like 50/50 or 60/40, depending on how powerful your CPU is.
As anybody who has played the latest 3D games like Unreal Tournament or Deus Ex has seen, frame-rates can swing wildly during gameplay, from blazingly fast one second to dead slow the next. If you're just walking around simple corridors, you'll get liquid-smooth frame rates of 60+ fps. But as soon as a bunch of enemies appears on screen, those smooth frame rates can come to a screeching halt, dipping down to 10-20 fps in some cases. The difference: in the first case your CPU isn't being strained at all, while in the second it's being pushed to the max calculating geometry and AI (artificial intelligence), among other things.
A simple but very accurate rule-of-thumb: your 3D card determines the maximum frame-rates you get in 3D games, but your CPU is responsible for the minimum frame-rates. What that means is, if you're unsatisfied with your 3D card's performance because it's slow at all times during gameplay, then your 3D card is most likely the limiting factor. But if the problem is just periodic slowdowns in only certain situations, you probably need to upgrade the CPU before the graphics card.
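To put that rule of thumb in concrete terms, here's a toy Python sketch that applies the same logic; the function and the threshold values are my own rough illustration of the heuristic, not a real diagnostic tool:

```python
# Toy encoding of the rule of thumb: sustained low frame rates point at
# the 3D card, while big periodic dips point at the CPU. Thresholds are
# illustrative guesses, not measured figures.

def likely_bottleneck(max_fps, min_fps):
    if max_fps < 30:
        return "3D card (slow at all times, even in empty corridors)"
    if min_fps < max_fps / 2:
        return "CPU (big dips when geometry/AI load spikes)"
    return "neither -- performance looks balanced"

print(likely_bottleneck(max_fps=25, min_fps=20))  # card-limited
print(likely_bottleneck(max_fps=60, min_fps=15))  # CPU-limited
```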
The term that's commonly used to describe 3D card speed is "fill-rate": the maximum number of pixels per second a card can draw, which in turn caps the frame rates it can deliver (you may have also heard the term "texels per second", which is a pretty meaningless statistic for current graphics cards since it's a completely theoretical number). You will also often hear the term "fill-rate limited", which refers to playing a game at a high enough resolution that the card's fill-rate ceiling has been reached. The implication of that is pretty ironic and counter-intuitive: with a slower CPU, you can run games at a HIGHER resolution essentially for free. For example, if your CPU can only sustain 10-25 fps anyway, you can raise the resolution to the point where the card's fill-rate ceiling drops to about 30 fps without noticing any drop-off in performance, since your CPU will still be the primary bottleneck.
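Here's a rough Python sketch of that ceiling. The 800 million pixels/second fill-rate is in the neighborhood of what a Geforce 2 GTS class card advertises, and the overdraw factor (how many times the average pixel gets redrawn each frame) is purely my own assumption; treat both numbers as illustrations:

```python
# Sketch of why high resolutions become "fill-rate limited": the card
# can only draw so many pixels per second, so the frame-rate ceiling
# falls as the pixel count per frame rises.

FILL_RATE = 800_000_000  # pixels/second -- example Geforce 2 GTS class figure

def fillrate_limited_fps(width, height, overdraw=2.5):
    # Assume each pixel gets redrawn ~2.5 times per frame on average
    pixels_per_frame = width * height * overdraw
    return FILL_RATE / pixels_per_frame

for width, height in [(640, 480), (1024, 768), (1600, 1200)]:
    ceiling = fillrate_limited_fps(width, height)
    print(f"{width}x{height}: ~{ceiling:.0f} fps ceiling")
```

Notice how absurdly high the ceiling is at 640x480; that's why at low resolutions it's the CPU, not the card, that sets your frame rate.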
As far as RAM is concerned, 3D graphics differ from 2D primarily because 3D textures (the bitmaps applied to 3D objects to make them look realistic) as well as the frame buffer (the actual screen image, where frames are stored before being sent to the monitor) must both fit in video RAM. In 2D, the entire video RAM is basically one big "frame buffer", since textures aren't used. For most of the latest 3D games, at least 16 megs of video RAM is needed for good performance, and 32 megs if you're playing at higher resolutions (1024x768+) and 32-bit color.
Also, if you happened to read the part about calculating memory requirements for 2D earlier, you can use that same method to calculate the 3D frame buffer, but you must double the final amount, because most 3D games use a method called "double buffering": two complete frames are kept in RAM at all times and drawn to the screen alternately, resulting in a smoother image. Honestly though, I wouldn't go crazy calculating all kinds of different resolutions, color depths, and such. As long as you've got 32 megs you're fine, and 16 megs is OK for 16-bit color (especially for cards such as Voodoo 3's, which are incapable of 32-bit color).
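For the curious, here's a quick sketch of that memory budget, assuming a double-buffered 1024x768 frame buffer at 32-bit color (real games also reserve video RAM for other data, so treat the "left for textures" figure as a generous upper bound):

```python
# Rough 3D memory budget: two full frames (double buffering) come out
# of video RAM before textures get whatever is left.

def frame_buffer_megs(width, height, bits_per_pixel, buffers=2):
    bytes_needed = width * height * (bits_per_pixel // 8) * buffers
    return bytes_needed / (1024 * 1024)

for card_megs in (16, 32):
    fb = frame_buffer_megs(1024, 768, 32)  # double-buffered frame buffer
    print(f"{card_megs} meg card: {fb:.1f} megs for frames, "
          f"{card_megs - fb:.1f} megs left for textures")
```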
Upgrading Your Graphics Card
Ever since the first 3dfx Voodoo cards were released, graphics cards have progressed through several "generations". A company releases a new graphics card that's faster than anything else on the market, then other companies release similarly-performing cards to compete. Those cards then define the "current generation" of graphics cards. About six months later, companies typically release a revision based on the same design, but faster and possibly with a few minor extra features. Interestingly, with a few exceptions, the "generations" metaphor has been pretty accurate, especially for nVidia, which has followed this plan to a T. To this day they remain the only graphics card company in history that hasn't missed a single development cycle since the Riva 128: about every 12 months they release a new generation, and six months later an "Ultra" version (which can be considered a "half-generation").
Generation 1: Voodoo (3dfx), Riva 128 (nVidia), Rage (ATI)
Generation 1.5: Banshee (3dfx), Riva 128zx (nVidia)
Generation 2: Voodoo 2 (3dfx), TNT (nVidia), Rage Pro (ATI), G200 (Matrox)
Generation 3: Voodoo 3 (3dfx), TNT2 (nVidia), Rage 128 (ATI), G400 (Matrox)
Generation 3.5: Voodoo 3 3500 (3dfx), TNT2 Ultra (nVidia), Rage Fury (ATI)
Generation 4: Geforce SDR (nVidia), Rage Fury MAXX (ATI)
Generation 4.5: Geforce DDR (nVidia)
Generation 5: Voodoo 5 (3dfx), Geforce 2 GTS (nVidia), Radeon (ATI)
Generation 5.5: Geforce 2 Ultra (nVidia)
Generally speaking, upgrading your graphics card is worthwhile if you're going up by at least 1.5 generations; that's how much of an increase in performance you need for the difference to be worthwhile, and even upgrading every 1.5 generations pretty much keeps you at the cutting edge of technology. For the casual gamer, even a Banshee or TNT is good enough for most games.
Of course, your choice of 3D card should also be determined by the power of the CPU. Getting the latest Geforce 2 will hardly be noticeable if your CPU is only a Pentium II 300. With such a weak CPU paired with such an insanely powerful graphics card, you'll always be CPU-limited, even at the highest resolutions. Here's a general guide to what 3D cards go well with which CPU's:
1st Generation: Pentium 166 and lower
2nd Generation: Pentium 200+, Pentium II 350 and lower
3rd Generation: Pentium II 400+, K6-2 400+
4th Generation: Athlon 500+, Pentium III 500+
5th Generation: Athlon 600+, Pentium III 600+
You may have noticed that this article doesn't say a whole lot about specific graphics card models, or even companies for that matter. There are a few reasons for that (besides length!). For the casual computer user, ANY modern video card, no matter how cheap, will be almost as good as any other; as long as it has at least 8 megs of RAM and appears in the chart above, you're set. For the more advanced user, there are only two main options: cards by 3dfx or nVidia, the performance leaders in the industry and the cards most 3D games are optimized for. For information about specific models, please see my "3dfx vs. nVidia" article.
UPDATE 11/16: In the time since this article was written, ATI has essentially displaced 3dfx at the "top of the heap" with their DDR Radeon cards, while nVidia has further established their position as the leader of the graphics card industry (in both product performance and market share). Due to various developments over just the last 4-5 months, 3dfx has clearly fallen behind in performance, as indicated by the recent cancellation of their Voodoo 5 6000 card (it has been licensed to Quantum for use as a workstation card only), and 3dfx may soon cease to exist as we know it. Watch for my upcoming 3dfx vs. nVidia article for all the gory details.
UPDATE 1/8/01: 3dfx will cease to exist within the next few months, as their assets are being purchased by nVidia. All customer and technical support (and possibly driver support as well) will end on February 15, so it really doesn't make sense to buy any 3dfx products at this time.
Also, one last bit of advice, which I hinted at earlier: remember that graphics cards are just about tied with CPU's for being the one component that depreciates fastest in your entire PC. First generation Geforce cards, for example, cost almost $300 when they were released less than a year ago. Today they can be found for only $120, and still provide stellar performance with just about any game. Geforce 2 MX's (the "budget" version of the Geforce 2) can even be bought in that same price range. I see no reason to think that other state-of-the-art cards (Geforce 2 GTS DDR, Voodoo 5, Radeon) won't also cost about 1/3 the price a year from now.
Keep that in mind when considering the latest and greatest graphics card technologies; staying just behind the state-of-the-art "curve" can save hundreds of dollars in the long run, since the "old" models tend to see big mark-downs once the newer generation is released (plus, even cards 2 or 3 generations behind the current one are still just fine for most games).
Good luck with your graphics card purchasing decisions, and please feel free to ask if you have any questions. I'll be more than happy to answer them.
Thanks for reading,