Technical Description

Rendering Realities:

A Chronicle of Graphics Card Advancements

Al Narcida

 

What does a graphics card do?

  • A graphics card (also known as a video card or GPU) is a piece of computer hardware that generates the images, videos and animations shown on a screen, working with the central processing unit (CPU) to process and display visual information. Graphics cards are often among the most complex and powerful components of a computer system, making visuals in games, videos and graphics-heavy programs look better.

 

Early Beginnings

  • The first 3D graphics workstation was developed by Jim Clark in the early 1980s, and the first 3D graphics system was the IRIS 1400 in 1984. Second-generation systems raised the level of real-time realism by rendering polygons that were Gouraud-shaded, Phong-lit, and z-buffered (a minimal depth-test sketch follows below), and improved raw transformation and scan-conversion performance. Third-generation systems, such as the RealityEngine, added texture mapping and full-scene antialiasing. (Baum, 1998)
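
To make the “z-buffered” step concrete: a z-buffer resolves visibility per pixel by keeping only the fragment nearest the camera. Below is a minimal CUDA sketch of that depth test; the Fragment layout, the names, and the brute-force scan of every fragment at every pixel are illustrative assumptions, not how these workstations actually worked.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// A candidate pixel produced by rasterizing some triangle (illustrative layout).
struct Fragment { int x, y; float depth; unsigned int rgb; };

// One thread per screen pixel: keep the nearest fragment that covers it.
// Real hardware tests fragments as they stream out of rasterization; the
// per-pixel scan over all fragments here is purely for clarity.
__global__ void zbufferResolve(const Fragment* frags, int nFrags,
                               float* zbuf, unsigned int* color,
                               int width, int height)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= height) return;

    float nearest = 1e30f;      // start "infinitely far away"
    unsigned int c = 0x000000;  // background color
    for (int i = 0; i < nFrags; ++i)
        if (frags[i].x == px && frags[i].y == py && frags[i].depth < nearest) {
            nearest = frags[i].depth;   // the closer fragment wins the pixel
            c = frags[i].rgb;
        }
    zbuf[py * width + px]  = nearest;
    color[py * width + px] = c;
}

int main() {
    const int W = 4, H = 4;
    // Two fragments fight over pixel (1,1); the red one is closer and wins.
    Fragment h[2] = { {1, 1, 5.0f, 0x0000FF}, {1, 1, 2.0f, 0xFF0000} };
    Fragment* dF; float* dZ; unsigned int* dC;
    cudaMalloc(&dF, sizeof h);
    cudaMalloc(&dZ, W * H * sizeof(float));
    cudaMalloc(&dC, W * H * sizeof(unsigned int));
    cudaMemcpy(dF, h, sizeof h, cudaMemcpyHostToDevice);
    zbufferResolve<<<dim3(1, 1), dim3(W, H)>>>(dF, 2, dZ, dC, W, H);
    unsigned int out[W * H];
    cudaMemcpy(out, dC, sizeof out, cudaMemcpyDeviceToHost);
    printf("pixel (1,1) = 0x%06X\n", out[1 * W + 1]);  // expect 0xFF0000
    cudaFree(dF); cudaFree(dZ); cudaFree(dC);
    return 0;
}
```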

 

The First of its Kind

  • The term “GPU” has been in use since at least the 1980s. Nvidia helped popularize it in 1999 by promoting the GeForce 256 add-in board (AIB) (Fig. 1) as the first fully integrated graphics processing unit in history: a single-chip processor with integrated transform, lighting, triangle setup/clipping, and rendering engines.

Fig. 1 – NVIDIA GeForce 256 (credit – https://www.vgamuseum.info/)

 

The Competition

  • In 2000, ATI Technologies (now known as AMD) introduced the R100 series of graphics cards. This series, showcasing models such as the Radeon DDR, Radeon 64MB DDR, and Radeon 32MB DDR (Fig. 2), offered competitive performance and diverse memory configurations, challenging NVIDIA’s dominance in the graphics market. The R100 series not only showcased ATI’s technological prowess but also ignited a fierce rivalry between the two companies, driving innovation in visual computing for years to come.

 

Fig. 2 – Radeon 32MB DDR GPU (credit – https://encyclopedia.pub/entry/33722)

 

Evolution of DirectX and Pixel Shaders

  • In the early 2000s, DirectX (an application programming interface (API) that acts as a middleman between software and your PC’s hardware) continued to evolve, with new versions adding expanded capabilities for graphics rendering.
  • NVIDIA released the GeForce 3 series in 2001, which introduced programmable pixel shaders, allowing developers to create more complex visual effects.
  • A pixel shader is a graphics function that calculates effects on a per-pixel basis (NVIDIA, n.d.). Pixel shaders operate at the pixel level of an image, allowing developers to control the color, brightness, contrast, and other visual attributes of each pixel. Essentially, pixel shaders determine how pixels on a screen are rendered and displayed based on the lighting and material properties of 3D objects (see the sketch after this list).
  • ATI Technologies (now AMD) introduced competitive offerings, such as the Radeon 8500 series, which also supported programmable pixel shaders and advanced 3D rendering features.
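
To see what “per-pixel” means in practice, here is a rough CUDA analogue of the work a simple pixel shader does: one thread per pixel evaluates Lambertian (N·L) diffuse lighting from an interpolated surface normal. This is a hedged illustration of the concept only; GeForce 3-era shaders were written in assembly-like shading languages rather than CUDA, and every name below is invented.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One thread per pixel, mimicking what a pixel shader receives after
// rasterization: an interpolated surface normal for that pixel. Output is a
// grayscale brightness; real shaders would compute full RGB from materials.
__global__ void diffuseShade(const float3* normals, unsigned char* gray,
                             int width, int height, float3 lightDir)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    float3 n = normals[y * width + x];
    // Lambert's cosine law: brightness = max(N . L, 0)
    float d = n.x * lightDir.x + n.y * lightDir.y + n.z * lightDir.z;
    d = d > 0.0f ? d : 0.0f;
    gray[y * width + x] = (unsigned char)(d * 255.0f);
}

int main() {
    const int W = 2, H = 2, N = W * H;
    float3 hN[N];
    for (int i = 0; i < N; ++i) hN[i] = make_float3(0.f, 0.f, 1.f);  // all facing the light
    float3* dN; unsigned char* dG;
    cudaMalloc(&dN, sizeof hN);
    cudaMalloc(&dG, N);
    cudaMemcpy(dN, hN, sizeof hN, cudaMemcpyHostToDevice);
    diffuseShade<<<dim3(1, 1), dim3(W, H)>>>(dN, dG, W, H, make_float3(0.f, 0.f, 1.f));
    unsigned char out[N];
    cudaMemcpy(out, dG, N, cudaMemcpyDeviceToHost);
    printf("pixel 0 brightness = %d\n", out[0]);  // expect 255
    cudaFree(dN); cudaFree(dG);
    return 0;
}
```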

 

DirectX 10 and Graphics Compute

  • DirectX 10, released alongside Windows Vista in 2006, introduced new graphics features, including improved shader model support, geometry shaders, and more advanced texture compression techniques.
  • NVIDIA launched the GeForce 8 series, which supported DirectX 10 and introduced CUDA (Compute Unified Device Architecture), enabling developers to use the GPU for general-purpose computing tasks beyond graphics processing (a minimal CUDA kernel is sketched after this list).
  • ATI’s Radeon HD 2000 series also supported DirectX 10 and introduced similar compute capabilities with its Stream Processing Units (SPUs).
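
A minimal sketch of that general-purpose model is SAXPY (y = a·x + y), the standard introductory CUDA example: each array element gets its own GPU thread, the same one-thread-per-item pattern graphics uses for pixels. Compile with nvcc; the unified-memory calls are only there to keep the demo short.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// GPGPU in miniature: launch one thread per array element and let the GPU's
// parallelism do the rest. No graphics anywhere in this program.
__global__ void saxpy(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}

int main() {
    const int n = 1 << 20;  // one million elements
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));  // unified memory: host and device share it
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);  // 3*1 + 2 = 5 everywhere
    cudaDeviceSynchronize();

    printf("y[0] = %.1f (expected 5.0)\n", y[0]);
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```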

 

Present and Future: Ray Tracing and AI

  • Recent years have seen a focus on ray tracing technology, which simulates the behavior of light in a scene to produce more realistic graphics (a toy per-pixel ray test is sketched after Fig. 3).
  • Jensen Huang, NVIDIA’s CEO (Fig. 3), introduced the RTX series, which added dedicated hardware for real-time ray tracing acceleration, enabling smoother performance in games that support ray tracing effects.
  • Both NVIDIA and AMD have integrated machine learning capabilities into their GPUs, allowing for AI-based features like DLSS (Deep Learning Super Sampling), which uses AI algorithms to upscale lower-resolution images for better visual quality.

Fig. 3 – Nvidia founder, president and CEO Jensen Huang displays his tattoo. (credit – Robert Galbraith/Reuters)
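
At its core, ray tracing fires a ray from the camera through each pixel and asks what it hits; per-ray intersection tests like the one below are the kind of work RTX-class hardware accelerates (along with traversing the data structures that organize a scene). A toy CUDA sketch under simplifying assumptions: one sphere, no bounces or shading, and an invented pinhole camera.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// One primary ray per pixel, tested against a single sphere at the origin.
// Real ray tracers add bounces, materials, and acceleration structures (BVHs);
// the camera model and scene here are invented for illustration.
__global__ void traceSphere(unsigned char* img, int width, int height, float radius)
{
    int px = blockIdx.x * blockDim.x + threadIdx.x;
    int py = blockIdx.y * blockDim.y + threadIdx.y;
    if (px >= width || py >= height) return;

    // Camera at (0,0,-3) looking toward +z; map the pixel onto a view plane.
    float u = (px + 0.5f) / width  * 2.0f - 1.0f;
    float v = (py + 0.5f) / height * 2.0f - 1.0f;
    float ox = 0.f, oy = 0.f, oz = -3.f;  // ray origin
    float dx = u,   dy = v,   dz = 1.f;   // ray direction (unnormalized)

    // A hit means |origin + t*dir|^2 = radius^2 has a real root: solve the
    // quadratic in t and check the discriminant.
    float a = dx * dx + dy * dy + dz * dz;
    float b = 2.f * (ox * dx + oy * dy + oz * dz);
    float c = ox * ox + oy * oy + oz * oz - radius * radius;
    float disc = b * b - 4.f * a * c;

    img[py * width + px] = (disc >= 0.f) ? 255 : 0;  // white where the ray hits
}

int main() {
    const int W = 16, H = 16;
    unsigned char* img;
    cudaMallocManaged(&img, W * H);
    traceSphere<<<dim3(1, 1), dim3(W, H)>>>(img, W, H, 1.0f);
    cudaDeviceSynchronize();
    for (int y = 0; y < H; ++y) {  // crude ASCII dump: the sphere shows up as a disk
        for (int x = 0; x < W; ++x) putchar(img[y * W + x] ? '#' : '.');
        putchar('\n');
    }
    cudaFree(img);
    return 0;
}
```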

 

Summary

  • Throughout this history, graphics cards have evolved from simple 2D accelerators to highly parallel processing units capable of powering immersive virtual worlds and enabling cutting-edge visual effects in games and other applications. The competition between industry leaders like NVIDIA and AMD has driven innovation, pushing the boundaries of what’s possible in computer graphics.

 

References

 

  • Baum, D. (1998). 3D graphics hardware: where we have been, where we are now, and where we are going. SIGGRAPH Comput. Graph., 32(1), 65–66.