GameArchitect.net: Musings On Game Engine Structure and Design

 

SIGGRAPH 2004

By Kyle Wilson
Sunday, September 12, 2004

SIGGRAPH this year seemed to cover much of the same territory as last year.  High dynamic range (HDR) images, precomputed radiance transfer (PRT) and animation synthesis all got lots of play again this year, but no longer seemed quite as new and impressive.  The hot new topic this year wasn't related to graphics theory at all, but was the use of the GPU for general-purpose computing.  GPGPU, they call it.

I avoided most HDR and all of the PRT stuff.  I caught a couple of the animation synthesis talks and much of the GPGPU course, which I'll get to later.

Charles Poynton spoke with great enthusiasm about color science and human perception, making the subject much more interesting than it would otherwise have been.  Interesting point:  display gamma can be considered a form of image compression, since it resembles the inverse of the human perceptual curve.  The eye perceives relative differences in brightness, and can detect about a 1% difference in physical intensity whether that intensity is dim or bright.  That response is roughly a power curve, approximately the inverse of display gamma, so gamma-encoded code values are spread nearly uniformly in perceptual terms.  This conveniently allows the full range of perceptible intensities to be stored in eight bits.
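
Out of curiosity, here's a back-of-the-envelope Python sketch (my own, not anything from the talk) of that argument.  It compares the relative intensity step between adjacent 8-bit code values under plain linear coding and under gamma-2.2 coding, measured at the same displayed intensity:

    # Assumes a display gamma of 2.2 and a ~1% just-noticeable difference in
    # intensity; both numbers are the usual rules of thumb, not exact values.
    GAMMA = 2.2

    def linear_step(intensity):
        """Relative step to the next code if intensity is stored linearly in 8 bits."""
        code = 255.0 * intensity
        return ((code + 1.0) / 255.0 - intensity) / intensity

    def gamma_step(intensity):
        """Relative step to the next code if intensity is gamma-encoded in 8 bits."""
        code = 255.0 * intensity ** (1.0 / GAMMA)
        next_intensity = ((code + 1.0) / 255.0) ** GAMMA
        return (next_intensity - intensity) / intensity

    # Sample a 100:1 range of displayed intensities.
    for intensity in (0.01, 0.1, 0.5, 0.99):
        print("I = %.2f:  linear step %5.1f%%   gamma step %4.1f%%"
              % (intensity, 100 * linear_step(intensity), 100 * gamma_step(intensity)))

    # Linear coding bands badly in the darks (~40% steps at 1% intensity) while
    # wasting precision in the brights; gamma coding keeps every step within a
    # few multiples of the ~1% visibility threshold.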

I made it to just enough of the course on hair rendering to catch the guys from Rhythm & Hues and Sony Imageworks talking about their hair creation tools.  Both toolsets were pretty similar, though the Imageworks guys worked entirely in Maya plug-ins and the Rhythm & Hues guys worked entirely in their own proprietary tools.  Both approaches combined artist control over "guide" hairs with automatic generation of hairs in between.  Generated hair interpolated some values from nearby guide hairs (length, orientation) and read some values out of texture maps (hair density, thickness, etc.).  Higher-level groupings and modifiers allowed adjustments to all the hair of a given type at once, so an artist could make a whole undercoat thicker just by adjusting a slider.  Right now, we get more bang for the buck in real-time graphics by rendering hair with shells and fins than by sculpting lines or ribbons this way, but in another five years or so, we'll probably need tools like these.
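
The interpolation itself is simple enough to sketch.  Here's a toy Python version (my own invented data and weighting, nothing from either studio's pipeline) that blends a generated hair's length and orientation from nearby guide hairs and reads its density and thickness out of painted maps:

    import random

    # Each guide hair: root position on a (u, v) skin patch, a length, and a
    # simplified 2D "orientation" vector.  All of this is made-up test data.
    guides = [((0.0, 0.0), 1.0, (1.0, 0.0)),
              ((1.0, 0.0), 2.0, (0.0, 1.0)),
              ((0.5, 1.0), 1.5, (0.7, 0.7))]

    density_map   = [[0.2, 0.8], [0.5, 1.0]]      # painted hairs-per-area map
    thickness_map = [[0.01, 0.02], [0.02, 0.03]]  # painted hair width map

    def sample_map(tex, u, v):
        """Nearest-neighbor lookup into a tiny row-major 'texture'."""
        rows, cols = len(tex), len(tex[0])
        return tex[min(int(v * rows), rows - 1)][min(int(u * cols), cols - 1)]

    def blend_from_guides(u, v):
        """Inverse-distance-weighted blend of guide hair length and orientation.
        A stand-in for whatever interpolation scheme the real tools use."""
        weights = [1.0 / (((u - gu) ** 2 + (v - gv) ** 2) + 1e-6)
                   for (gu, gv), _, _ in guides]
        total = sum(weights)
        length = sum(w * glen for w, (_, glen, _) in zip(weights, guides)) / total
        direction = tuple(sum(w * gdir[i] for w, (_, _, gdir) in zip(weights, guides)) / total
                          for i in range(2))
        return length, direction

    # Scatter candidate follicles; the density map decides which ones get a hair,
    # the guides decide its shape, and the thickness map decides its width.
    for _ in range(8):
        u, v = random.random(), random.random()
        if random.random() < sample_map(density_map, u, v):
            length, direction = blend_from_guides(u, v)
            width = sample_map(thickness_map, u, v)
            print("hair at (%.2f, %.2f): length %.2f, dir (%.2f, %.2f), width %.3f"
                  % (u, v, length, direction[0], direction[1], width))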

I wasn't very impressed by Bruce Sterling's keynote.  I've read some of Sterling's books and enjoyed them.  The guy has a way with words.  He strings together cool phrases like "varnish on barbarism" and "terrorism-entertainment complex."  But the phrases are as empty of actual meaning as they are rich in emotional power.  Underneath it all, Sterling's thesis seems to be that we're heading toward a future rich in open-sourced 3D smart objects, where everything thinks, everything knows how to maintain itself, dispose of itself, improve itself and manufacture more of itself.  It's a Wired magazine geek nirvana of a future where everyone lives only to tweak the open-source code of his own belt buckle, making it smarter, more self-aware, more enthusiastic about holding up pants.  I don't agree with Sterling's implication that every object would be better if only we could somehow cram a CPU into it, and I definitely don't think, as he implies, that this would somehow make our world ecologically sustainable.  I mean, great, my belt buckle can direct me to the nearest landfill... until it breaks, and I want to throw it away.  This is going to save the planet?

I caught a little bit of the course on Real-Time Shadowing Techniques.  The course covered recent innovations in volume shadows and shadow maps.  Some interesting variations on Perspective Shadow Maps were introduced, most strikingly a rendering approach that uses a negative near plane to render shadow maps of objects behind the camera.  The original publication on this was apparently from Simon Kozlov in GPU Gems, which clearly I'm going to have to read.  Soft shadow algorithms for volume shadows and shadow maps were also described: Penumbra Wedges and Smoothies, respectively.  Both algorithms looked pretty good, despite having little or no physical basis.  Both seemed pretty painful to implement.

The Real-Time Volume Graphics course had the most intriguing presentation I saw at SIGGRAPH this year, by Daniel Weiskopf on GPU-Based Ray Casting.  Weiskopf introduced an algorithm for doing ray-casting through a density volume texture in multiple passes by rendering full-screen quads, passing in ray parameters for each pixel, and sampling the density texture at regular intervals.  He then introduced optimizations to allow sampling of the density texture at irregular intervals, skipping of empty space and early termination of saturated pixels.  Weiskopf then went on to a mind-blowing extension of his ray-casting algorithm in which he rendered tetrahedralized point-sampled data sets.  Since he's working entirely in a pixel shader, all the "mesh" data of the tetrahedra need to be encoded into textures.  He encodes every tetrahedral cell into several textures representing vertex positions, face normals, densities and, most importantly, adjacency information.  Then he walks the sample set, using the adjacency texture to advance each ray through the cells.  It's really ingenious.
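
A minimal CPU-side Python sketch of the regular-stepping core (my own toy version, nowhere near Weiskopf's shader implementation, and leaving out the adaptive sampling, empty-space skipping and tetrahedral-mesh walk) looks like this:

    import numpy as np

    def ray_march(density, origin, direction, step=0.5, max_steps=256):
        """March one ray through a 3D density volume, compositing front to back
        and terminating early once the pixel is effectively opaque."""
        color, alpha = 0.0, 0.0
        pos = np.array(origin, dtype=float)
        d = np.array(direction, dtype=float)
        d /= np.linalg.norm(d)
        for _ in range(max_steps):
            i, j, k = (int(c) for c in pos)
            if not (0 <= i < density.shape[0] and
                    0 <= j < density.shape[1] and
                    0 <= k < density.shape[2]):
                break                                  # ray has left the volume
            sample = density[i, j, k]                  # a real version would interpolate
            color += (1.0 - alpha) * sample * sample   # treat density as emission...
            alpha += (1.0 - alpha) * sample            # ...and as opacity
            if alpha > 0.95:                           # early termination of saturated pixels
                break
            pos += d * step
        return color, alpha

    # Test volume: a soft density blob in the middle of a 32^3 grid.
    coords = np.indices((32, 32, 32)).astype(float)
    dist = np.sqrt(((coords - 16.0) ** 2).sum(axis=0))
    volume = np.clip(1.0 - dist / 10.0, 0.0, 0.3)

    print(ray_march(volume, origin=(0.0, 16.0, 16.0), direction=(1.0, 0.0, 0.0)))  # hits the blob
    print(ray_march(volume, origin=(0.0, 2.0, 2.0), direction=(1.0, 0.0, 0.0)))    # misses it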

This was inspiring enough that I decided to attend the GPGPU course the next day.  This was in many ways a beginner's introduction to the way of thinking I'd been plunged into the day before.  Doing operations so far removed from traditional graphics on the GPU breaks your mind, and gets you thinking about ways to solve your problems that you've never considered before.  The course introduced a variety of problems that could be parallelized and solved on the GPU and a variety of improbable ways of organizing data to make the process efficient.  Read the website.  It explains things better than I could.

The same day I encountered another neat idea, Li-Yi Wei's work on Tile-Based Texture Mapping.  Wei hides the repetition artifacts of tiled textures.  For a fixed number of distinct tile borders, Wei defines an algorithm that encodes all possible tile permutations using those borders into a single seamless (and therefore mip-mappable) texture.  At run-time, when the new "meta-texture" is rendered, a pixel shader computes which row and column of the tiling pattern the current pixel falls in.  It then computes the borders for that tile using a hash function, and looks up the tile with the correct borders in the pre-computed tile texture.  Thus, infinitely large surfaces without visible tiling can be rendered, though mip-mapping artifacts do become apparent in the distance.
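
A toy Python version of the lookup (my own hash and packing scheme, not the one from Wei's paper) makes the idea concrete.  The important property is that the two tiles sharing an edge hash that edge from the same coordinates, so their borders always agree:

    N_COLORS = 2        # number of distinct border "colors" per axis
    TILE_SIZE = 64      # texels per tile in the packed texture

    def edge_hash(x, y):
        """Cheap deterministic hash of integer edge coordinates to a border color.
        (A stand-in; the real shader would use something like a permutation table.)"""
        h = (x * 73856093) ^ (y * 19349663)
        return (h & 0x7fffffff) % N_COLORS

    def tile_borders(cell_x, cell_y):
        """Border colors of the tile covering grid cell (cell_x, cell_y)."""
        west  = edge_hash(cell_x,     cell_y)      # vertical edge at x
        east  = edge_hash(cell_x + 1, cell_y)      # vertical edge at x + 1, shared with the neighbor
        south = edge_hash(cell_y,     cell_x + 7)  # horizontal edges hashed with an offset
        north = edge_hash(cell_y + 1, cell_x + 7)  # so they don't mirror the vertical ones
        return west, east, south, north

    def packed_tile_origin(west, east, south, north):
        """Texel origin of the pre-built tile with these borders in the packed
        meta-texture: one column per (west, east) pair, one row per (south, north) pair."""
        col = west * N_COLORS + east
        row = south * N_COLORS + north
        return col * TILE_SIZE, row * TILE_SIZE

    # Any cell of an arbitrarily large surface maps into the one small packed texture,
    # and neighbors always pick tiles with matching shared borders.
    for cell in [(0, 0), (1, 0), (117, 42)]:
        borders = tile_borders(*cell)
        print(cell, "->", borders, "at texel", packed_tile_origin(*borders))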

I got to listen to EA Canada's Henry LaBounta give exactly the same talk twice about the art design of SSX3, first as part of the Art on the Small Screen tech sketches, then an hour later as part of the Next-Generation Game Visuals special session.  SSX3 is a great-looking game, and it was interesting hearing LaBounta talk about how they'd applied techniques from film to give different levels a distinctive look.  But it was a lot more interesting the first time.

Also interesting was Habib Zargarpour's talk about the art design of Need for Speed Underground.  He said that the art style was selected to reinforce the key element of the game:  speed.  Hence the wet, reflective look (reflections flash by faster), motion blur and camera shake.  Viktor Antonov talked about the design of Half-Life 2, making it sound, though he didn't come out and say it, like the Anti-Doom.  The look is open, urban, washed out.  The outdoor environments he showed were an interesting mix of low poly and high poly models.  City blocks are filled with big square apartment buildings with little detail, but little antennae cover their roofs and fine telephone lines run between them.

Finally, Nishii Ikuo discussed the development of that giant Onimusha 3 opening movie.  180 developers.  9 months.  For a pre-rendered cinematic.  'Nuff said.

On a related note, representatives of Sony Computer Entertainment, Vicarious Visions and the ubiquitous Electronic Arts participated in a panel discussion entitled "Games Development:  How Will You Feed the Next Generation of Hardware."  Everyone knows which way the industry is going.  As with the transition to the PS2 and Xbox, team sizes are about to make a quantum leap.  Game developers are called on to make content at ever-higher levels of fidelity, requiring more content developers and larger budgets even for games of more modest scope.  But the shelf price of games isn't going up and neither is the number of units sold.  The industry is going to be squeezed, and those who can't make solid hits on budget are going to fail.

At least that's my take.  The panel didn't really present any solutions, aside from vague mumblings about tracking costs and emphasizing good process management.  (The implication, presumably, is that the game industry has never needed good management before.)  There were hints that the game industry should switch to a movie production model of development.  Everyone always says that, but it's never going to happen unless all the game companies in the U.S. relocate to spots within a couple of miles of each other, which doesn't seem very likely.  Mostly, though, everyone just restated the problem.  Change is coming, and no one's quite sure what to do about it.

Any opinions expressed herein are in no way representative of those of my employers.

