Add Another Core for Faster Graphics

August 30, 2006 | General | By: Mark VandeWettering

Lately, Intel has been trying to promote the idea that with the advent of multiple processor cores on their machines, raytracing computer graphics scenes in real time will not only become feasible, but perhaps become the preferred way of rendering graphics for applications such as games.

I must admit, I’ve found some of their work to be rather interesting. Ingo Wald showed me that most of the performance issues that I sorted through almost 20 years ago when I wrote my first raytracer really are kind of questionable when applied to today’s heavily pipelined SIMD machines with their comparatively slow memory. Indeed, the performance that I get from my toy raytracer that I wrote about five years ago is probably an order of magnitude or more slower than the best codes he’s managed to write.

But still, I think they are quite a ways off if you talk about them replacing traditional GPU-style rendering. The demonstrations that we see of real-time raytracing are impressive technically, but typically lack many of the features that we’ve come to expect in game graphics. Images are typically poorly antialiased. Shadows are often sharp, being computed from single ray samples. There doesn’t seem to be a good solution for animating objects. Texture filtering is often poor. And doubling the resolution in X and Y typically increases the work (and decreases the frame rate) by a factor of four. In short, there just seems to be a lot to figure out still.
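
To make the shadow point concrete: if each shading point fires a single ray at the light, the answer is a binary lit-or-shadowed, and shadow edges come out razor sharp. Softening them means firing many jittered rays at an area light, which multiplies the per-pixel work accordingly. Here’s a minimal sketch of the difference using a hypothetical scene of my own devising (one spherical occluder under a 2x2 area light); this isn’t code from Intel’s demos or anyone else’s, just an illustration:

    #include <cmath>
    #include <cstdio>
    #include <random>

    struct Vec3 {
        double x, y, z;
        Vec3 operator-(const Vec3 &o) const { return {x - o.x, y - o.y, z - o.z}; }
        double dot(const Vec3 &o) const { return x * o.x + y * o.y + z * o.z; }
    };

    // Does the segment from p to the light sample hit a sphere (center c, radius r)?
    bool occluded(const Vec3 &p, const Vec3 &light, const Vec3 &c, double r) {
        Vec3 d = light - p;            // un-normalized: t in (0,1) spans the segment
        Vec3 oc = p - c;
        double a = d.dot(d);
        double b = 2.0 * oc.dot(d);
        double k = oc.dot(oc) - r * r;
        double disc = b * b - 4.0 * a * k;
        if (disc < 0.0) return false;  // the ray's line misses the sphere entirely
        double t = (-b - std::sqrt(disc)) / (2.0 * a);
        return t > 1e-6 && t < 1.0;    // hit lies strictly between point and light
    }

    int main() {
        Vec3 p{0, 0, 0};               // the point being shaded
        Vec3 sphereC{0, 2, 0};         // occluder sitting between point and light
        double sphereR = 0.5;
        Vec3 lightCenter{0, 5, 0};     // center of a 2x2 area light in the y=5 plane

        // Hard shadow: one ray toward the light's center gives a yes/no answer.
        std::printf("single ray: %s\n",
                    occluded(p, lightCenter, sphereC, sphereR) ? "shadowed" : "lit");

        // Soft shadow: N rays toward jittered points on the light; the *fraction*
        // blocked gives a penumbra, at N times the shadow-ray cost per pixel.
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> jitter(-1.0, 1.0);
        const int N = 256;
        int blocked = 0;
        for (int i = 0; i < N; i++) {
            Vec3 s{lightCenter.x + jitter(rng), lightCenter.y,
                   lightCenter.z + jitter(rng)};
            if (occluded(p, s, sphereC, sphereR)) blocked++;
        }
        std::printf("%d samples: %.1f%% occluded\n", N, 100.0 * blocked / N);
        return 0;
    }

And of course the same cost multiplier shows up everywhere: antialiasing wants several eye rays per pixel, and doubling the resolution quadruples the pixel count, so these sample budgets compound quickly.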

Addendum: I was thinking about this because I got sent a link to a discussion on Slashdot. This reply voices some of the same concerns.

You know, I don’t read Slashdot anymore. I didn’t actually make a conscious choice to stop; it just seems like there is seldom anything of interest being discussed there. Has anyone else abandoned Slashdot (not out of principle, but just because it isn’t that good)?

Addendum2: Oh, and this paper (PDF) from Intel talks about effects such as ambient occlusion, and illustrates it with a picture from Pixar’s Monsters, Inc. The only problem: we didn’t actually use raytracing and ambient occlusion on Monsters, Inc.

Addendum3: The San Jose Mercury News has a relevant article on the chess game that the CPU and GPU manufacturers are playing to position themselves in this bold new all-raytracing world.

Addendum4: Wayne and Julian informed me that the image presented in the Intel paper above does indeed have ambient occlusion in it. It was a special one-off that they did for their own SIGGRAPH paper; it used models from Monsters, Inc. but never appeared in the movie. The 2003 copyright date should have tipped me off.

Tags: Ray Tracing, GPU, Computer Graphics

Comments

Comment from Wayne
Time 8/30/2006 at 1:01 pm

However, we did use ambient occlusion while rendering the picture in Figure 1, which is reproduced from the paper referenced in Intel’s paper. 🙂

Comment from Julian
Time 8/30/2006 at 1:57 pm

Re: addendum 2 – hrm, they “borrowed” that picture from our paper, and that image actually was generated with ambient occlusion (as a one-off; obviously we didn’t use it in the movie).