Category Archives: Computer Graphics

Real-Time Rendering Blog

Back around 1984 or so, I first became interested in computer graphics. I was going to college at the University of Oregon, and we didn’t really have any graphics courses or any computers that you would think would be good at displaying graphics. Eventually they got a Tektronix 4115 terminal (which was huge, and cost about $20K back in the day if memory serves), which I got hooked up to our VAX 11/750, and I had my platform. I started to read and experiment with raytracing. Some early enthusiasm and guidance were provided over USENET by Eric Haines, whom I must thank for helping inspire and inform me in those early years. Eric’s patience has shifted from being measured in Blinns to milli-Blinns now, and he’s co-authored a book called “Real-Time Rendering” and also maintains a very useful blog on the subject. Every once in a while, I glance over and realize that the world is changing and I’m not keeping up. But if you want to keep up with developments, Eric’s blog is a good place to start.

Real-Time Rendering · Tracking the latest developments in interactive rendering techniques.

Kurt Akeley’s Publications

The other day I was lucky enough to be invited by Bob Whitehill to share a lunch up at UCB with Marty Banks, Kurt Akeley, and a bunch of other vision researchers from their lab. I had once been interviewed by Kurt when he was at SGI, back in 1994 or so. His office faced Moffett Field, and on that particular day they were test flying Harriers. My recollection is that we spent the time mostly discussing a program he wrote in OpenGL to emulate the barrel distortion of fisheye lenses, and watching the Harriers go back and forth. I didn’t get the job offer, and Kurt had no recollection of it. Oh well. I suppose I remember that day pretty well because after my SGI interview, I went directly to a first date with the terrific lady who would later agree to marry me. October 25th, 1994.

Since I’ve been living in the bowels of Pixar production, I must admit that I haven’t been keeping up with Kurt’s doings, and it seems like that is a big mistake on my part. In trying to look up a paper on stereo displays that he co-authored with Marty et al., I found his page at Microsoft, which links to a lot of interesting material.

Archived for future digestion.

Kurt Akeley – Microsoft Research

Day 2 of the NVIDIA GPU Technology Conference

Yep, I’m actually at the NVIDIA (why do I want to type it like nVidia? everything seems to indicate that it is all caps now, I dunno) GPU Technology Conference, trying once again to get into the swing of things with respect to GPUs. Sadly, almost everything I know about computer graphics (which is actually considerable) predates the ongoing revolution that GPUs have brought to the table. We’ll see if some of this sticks this time around. For the first time, I’m beginning to grasp the functionality of current machines and see where the evolution is going, enough that it feels worth committing a significant fraction of my (seemingly always shrinking) store of available brain cells.

It’s about time for the second-day keynote, by Hanspeter Pfister of Harvard, to start. I was lucky enough to serve with Hanspeter on a SIGGRAPH Sketch committee a few years back, and it seems like he’s been busy since then.

I might blog more about what’s going on here, but you might as well track what’s going on using NVIDIA’s official conference blog:

GPU Technology Conference Blog – NVIDIA.

Micro-Rendering for Scalable, Parallel Final Gathering

Thanks to Kevin Bjorke for pointing out this paper. It combines a couple of interesting features to create a point-based renderer that efficiently uses the GPU to render scenes with global illumination. I’ll have to read it more carefully when I have time.

Micro-Rendering for Scalable, Parallel Final Gathering.


https://www.youtube.com/watch?v=Z9u8EdFbmiI

Volume Rendering: Going boldly where all have gone before…

Image from CT data downloaded from http://graphics.stanford.edu/data/voldata/

Okay, this is a bit gruesome, but I’ve been dusting off some old papers on Fourier volume rendering that I never really understood, and testing my understanding by writing some simple code that takes in a volume dataset and uses the fast Fourier transform to convert it into simulated X-ray pictures. The data set this picture was generated from was downloaded from this webpage at Stanford, and consists of CT scan data of a cadaver. The basic code works! Given a bit more work, I should be able to do arbitrary orthographic views.

The math for all this is described quite admirably in Tom Malzbender’s paper Fourier Volume Rendering.
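For the curious, here’s a minimal sketch of the axis-aligned case (not my actual code, just the idea). By the projection-slice theorem, the 2D Fourier transform of an X-ray projection of the volume equals the central slice of the volume’s 3D Fourier transform, so you can recover the projection by pulling out that slice and inverse transforming it. I’m assuming a NumPy volume array indexed (z, y, x) and projecting along z:

```python
import numpy as np

def xray_projection(volume):
    """Simulate an orthographic X-ray along the z axis of a 3D volume
    using the Fourier projection-slice theorem: the 2D FFT of the
    projection is the central (kz = 0) slice of the 3D FFT."""
    # 3D FFT of the density volume, shifted so zero frequency is centered
    vol_ft = np.fft.fftshift(np.fft.fftn(np.fft.ifftshift(volume)))
    # Extract the central slice perpendicular to the projection direction
    kz0 = vol_ft.shape[0] // 2
    central_slice = vol_ft[kz0, :, :]
    # Inverse 2D FFT of the slice gives the line-integral (X-ray) image
    image = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(central_slice)))
    return np.real(image)
```

For an axis-aligned view this is just an expensive way of summing along z, but the payoff is that once the 3D spectrum is computed, an arbitrary orthographic view only requires resampling a tilted central slice rather than re-summing the whole volume. That resampling step is also where interpolation artifacts start to bite.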

Addendum: Here are the raw data slices:


https://www.youtube.com/watch?v=UrwnuEtNUKc

Addendum2: Here is a link to some more interesting volume datasets.

Addendum3: Here’s my attempt at rotation. It doesn’t really work right yet. I think I understand why.


Loren posts Vol Libre on Vimeo

A few days ago, I posted links to a couple of videos that used fractal rendering techniques. One was the Genesis effect from Star Trek II: The Wrath of Khan; the other was a 4KB demo contest entry. My intent was to show how computer graphics have evolved over the past 25 or so years. But the reason I chose the Genesis effect was that I couldn’t find an online copy of a work called Vol Libre, Loren Carpenter’s 1980 film. I bump into Loren occasionally at lunch here at Pixar, and I mentioned it in passing.

I don’t know if this is a coincidence, but today on Facebook he announced that he had uploaded a copy of it to Vimeo. Thanks a bunch, Loren! Sit back and enjoy the vintage computer graphics from a true CG pioneer, and you’ll probably be shocked to find out how similar it is to stuff you’d see in the modern “demo scene”.

Vol Libre from Loren Carpenter on Vimeo.

Variations on a Theme In Computer Graphics History

It was originally my intention to compare and contrast Loren Carpenter’s Vol Libre, a milestone in computer animation and the first film to use fractal techniques, with the second video, which is a 4KB demo contest winner, but sadly, I was unable to find Vol Libre on YouTube. So, instead, I’ll compare it to the Genesis effect sequence from Star Trek (what was I thinking? Thanks Robert!) II: The Wrath of Khan. After all, Loren worked on that one too.


https://www.youtube.com/watch?v=QXbWCrzWJo4

https://www.youtube.com/watch?v=_YWMGuh15nE

What a difference a couple of decades of hardware makes, huh? I’ve managed to live through both of these periods. The field still amazes me sometimes.

@ SIGGRAPH 2008

Well, I’m sitting at The Standard (a frankly far too chic hotel for a forty-something computer geek like myself), it’s not quite 7 A.M., and I’ve already spent my first day at SIGGRAPH. I’m here mostly for recruiting: sitting in the booth, answering questions, and showing up at our Pixar User’s Group meeting. We are handing out 20th-anniversary RenderMan walking teapots: very nice. I managed to get one for Josh, but haven’t picked up one of my own; I’ll try to later and get a picture here.

I’m not really attending papers (you can get links here), but there seems to be quite a bit of buzz about Intel’s Larrabee architecture. Broadly speaking, the trend in GPUs has been to slowly expand both the number and capability of the different functional units: more shader units that can execute more arbitrary code, and more texture units whose results are available to more of those units. Larrabee leapfrogs this: we are back to having x86 cores (not my favorite architecture, but ubiquitous) which are fully general, linked together by a fast shared cache, with scheduling done in software. To me, this represents the obvious end game of the evolution of GPUs. Companies like NVIDIA have been trying to tell us that we can use GPUs to do more general computation; Intel has delivered an architecture where that claim is much more obviously true.

Oh well, I’m gonna get some pancakes at IHOP, then walk the show floor a bit. I want to try to see what the state of the art in stereo monitors is, and maybe see who I can bump into. I’ve got booth duty again at 1:30 (more teapots handed out at 2:00) and then the User’s Group meeting at 6:30. If any readers are at SIGGRAPH, feel free to come by the booth and say hi.

Pencil Drawing

Josh over at tinyscreenfuls is digging some of the fancy “pencil sketch” effects that the Mac can do with its internal camera.  Back in 1998, I experimented with writing some filters that did much the same, with some examples that I generated shown on the right.   Macintosh?  I don’t need no steekin’ Macintosh. 🙂
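I don’t remember the details of those 1998 filters, but one common recipe for this kind of look (not necessarily what I did back then) is to blend a grayscale image with a blurred, inverted copy of itself using a color-dodge blend. Here’s a minimal sketch in Python with NumPy and SciPy; the function name and parameters are just illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pencil_sketch(gray, sigma=8.0):
    """Rough 'pencil sketch' effect: color-dodge blend of a grayscale
    image with a blurred, inverted copy of itself.
    `gray` is a 2D float array with values in [0, 1]."""
    inverted = 1.0 - gray
    blurred = gaussian_filter(inverted, sigma=sigma)
    # Color dodge: brightens flat regions toward white while leaving
    # dark strokes along edges, which reads as pencil shading
    sketch = np.clip(gray / (1.0 - blurred + 1e-6), 0.0, 1.0)
    return sketch
```

Feed it a grayscale image scaled to [0, 1]; a larger sigma gives softer, broader strokes.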

And now, for your next project, render an entire feature-length film. Beneath your desk, you’ll find a pencil, a yellow pad, and a C compiler…