Things CPU Architects (and others) Need To Think About

August 13, 2004 | Link of the Day, Rants and Raves | By: Mark VandeWettering

Bob Colwell gave an interesting talk at Stanford about his experiences as Chief Architect of Intel’s IA32 processors from 1992 to 2000. I spent an hour and a half watching the downloaded video, and thought it was an interesting look into where CPU design is going, not going, and what that means for products.

I’m not a huge hardware guy, but the talk is pitched at a pretty straightforward level; I had no problem following his presentation.

The points that really struck home were about complexity. At Pixar, I spent well over a decade working on RenderMan, Pixar’s core renderer that we have used for all our films. Many of the criticisms that Colwell had for later architectures (increasing fragility, difficulty of extension, the burden of maintaining backward compatibility, a shrinking ability to keep track of all aspects of the design) are common in large software projects as well. When I finally left the RenderMan group, I suspect that I understood about 85% of the renderer really well, which was probably about as high as anyone’s, but it was clear that there were many subtle interactions among features which led to confusing performance variations. Often, trying to tune these variations was like moving piles of sand: you move some sand, but you always leave a little behind and pick up some dirt too.

Colwell is ultimately concerned that if CPU manufacturers continue to play the “more transistors, more die, more speed, more heat” game they’ve been playing, they will end up with a dead-end architecture that they can’t sell, because nobody wants a processor that dissipates 2 kW.
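To give a rough sense of why that game runs out of road, here’s a back-of-envelope sketch using the standard dynamic-power relation P ≈ C·V²·f. The capacitance and frequency numbers below are my own illustrative guesses, not figures from Colwell’s talk.

```python
# Back-of-envelope dynamic power: P ~ C * V^2 * f (switched capacitance,
# supply voltage, clock frequency). All numbers here are illustrative guesses.

def dynamic_power_watts(capacitance_nf, voltage_v, frequency_ghz):
    """Approximate dynamic power in watts from P = C * V^2 * f."""
    return (capacitance_nf * 1e-9) * (voltage_v ** 2) * (frequency_ghz * 1e9)

# A roughly 2004-era chip: ~20 nF of switched capacitance, 1.3 V, 3 GHz.
baseline = dynamic_power_watts(20, 1.3, 3.0)

# Naively keep playing the game: 4x the switched capacitance, 4x the clock.
scaled = dynamic_power_watts(20 * 4, 1.3, 3.0 * 4)

print(f"baseline ~{baseline:.0f} W, scaled ~{scaled:.0f} W")  # ~101 W vs ~1622 W
```

Even this crude model lands in the same neighborhood as that 2 kW punchline after only a couple of naive doublings of transistors and clock speed.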

Good stuff.