I’ve recently come to the conclusion that most people believe it is more important to have an immediate and distinct opinion than an actual informed one. This was brought home to me in the aftermath of the Columbia disaster, when newsgroups like sci.space.shuttle were flooded with premature speculation and cries for the heads of NASA management, long before any sober consideration of the facts was possible.
Of course, if you are looking for this kind of thing, there is no better place than Slashdot. Case in point: the recent article Slashdot | Pixar Eclipses Sun with Linux/Intel.
The resulting flood of messages runs the gamut, but many of the articles which were moderated up contained little information, or even outright misinformation.
I was under the impression…
That Sun had tried renderman (or whatever they call it) to run on 32 bit processors and it was a horrible disaster. Something about how it seemed more feasible and cost efficient to use Sun until the days in which the competing 64 bit processors became cheaper.
I could have sworn that the software couldn’t run at all in 64 bit. I’m just wondering if they didn’t take a step down when they converted 64-bit optimized code to run on regular high cache 32-bit pentiums.
First of all, Sun doesn’t own or develop RenderMan. Pixar Animation Studios does. Until a few
years back, Sun platforms made up the bulk of RenderMan sales. The availability of cheap Intel
boxes and the maturing of Linux as an inexpensive operating system have made that combination the platform which a large number of RenderMan customers now choose.
RenderMan is a very portable system, with a history going back at least 15 years. It has been ported to 64-bit machines, including the DEC Alpha and the SPARC.
I’m actually a little surprised they use general purpose CPUs for this kind of task. I’d have thought that a load of custom DSPs might be faster, and probably cheaper – How about 1 DSP per pixel (About 10 million?). I’m sure that would really zip along, if they could sort out the memory access issues inherent in this kind of application. Ray tracing is perfect for parallel execution, since each pixel really is independent of each other pixel, and each frame is likewise independent.
This is a particularly silly opinion, because it sounds plausible unless you know something about the scale and scope of what producing a movie actually entails. Pixar used to manufacture special-purpose computer graphics hardware. They lost money, and they stopped. In its place they created a new portable rendering system, and have since ridden 15 years of Moore’s law to make it 1000 times faster, merely by having a mid-range software guy spend a few weeks porting to each new architecture. If the commenter can describe how Pixar could build custom hardware, with a new release every 18 months that doubles in speed, more cheaply than that, then I’d like to hear from him.
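The arithmetic behind that “1000 times faster” figure is easy to check. As a quick sketch (using the 18-month doubling period mentioned above):

```python
# Sanity check of the Moore's law figure: one doubling every
# 18 months, sustained over the 15 years mentioned in the text.
years = 15
doubling_period = 1.5            # 18 months, expressed in years
doublings = years / doubling_period
speedup = 2 ** doublings
print(doublings)  # 10.0
print(speedup)    # 1024.0 -- the "1000 times faster" in round numbers
```

Ten doublings in fifteen years gets you a factor of 1024, essentially for free, if your software is portable enough to follow the hardware.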
As for the old parallel ray tracing comment, people who make it have never tried to write rendering software, particularly software that can handle the large scenes which are typical of movie production. In principle such an application is trivially parallelizable, but the reality has lots of details which are hard to manage, starting with the fact that every worker still needs access to the full scene description and its textures.
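The “every pixel is independent” half of the claim can be sketched in a few lines. This is a toy illustration, not anything resembling production code; `shade` here is a stand-in for a real renderer’s per-pixel work:

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 64, 48

def shade(pixel):
    """Stand-in for a renderer's per-pixel work (ray casting, shading,
    texture lookups); here it just derives a grey value from the coords."""
    x, y = pixel
    return (x * y) % 256

def render(workers=4):
    # Every pixel is independent of every other pixel, so the work
    # farms out trivially across a pool of workers, and map() returns
    # the results in pixel order.
    pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(shade, pixels))
```

The catch, which the toy version hides, is that in production each of those workers needs the entire scene and its textures resident, and for film-sized scenes managing that footprint is most of the work.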