Fourier Volume Rendering
Three years ago, I wrote a short post about volume rendering. I always meant to follow up, because I finally sorted out the problems with generating multiple, overlapping images. Here’s a new video generated with the improved code:
Fourier volume rendering is less flexible than raytracing, but it does have certain computational advantages, most notably that you can generate images in less than O(n^3) time, the typical bound for conventional raytracing. It also has applications in light field refocusing. But overall, I still think that raytracing has a number of advantages. Perhaps that would make a good first serious Go program? I’ll ponder it more.
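The sub-O(n^3) claim comes from the projection-slice theorem: the parallel projection of a volume equals the inverse 2D transform of a central slice through its 3D Fourier transform. After a one-time 3D FFT, each new view costs only a slice extraction plus a 2D inverse FFT. Here is a minimal sketch of the axis-aligned case (a hypothetical illustration, not the post’s actual code), using numpy:

```python
import numpy as np

def fourier_projection(volume):
    """Project `volume` along axis 0 via the projection-slice theorem."""
    spectrum = np.fft.fftn(volume)      # one-time O(n^3 log n) 3D FFT
    central_slice = spectrum[0, :, :]   # central slice: the k=0 plane for axis 0
    # Inverse 2D FFT of the slice gives the projection in O(n^2 log n)
    return np.real(np.fft.ifft2(central_slice))

# Sanity check: matches a direct summed projection along the same axis
vol = np.random.rand(32, 32, 32)
assert np.allclose(fourier_projection(vol), vol.sum(axis=0))
```

For arbitrary viewing angles you would instead interpolate an oblique slice through the 3D spectrum before the inverse transform, which is where the method’s image-quality trade-offs (interpolation artifacts in frequency space) come in.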
Comments
Comment from StefanBanev
Time 8/14/2012 at 11:27 am
>less than O(n^3) time, which is the typical bound for conventional raytracing
It’s apparently true for naive (brute force) volume ray casting. For adaptive volume ray casting (on coherent data like CT/MRI/PET), the time complexity may go down to ~O(n^2 * log2(N)), where n^2 is the number of pixels on the projection plane and N is the size of one side of the volume cube. In fact, n^2 is the worst-case scenario; thanks to the coherency of the projected scene, n^2 may go down considerably, 5x to 10x, without noticeable quality compromises (in the same way as slightly lossy JPEG). The gallery at
http://www.youtube.com/user/stefanbanev
illustrates well the interactive performance and data-size scalability on a previous generation of Intel multi-core CPUs. A Sandy Bridge based multi-core CPU speeds the rendering up by ~40%. I’m not aware of any GPU-based volume ray casting capable of getting even close, in terms of interactive quality, to this CPU-based one; in my opinion, the major reason is the SIMD nature of modern GPUs, which prevents implementing sophisticated adaptive algorithms per ray.
Comment from Elwood Downey, WB0OEW
Time 8/18/2012 at 11:22 am
Stefanbanev: thanks for the vid, yes, that is exactly what we were trying to do. But my gosh, this is WAY better than our 256^3 voxel cubes; it’s rather shocking to be away from a topic and suddenly see what has happened after 25 years.
Comment from Elwood Downey, WB0OEW
Time 8/13/2012 at 9:00 am
Back in the 80s, I was the chief programmer at a startup working on a system that took in CT or MRI slices and rendered them in 3D using voxels. We built custom bit-slice hardware so we could render and manipulate the scenes in real time. We added surgical editing tools and multispectral tissue classification color coding so the surgeon could plan pathways through the tissues. I recall we visited Pixar to explore possibly using their new geometry engines, but we decided our application was closer to image processing. I wandered off into astronomical engineering after that, so I’ve lost track of the state of the art in that area, but it was pretty groundbreaking stuff back then and great fun.