
Monkeying around with the Lytro Camera…

A couple of people on my twitter feed yesterday (aside: I tweet as @brainwagon, and passed 5000 tweets yesterday) had questions about how this light field camera works: how fast the sensor is, how long it takes to acquire the image, etc… While this is the first Lytro camera I’ve ever had the time to tinker with, I did spend a couple of years doing R&D on computational photography in general and light field photography in particular, so I am pretty familiar with how these things work. Combined with information like the Lytro Meltdown and the lfp splitter, that let me tear apart the files for the example “monkey” picture I took yesterday. For completeness:



So, how does the Lytro take pictures that can do this?

First, let’s take a look at the cross section of the camera, thoughtfully provided from Lytro on the web:

[Image: cross section of the Lytro camera]

Despite its kind of primitive outer appearance, inside it’s remarkably complex. Having played a small part in the optical design of a similar camera, I can tell you these designs are remarkably tricky. But you might look at it and say: “Gosh, it’s a telephoto lens, big whoop! Where is the ‘secret sauce’?”

It’s in the area labelled “light field sensor.” Instead of an ordinary CCD, which simply samples the illumination on the focal plane at a bunch of individual locations, the light field camera has a micro lens array: an array of tiny lenses which allows the camera to measure not only the total illumination arriving at a location, but its distribution: what proportion of that light is arriving from each direction. It’s this property that eventually allows the computational magic of refocusing.
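To make that a bit more concrete: once the light field has been decoded into a set of “sub-aperture” images (one per direction), refocusing is just the classic shift-and-add trick. Here’s a minimal numpy sketch, assuming the raw data has already been resampled into a 4D array indexed by direction and position (which itself requires the difficult calibration mentioned later) — the function and parameter names are mine, not Lytro’s:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Shift-and-add refocusing of a 4D light field.

    lightfield: array of shape (U, V, S, T) -- sub-aperture images
    indexed by aperture position (u, v) and pixel position (s, t).
    alpha: relative focal depth; alpha = 1 reproduces the original
    focus plane, other values focus nearer or farther.
    """
    U, V, S, T = lightfield.shape
    out = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            # Each sub-aperture image gets shifted in proportion to
            # its distance from the aperture center, then accumulated.
            du = (u - (U - 1) / 2) * (1 - 1 / alpha)
            dv = (v - (V - 1) / 2) * (1 - 1 / alpha)
            out += np.roll(lightfield[u, v],
                           (int(round(du)), int(round(dv))), axis=(0, 1))
    return out / (U * V)
```

Sweeping alpha over a range of values is what produces the “refocus after the fact” effect: each alpha brings a different depth plane into alignment across the sub-aperture images.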

You probably aren’t able to visualize that very well (I certainly couldn’t when I began), but here’s an example which may (but probably won’t) help a bit. Even if you don’t completely get it, it’s kind of cool.

Using the lfpsplitter tools above, I extracted the “raw” pixel data from the monkey snapshot I took. If you are familiar with the way most cameras work, you might know that inside a digital camera is a sensor which can be thought of as an array of pixels. Some are sensitive to red, some to green, some to blue, usually arranged in a grid called a Bayer filter or Bayer mask. Software in your camera is responsible for looking at each individual R, G, and B pixel and combining them to produce RGB pixels at a resolution lower (usually by half) than the native resolution of the sensor. The image below is a similar “raw” image of the sensor data coming from the Lytro. It is represented as monochrome values, each of which is 16 bits. It looks dark because none of the processing (Bayer filtering, exposure, color balance, etc.) has been done. The original images are 3280×3280, which I’ve shrunk down to fit on this page.
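The raw-to-RGB step the camera software normally does can be sketched in a few lines. This is the crudest possible demosaic — collapsing each 2×2 Bayer cell to one RGB pixel, which is exactly why the output is half the sensor’s linear resolution — and the GRBG cell layout here is an assumption for illustration, not the Lytro’s verified pattern:

```python
import numpy as np

def demosaic_half(raw):
    """Crude half-resolution demosaic of a Bayer-masked sensor image.

    raw: 2D uint16 array straight off the sensor.  Assumes a GRBG
    2x2 cell layout (the actual Lytro ordering may differ).  Each
    cell collapses to one RGB pixel, halving the linear resolution.
    """
    g1 = raw[0::2, 0::2].astype(np.float64)
    r  = raw[0::2, 1::2].astype(np.float64)
    b  = raw[1::2, 0::2].astype(np.float64)
    g2 = raw[1::2, 1::2].astype(np.float64)
    # Average the two green samples in each cell.
    rgb = np.dstack([r, (g1 + g2) / 2, b])
    return rgb / 65535.0   # scale 16-bit values to [0, 1]
```

Real raw converters also subtract a black level, white-balance the channels, and interpolate to full resolution, but this captures the basic idea.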

[Image: the full raw sensor image, shrunk to fit]

You can probably see the monkey, but might ask, “Again, what’s the deal? Seems like just a dark, bad image of the monkey.” Let’s zoom in.

[Image: zoomed view of the raw sensor data]

And further?

[Image: further zoom, with the lenslet images becoming visible]

And finally down at the bottom, looking at individual pixels:

[Image: individual pixels, showing the circular lenslet images]

The large image is actually made up of little tiny circular images, packed in a hexagonal array. Each pixel is about 1.4 microns across. The circular images of each lenslet are about 13.89 microns across. The rectilinear “gridding” artifact you see is from the Bayer mask.
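From those two numbers you can work out the basic geometry of the microlens array yourself. A quick back-of-the-envelope sketch (the ideal hexagonal grid below is my simplification — a real camera needs per-unit calibration of the lenslet centers):

```python
import math

# Geometry from the post: 1.4 micron pixels, 13.89 micron lenslets,
# packed hexagonally on a 3280x3280 sensor.
PIXEL_PITCH_UM = 1.4
LENSLET_PITCH_UM = 13.89

pixels_per_lenslet = LENSLET_PITCH_UM / PIXEL_PITCH_UM   # ~9.9 px across

# In a hex packing, rows are spaced by pitch * sqrt(3)/2 and every
# other row is offset horizontally by half a pitch.
row_spacing = pixels_per_lenslet * math.sqrt(3) / 2

def lenslet_centers(n_rows, n_cols):
    """Approximate lenslet center coordinates in pixel units,
    assuming an ideal hex grid aligned with the sensor."""
    centers = []
    for r in range(n_rows):
        x_off = (pixels_per_lenslet / 2) if r % 2 else 0.0
        for c in range(n_cols):
            centers.append((c * pixels_per_lenslet + x_off,
                            r * row_spacing))
    return centers
```

So each little circular image is only about ten pixels across — which is why the refocusable output image has far fewer pixels than the 3280×3280 sensor would suggest.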

Pretty nifty.

The software that gets you from this raw image to the final image is actually nontrivial, in no small part because the calibration is so difficult. But it’s awesome that I have a little gadget that can acquire these raw light fields (our prototypes were far bulkier).

Last night, I spent some time trying to understand the Wifi protocol, and wrote some code that was successful in receiving the callback messages from the camera, but had a bit more difficulty with understanding and getting the command messages to work. The idea is to create a set of Python programs that will allow me to pull this kind of raw data from the camera, without needing to go through the Mac OS/Windows Lytro Desktop software. If anyone has done this, I’d love to compare notes. Stay tuned.
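For anyone curious what that kind of experimentation looks like, here’s the shape of the code I’ve been poking at. To be clear: the port number and the length-prefixed framing below are placeholders of my own, not the camera’s actual protocol — the real details are in the Lytro Meltdown notes:

```python
import socket
import struct

# Placeholder -- the real port and message framing come from the
# Lytro Meltdown protocol notes, not from this sketch.
CAMERA_PORT = 5678

def parse_message(data):
    """Parse a hypothetical length-prefixed callback message:
    a 4-byte little-endian payload length followed by the payload.
    Returns the payload, or None if the buffer is incomplete."""
    if len(data) < 4:
        return None
    (length,) = struct.unpack_from("<I", data, 0)
    if len(data) < 4 + length:
        return None   # incomplete; wait for more bytes
    return data[4:4 + length]

def listen(host):
    """Connect to the camera and print callback payloads as hex."""
    with socket.create_connection((host, CAMERA_PORT)) as s:
        buf = b""
        while True:
            buf += s.recv(4096)
            payload = parse_message(buf)
            if payload is not None:
                print(payload.hex())
                buf = buf[4 + len(payload):]
```

The callback side really is the easy half; the command side means getting the request framing byte-for-byte right before the camera will answer at all.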

On early camera lenses…

I like it when my life’s experience and interests toss me an opportunity, and out of the blue last week I received an invitation to help with a class a colleague is trying to put together to teach people to build their own cameras. He wondered if I could give an hour or so of introduction to camera lens design. It’s really odd that I know anything about camera lens design, but when I was really into building telescopes, I acquired and read a fair number of books on optics and cameras, and in my job it’s proven occasionally useful. (I even managed to be a coinventor of a light field motion picture camera.) But really, it’s always been pretty much just a hobby.

Which isn’t to say it isn’t fun, and interesting, and an opportunity to build some cool stuff.

The history of camera lens design is actually pretty nifty, and goes back over two centuries, even predating the invention of film (early lenses were used in the camera obscura). I remember reading (and subsequently forgetting a great deal) of the history of the camera lens in a borrowed copy of Kingslake’s History of the Photographic Lens (a great work, I should someday purchase a copy of my own). But I do have a copy of Conrady’s Applied Optics and Optical Design. That book was written in 1922, and details the mathematical design methods used to design a variety of optical instruments. In particular, I recalled a design consisting of a “stop” in front of a simple, concave-forward positive meniscus lens. I couldn’t recall the name, but a few minutes of Googling reminded me that it was called the Wollaston landscape lens.

[Image: diagram of the Wollaston meniscus landscape lens]

The lens is, well, just a lens and a stop, but can yield surprisingly good images. The simplicity also makes it a great lens for experimenting with simple, primitive cameras. The lens is typically mounted in a barrel that accepts cards with different size holes for the stop, positioned about 15% of the focal length in front of the meniscus. When the lens is stopped down to about f/16, the performance can be quite good over fields of about 45 degrees or so. Conrady’s book covers the design of such a lens and tells you exactly how to optimize the system, but frankly it probably doesn’t matter that much. I’ll probably review that material, but I doubt doing any math is called for in this class. I suspect we’ll just select some roughly appropriate lenses from Surplus Shed and have at it.
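Those rules of thumb reduce to trivial arithmetic, which is part of the charm. A sketch of the starting-point layout (the function is mine, and these are rough figures for experimenting, not an optimized design — Conrady covers the real optimization):

```python
# Rough Wollaston landscape lens layout from the rules of thumb
# above: stop about 15% of the focal length in front of the
# meniscus, stopped down to around f/16.

def wollaston_layout(focal_length_mm, f_number=16.0, stop_fraction=0.15):
    """Return (stop distance in front of the lens, stop diameter),
    both in mm, as starting-point values for a landscape lens."""
    stop_distance = stop_fraction * focal_length_mm
    stop_diameter = focal_length_mm / f_number
    return stop_distance, stop_diameter
```

For a 100 mm meniscus, that puts the stop card about 15 mm in front of the lens with roughly a 6 mm hole — easy numbers to hit with a cardboard tube and a hole punch.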

A former Pixarian and colleague, Craig Kolb (along with Don Mitchell and Pat Hanrahan), did a really nice paper back in his Stanford days entitled A Realistic Camera Model for Computer Graphics which showed how you could simulate more complex camera lenses which have many subtle effects not usually captured by the simple pinhole projection model used by most rendering software. I can’t remember if I reviewed the paper for SIGGRAPH, or if I just talked to him about it, but I always thought it would be cool to try to simulate one of these simple camera lenses and show how the “defects” of these simple lenses could be appropriately simulated in CG. He never did it, and neither did I. It still remains on my list of projects to do.

One good thing about looking at these old designs is that a lot of information can be had from materials which you can get for free online. A bit of googling revealed this nifty little book (digitized by Google and available on Google Play) which has a lot of good information about the Wollaston landscape lens, and other simple lenses of the day. It’s conveniently out of copyright, so free to all.

Bolas and Brown’s The Lens

Hopefully more on this to come.

Addendum: Bolas and Brown’s book is not without its problems: I saw this diagram while reading it, and realized that it’s not accurate. Off-axis parallel rays should be focused, well, off axis; this diagram shows them coming to focus on the optical axis. Whoops!