My Arduino bumper, with actual prints!

Okay, our Replicator 2 came back online this week, and I decided to give printing my Arduino bumper another try. Since last time, I had revised the design and code a couple of times. I was concerned that the various bits of solder protruding from the bottom of the board would need extra relief cuts. As I tried to account for more and more of these, I decided I didn’t like the design especially well, so I took a different tack: I made the bumper’s walls as thin as possible, and only included an area around each of the Arduino bolt holes. This minimized the amount of material needed, and also meant that it could print more quickly.

And so, I printed it out. I used the MakerBot software, set for the Replicator 2: once at medium quality (a print time of about sixteen minutes) and once at high quality (a print time of nearly an hour). I used 25% infill for both, since I wanted to make sure the outer and inner shells were sufficiently bonded.


I had a little spare time, so I also decided to print a simple model I made in OpenSCAD: a cross section of the Clark Y airfoil, 1/8″ thick. It turned out rather well: fairly smooth, with only a minor divot at the trailing edge. It was also reasonably sturdy; I don’t think I could crush it in compression with my hand, although its impact resistance is unknown. I also took the time to download a model of the chicken from Minecraft and print that; it comes in four parts, which can then be glued together. I had no real failures:


But there were a few issues.

First of all, my Arduino bumpers were a pretty tight fit.

I coded in a clearance of 0.008″ around the nominal board size, but that was simply not enough. I think I’ll expand that to 0.012″ or even 0.015″ next time around. I was able to press fit one of my OSEPP Arduino boards into it, but just barely, and only by shaving a small amount off one corner of the board with an X-Acto knife. I also think I should thicken the bottom slightly: if you used this bumper to bolt against a conductive surface, some of the solder joints on the bottom of the board could still short out. I am also thinking of widening the channels cut for the USB and DC jacks a small amount; they fit, but just barely, and an additional 0.02″ would make them more comfortable. I also noticed that one of the prints had a corner which pulled up and wasn’t level/coplanar with the rest of the print; I’m not sure what that was about. But overall it worked! I’ll make these changes to the design, try another set of prints, and then you should be able to see it on Thingiverse.

I also printed this model of the chicken from Minecraft. It comes in four parts (body, head, two feet) which you can assemble and paint. The model is quite simple; I printed it with medium quality settings and 10% infill. It worked rather well, except that the parts do not assemble easily: the head is slightly too wide to fit into the slot in the body, and the feet do not fit into the holes left in the bottom. I think a little judicious belt sanding will make the head fit, and I’ll measure and re-drill the holes to make the legs fit. But in general, the issue of clearances seems to be one I need to explore more. Does anyone have any good references/hints/guides they would like to share?

Creating an Anaglyph from a Stereograph

Back in 2005, I wrote a little blog post about creating stereo images with the GIMP, along with a screencast. Little did I know that a couple of years later I’d end up learning far more about stereo imaging when I became the stereo rendering lead for Toy Story and Toy Story 2. That post recently circulated back into the sidebar in my “years ago on this date” section, so I thought I’d resurrect the video (the links were broken). Even though it is dated, it’s still pretty good information, so I uploaded it and made it available again (I would have used YouTube, but it’s 11 minutes long, and YouTube rejected it). Enjoy!

More Fuji Real 3D W1 experiments (and an anaglyph)

Well, I’ve tinkered around a bit more and discovered a few things: the camera doesn’t actually do very much to align images. In fact, there is a fairly significant vertical misalignment between the two cameras (I’m guessing on the order of a dozen pixels) which doesn’t really hurt things too badly, but which annoys me as the perfectionist stereo practitioner I’m being paid to be. 🙂

In yesterday’s article, I used exiftool to extract the left and right images, which works just fine. There, I composited the two images directly together, which resulted in a substantial amount of parallax over most of the image (the entire scene sat behind the screen, with significant disparity). We can shift the images’ relative position by hand to reduce this overall disparity and move the objects closer to the screen plane, which makes the image easier to view and fuse. I decided to give this a try using GIMP, and constructed the anaglyph below using the following steps:

  1. Extract the left and right images from the MPO file, as yesterday.
  2. Run “gimp l.jpg r.jpg”.
  3. I like traditional anaglyphs, so I converted each image to grayscale and then back to RGB mode, producing gray scale versions of both images.
  4. I then added a new layer atop each image. For the left image, I filled that layer with solid red; for the right image, I filled it with solid cyan.
  5. Select the “Screen” layer mode for both of the new layers. This should let you see both a red-colored and a cyan-colored image.
  6. Use “Merge down” to flatten each image into a single layer.
  7. I then copied the red layer, and used “Paste into” to put it on top of the cyan layer.
  8. Change the new layer’s mode to “Multiply”. Voila! You have an anaglyph. You can use the shift tool (while wearing anaglyph glasses if you like) to adjust and align the layers as you see fit.
  9. Crop and export!
  10. In the photo below, Luigi’s eyes should be roughly at screen depth, so his front bumper will appear modestly in front of the screen. I probably could do a bit better on the vertical alignment, but it’s not bad.
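For anyone who would rather script this than click through GIMP, the steps above can be approximated in a few lines of Python. This is just a sketch of the same idea, assuming the Pillow imaging library is available (GIMP is what I actually used); the dx/dy parameters play the role of the shift-tool alignment step:

```python
from PIL import Image, ImageChops

def make_anaglyph(left, right, dx=0, dy=0):
    """Build a traditional red/cyan anaglyph: the left eye's grayscale
    goes into the red channel, the right eye's into green and blue
    (i.e., cyan). dx/dy shift the right image to adjust disparity and
    vertical alignment. Note that ImageChops.offset wraps pixels around
    the edges, so crop afterward to hide the seam."""
    l = left.convert("L")
    r = ImageChops.offset(right.convert("L"), dx, dy)
    return Image.merge("RGB", (l, r, r))
```

Cropping afterward (step 9 above) also takes care of the wrapped edge pixels introduced by the shift.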


    Addendum: Here’s Bruce!

Fujifilm Real 3D W1 Camera

Today we got an interesting new toy in the lab: a Fujifilm FinePix Real 3D W1. It’s a very cute little camera which you can think of as the modern-day equivalent of the old Nimslo 3D cameras. It has two lenses, and can acquire both 3D and regular 2D images, which it displays on a built-in lenticular 3D display on the back of the camera.

But that’s not all that much fun: you’d have to pass the camera around, and let’s face it, the display is pretty small. Fuji is supposed to have a digital picture frame which can display these images in 3D, and is also supposedly going to offer a service bureau so you can get lenticular prints made, but again, that’s not much fun, at least not today. So instead, I decided to see what I could learn about the image formats it uses.

For still imagery, it writes both a .JPG and a .MPO file. The JPGs are standard JPEG images, and can be read by pretty much anything. The MPO files are Multi Picture Objects, a format that I hadn’t seen before. Digging around a bit, I found out that (with some complications) they are mostly just two concatenated JPEG images. Most JPEG readers seem to open and read the first image (which I believe to be the right lens image) without any difficulty, so if you aren’t interested in the stereo image, you can pretty much treat them like JPEGs (although iPhoto doesn’t even try to notice them, and refuses to try to open them).
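Since the file is mostly just concatenated JPEGs, you can even pull the frames apart with a naive byte scan. This Python sketch is only a demonstration of that structure, on the assumption that each frame begins with a JPEG start-of-image marker followed by an APP1 segment (FF D8 FF E1); a marker sequence could in principle also appear inside an embedded thumbnail, so treat it as an illustration rather than a robust parser:

```python
def split_mpo(data: bytes):
    """Crudely split an MPO into its component JPEGs by scanning for the
    JPEG start-of-image marker followed by an APP1 marker (FF D8 FF E1).
    Each found offset starts a frame that runs to the next offset (or to
    the end of the file)."""
    marker = b"\xff\xd8\xff\xe1"
    offsets = [i for i in range(len(data)) if data.startswith(marker, i)]
    ends = offsets[1:] + [len(data)]
    return [data[start:end] for start, end in zip(offsets, ends)]
```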

A little more digging revealed that there is a tool called exiftool which can be used to extract the images. After you install exiftool, you can run:

exiftool -trailer:all= input.mpo -o R.jpg
exiftool input.mpo -mpimage2 -b > L.jpg

to extract the left and right image. If you have ImageMagick installed, you can create a red/blue anaglyph with a command like:

composite -stereo L.jpg R.jpg stereo.jpg

Here’s an example that is just an image that I shot as a test:


I also did a bit of experimentation with video. The camera records an AVI file with two video streams and one audio stream. I did some quick tests using mplayer to dump the frames of each video stream into separate directories, combining the frame pairs with composite, and then assembling a video from the resulting frames. It worked, but the example footage was terrible and the results could use some work, so I’ll hold off on that example for now.

Overall, it’s a pretty neat little gadget, although I must admit you can buy a much better camera for its $600 price tag. It’s neat not just for what it can do, but because it demonstrates some of the capabilities that I think cameras of the future will have.

More about it after I play with it a bit more.

‘Toy’ Stories in 3-D – Buzz Lightyear Finds a Dimension

Well, today’s the day. For about the last year, I’ve been working away deep within the halls of Pixar Animation as part of a fantastic crew, and today (and for the next two weeks) you all can see the result of our efforts: the conversion of the original Toy Story and Toy Story 2 into 3D. For many, this is an opportunity to see a pair of classic films on the big screen, and in a new way that we all hope you will enjoy. Check it out, and let me know what you think!
