Learning the ropes…

July 24, 2014 | Retrocomputing | By: Mark VandeWettering

Over the past few years, I’ve expressed an interest in the AGC, or Apollo Guidance Computer. If you haven’t had the time to look at it, the Wikipedia page is good enough to get a brief overview, but if you want to dig deep, you can find all sorts of information and simulators.

I found myself looking back into it this week for two related reasons: first, the anniversary of the Apollo 11 landing, which I still remember, and second, a new argument that I’ve read (but won’t dignify with a link) claiming that the moon landings were faked because the AGC could not have worked. But I must admit, he pointed at one bit of the AGC, its core rope memory, which he claimed couldn’t work. The safer claim would be that he didn’t understand how it worked, but when I thought about it, I realized that I didn’t really know how it worked either. And that bothered me, so I thought I’d dig into it a bit more.

Here’s a brief, high level introduction:

The basic idea is pretty simple, and relies on using ferrite toroids as transformers. Imagine that you have two wires going through a ferrite core. If you send a pulse down one wire, it will induce a pulse on the other wire. This is the principle behind all transformers, which vary the number of turns through the core to step the voltage up or down. You can build a simple memory using this principle. This kind of memory is demonstrated admirably by SV3ORA, who created a 70 bit ROM that serves as a decoder for 7 segment LEDs. A pulse (or better yet, a pulse stream) on one of the ten input lines generates the appropriate set of output voltages to display the corresponding numeral on a 7 segment LED display. His webpage has some nice, easy to follow circuits, and a cute little video of it working.
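The decoder idea is easy to model in software. Here’s a minimal Python sketch of it (the segment patterns and names below are mine, not taken from SV3ORA’s circuit): each input line threads only the cores whose output segments should light, so “pulsing” a line just recovers its row of couplings.

```python
# Model of a transformer ROM: bit k of row n is 1 if input line n
# threads the core driving output (segment) k.  Pulsing an input
# line induces output pulses only on the segments it threads.
# Segment order below: a, b, c, d, e, f, g (the usual 7-segment labels).
SEGMENTS = "abcdefg"

ROM = [
    0b1111110,  # 0: a b c d e f
    0b0110000,  # 1: b c
    0b1101101,  # 2: a b d e g
    0b1111001,  # 3: a b c d g
    0b0110011,  # 4: b c f g
    0b1011011,  # 5: a c d f g
    0b1011111,  # 6: a c d e f g
    0b1110000,  # 7: a b c
    0b1111111,  # 8: all seven segments
    0b1111011,  # 9: a b c d f g
]

def pulse(line):
    """Pulse one of the ten input lines; return the set of
    segments that see an induced output pulse."""
    row = ROM[line]
    return {SEGMENTS[k] for k in range(7) if row & (1 << (6 - k))}
```

Pulsing line 1, for example, lights only segments b and c, exactly the weave you’d expect for the numeral 1.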

But if you look at the diagram for the Apollo Guidance Computer, it looks a little different. It has a series of “inhibit” lines that weave in and out of the cores, in addition to some sense lines.


The first description I found was around page 90 of this report, MIT’s Role in Project Apollo, Volume 3. But to be honest, I didn’t really understand it. Luckily, I found what appears to be a better description: P. Kuttner’s 1963 paper, The Rope Memory — A Permanent Storage Device. I still need to work through the details, but it makes a lot more sense to me, and I’m beginning to see how the address decoding works. I’ll ponder it a bit more, but the more sense it makes, the more I see it for the clever piece of engineering it is. It was remarkable in its day, allowing densities of 1500 bits per cubic inch, including all the address decoding. Very cool.


I got it! I got it! I really got it!

July 23, 2014 | Baseball | By: Mark VandeWettering

I haven’t had much of a chance to get to ballgames this year. I normally go to about a dozen or so A’s games during a typical season, but this year I basically haven’t made it to any. Life has just filled up with other things to do. But last night, the mystical forces of the diamond converged in the form of a pair of free tickets and a free parking night at the O.co Coliseum. Athletics vs. Astros, woohoo!

It was a beautiful night for a ballgame. The temperature was in the mid-sixties or so, with very little wind. At first pitch, it didn’t seem like there would be a very large crowd; there were lots of empty seats. I guessed that fewer than 10,000 fans were in attendance, which was actually kind of okay with me. I like the relatively laid-back atmosphere of these mid-July games. But as the game wore on, more and more people began to sit down. Checking this morning, official attendance was 22,908. Not too bad.

A very nice game all in all. The A’s gave up 2 runs in the top of the third, but scored in the bottom half and again in the sixth to tie the game. It remained that way through the ninth, but L.J. Hoes hit a home run for the Astros in the top of the 12th, and the A’s went down 1-2-3 in the bottom half.

Ah, but I’ve buried the lead.

In all the years that I’ve been going to ballgames, I have never come away with a foul ball. I have been hit in the head by one, but my slow reflexes and the near concussion meant that I didn’t come up with the ball on my one best shot at getting one. But last night, I finally did it, in the most surprising way.

Carmen and I were seated in row 29 of section 113, which is directly (but far) behind the visiting team’s dugout. The top of the third had just ended, so I was just sitting there, checking my phone, when suddenly the people around me got excited. I looked up just in time to see a ball, which literally landed in my lap, bounced against my chest, and stopped. I’m guessing that one of the Astros lobbed this ball trying to get it to the very cool pair of Astros fans in row 20 or so, but misjudged. And so, this time without the threat of head injury, I got my first game ball:



Awesome! Achievement unlocked.


Happy Birthday, brainwagon!

July 21, 2014 | Blogging | By: Mark VandeWettering

On this date back in 2002, I started this blog. Since that time, I’ve published 4019 posts, with a total of 725,146 words. I hope some of you have enjoyed it. I’m slacking off, but still find stuff that I think is fun, and hope you drop in from time to time to read and drop me a comment.


Large Format Shoebox Camera…

July 18, 2014 | Photography | By: Mark VandeWettering

My recent experiments with large format photography using primitive cameras have me googling and surfing around. In my rampant clicking, I uncovered this very simple camera, which is even simpler than the 4×5 cameras that our class constructed. It’s just a positive meniscus lens with a 120 mm focal length, stopped down to f/90, held in place by something called “patafix” (a kind of clay adhesive), and used to expose an 8×10 paper negative. At f/90, it’s definitely straddling the line between a pinhole and a real lens. There is no provision for focusing at all, but at f/90 the depth of focus is rather large, and his examples are pretty impressive. Worth checking out. You can see an album of pictures from this camera here on Flickr.
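As a quick sanity check (my own arithmetic, assuming 550 nm green light for the common Rayleigh-style pinhole formula), the physical aperture at f/90 is still nearly three times the diameter of an “optimal” pinhole for that focal length, so it really does sit between the two regimes:

```python
import math

focal_length = 120.0            # mm, from the shoebox camera
f_number = 90.0

# Physical aperture diameter at f/90.
aperture = focal_length / f_number                     # ~1.33 mm

# "Optimal" pinhole diameter by the common Rayleigh-derived rule
# d = 1.9 * sqrt(f * wavelength), taking 550 nm green light.
wavelength = 550e-6                                    # mm
pinhole = 1.9 * math.sqrt(focal_length * wavelength)   # ~0.49 mm
```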


Another image from my foamcore 4×5 camera

July 14, 2014 | Photos | By: Mark VandeWettering

Another picture from my foamcore 4×5 camera. Roughly 150mm landscape lens, f/24, 3:50 second exposure onto Ilford Multigrade RC paper. I probably could have developed it a bit longer, but it’s not bad. I inverted the print in GIMP, but made no other tonal adjustment.



Ken tinkers with DTL, and SV3ORA’s transistorized 4-bit digital computer made out of discrete DTL

July 9, 2014 | Homebrew CPU, Homebuilt CPUs | By: Mark VandeWettering

Ken stumbled on one of my earlier posts about DTL (diode transistor logic) and was interested enough to do some basic exploration. He reduced the DTL NAND gate to a double diode, a transistor, and two resistors. Ken sent me LTspice and EagleCAD screen dumps of a gate that fits in about a 0.4″ square:


Pretty cool. In an email, Ken goes a bit further:

I’m working towards a bitslice pcb that implements either an ALU or a Program counter. Remarkably their logic is so similar that a single block of logic could be configured to match either requirement. I think I can get it all on to a 4″ x 2″ pcb with a couple of LEDs and a toggle switch on the front edge. Stack 8, 12 or 16 of these together and you have something similar to the PDP-8.

Awesome Ken! I hope to hear more about this when you have some hardware running.

I haven’t done any real thinking on it since then, but I went back and tried to find more information on people building things with DTL logic. I’m not sure whether I had spotted SV3ORA’s 4 bit digital computer before, but rereading it today, it’s very cool. He constructed the logic on perfboard using just ordinary components. Very nice.

A transistorized 4-bit digital computer made out of discrete DTL

Addendum: Ken also pointed out the NAND to Tetris course in his email, which I believe I may have blogged about before; it’s a great resource for anyone seeking to develop a more complete vertical understanding of computers from the ground up. Ken’s addition of actual soldering to the project makes it even cooler.


Some example python code to fetch weather forecasts…

July 8, 2014 | My Projects, Python, Web Programming | By: Mark VandeWettering

Need to get some weather information? The website forecast.io has a nice web-based service you can use up to one thousand times a day for free. I was thinking of using it for an automated sprinkler application, but just to test it, I wrote this simple chunk of Python to try it out. To use it yourself, you’ll need to get your own API key and modify it to use your own latitude and longitude. It’s not that amazing, but you might find it of use.

#!/usr/bin/env python

#   __                        _   
#  / _|___ _ _ ___ __ __ _ __| |_ 
# |  _/ _ \ '_/ -_) _/ _` (_-<  _|
# |_| \___/_| \___\__\__,_/__/\__|
# A python program which uses some publicly available
# web apis to find out what the forecast will be.

import os
import time
import json
import urllib2

# You'll need an API key below... you get 1000 requests per day for free.
# Go to forecast.io and sign up.

URL = "https://api.forecast.io/forecast/"
API = "your-api-key-goes-here"

# Your latitude and longitude belong here, I use SF for example
LAT = 37.7833
LNG = -122.4167

directions = ["N", "NNE", "ENE", "E", "ESE", "SSE",
              "S", "SSW", "WSW", "W", "WNW", "NNW"]

def bearing_to_direction(bearing):
    # twelve 30 degree sectors, centered on each compass point
    d = 360. / 12.
    return directions[int((bearing + d/2) / d) % 12]

now = time.time()
cached = False

# if we fetched a forecast less than 15 minutes ago, just reuse it...
if os.path.exists("WEATHER.cache"):
    f = open("WEATHER.cache")
    parsed = json.loads(f.read())
    f.close()
    if now - parsed["currently"]["time"] < 900:
        cached = True

if cached:
    print "::: Using cached data..."
else:
    print "::: Reloading cache..."
    req = urllib2.Request(URL + API + "/" + ("%f,%f" % (LAT, LNG)))
    response = urllib2.urlopen(req)
    parsed = json.loads(response.read())
    f = open("WEATHER.cache", "w")
    f.write(json.dumps(parsed, indent=4, sort_keys=True))
    f.close()

c = parsed["currently"]
print ":::", time.strftime("%F %T", time.localtime(c["time"]))
print "::: Conditions:", c["summary"]
print "::: Temperature:", ("%.1f" % c["temperature"]) + u"\u00B0"
print "::: Dew Point:", ("%.1f" % c["dewPoint"]) + u"\u00B0"
print "::: Humidity:", ("%4.1f%%" % (c["humidity"] * 100.))
print "::: Wind:", int(round(c["windSpeed"])), "mph", bearing_to_direction(c["windBearing"])

d = parsed["daily"]["data"][0]
print "::: High:", ("%.1f" % d["temperatureMax"]) + u"\u00B0"
print "::: Low:", ("%.1f" % d["temperatureMin"]) + u"\u00B0"

d = parsed["hourly"]["data"]

for x in d[:12]:
    print time.strftime("\t%H:%M", time.localtime(x["time"])), x["summary"], ("%.1f" % x["temperature"]) + u"\u00B0"

Two more pictures from my foamcore 4×5 camera…

July 8, 2014 | My Photos, My Projects, Optics, Photography, Photos | By: Mark VandeWettering

Here are two more photos I took at last night’s camera workshop. I wanted to shoot something slightly more beautiful than a selfie, so I chose the Luxo statue outside the Steve Jobs Building at Pixar, and some white flowers from the garden. Both were taken rather late in the day, under partly cloudy skies, onto paper with an ASA value of around 4, using a 4 second exposure (timed by my accurately counting “Mississippis”). Both were shot at f/24. I scanned them using our copy machine at 600dpi, and then inverted them in GIMP. I didn’t do any further processing on the Luxo. With the flowers, I adjusted the curve slightly to bring out some details in the darks between the flowers. I saved these versions as JPEGs; click on them to see them at full resolution.



If you look to the upper right of the Luxo, you can see that there are some significant off-axis aberrations, as is also apparent in the background of the flowers. But the center of the field is remarkably sharp, considering. I’m rather pleased.


An Experimental 4×5 Camera with a ridiculous lens… and a ridiculous selfie

July 2, 2014 | Optics, Photography | By: Mark VandeWettering

Over the years that I’ve been interested in computer graphics and telescopes, I’ve managed to pick up a bit of knowledge about optics in general, and specifically about camera lens design. In the past, I’ve been particularly interested in old cameras and photography, and in a kind of photographic minimalism. But it has remained mostly an academic interest, with no real practical results.

Until recently.

I was recently asked to provide a little bit of background on camera lenses and lens design at an informal workshop. The purpose of the workshop was for each participant to build and use a camera of their own construction. I’ve taken similar courses before where we did pinhole photography. Here’s the apex of that experiment, a picture of my desktop:


Taken with this camera. Note the curved back, which results in the odd panoramic distortion of the previous picture.


But this time the class was a bit more ambitious. We were going to make cameras that would shoot on 4×5 film, and use a real lens (or lenses) to give us faster focal ratios and interesting distortions and other effects. We ordered some lenses with focal lengths of around 150mm from Surplus Shed for a few bucks apiece (favoring positive meniscus lenses, along with some around 300mm that we thought we’d use to experiment with symmetrical lens arrangements), got some 4×5 sheet film holders, and a pile of black foamcore and gaffer tape. Each person’s camera was a bit different. Here’s mine:

It’s a pair of boxes, each about 7″ across, which telescope together. To create a bit of a light trap, there are both an inner and an outer box in the back; the section which holds the lens slips in between those two, and also provides a rough focusing mechanism. The lens is a meniscus with about a 150mm focal length and about 50mm in diameter. It’s not an achromat, just a simple lens, configured as a Wollaston landscape lens. I constructed a small box to hold it about 1 inch behind the front of the camera, and then punched a 1/4″ hole in some black paper to serve as a stop. Instead of a true shutter, I decided to just make a little trap door. For our first tests, we were going to image directly onto photographic paper, which had an ASA rating of around 3 or 4. With the 1/4″ stop in place, my camera operates at around f/24. To make my first “selfie” in room light, I guesstimated an exposure time of 30 seconds. The first exposure was far too light. I then caved and used a smartphone app to get a better estimate, and it suggested a three minute exposure time. I shot this on ASA 3 positive paper. I triggered the shutter myself, then sat down and tried to be as still as possible. When the time was up, I got back up and closed the shutter. Into the darkroom… and bathing in the rinse!
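As an aside, the f/24 figure is easy to verify from the numbers above (my arithmetic, not measured on the camera):

```python
# Sanity check on the camera's focal ratio: a ~150 mm lens behind
# a 1/4" (6.35 mm) hole punched in black paper.
focal_length = 150.0            # mm
stop = 0.25 * 25.4              # 1/4 inch converted to mm

f_number = focal_length / stop  # ~23.6, i.e. roughly f/24
```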


I scanned the picture, cropped it, and did a very tiny exposure tweak to darken it a bit (I probably should have left it in the developer a touch longer), and here’s my selfie:


I’ll try to get some new shots next week. But it’s a fun project, I urge anyone to give it a try. These simple lenses are more effective than you would think.


Nifty Arduino/AVR hack: measuring VCC

June 20, 2014 | Arduino, Atmel AVR | By: Mark VandeWettering

In my previous article pondering sensors for my garden, I shamefully neglected a viable and interesting choice: the JeeNode, available here in the U.S. from Modern Device. It’s sold as a kit, which is unusual, but not particularly scary to assemble; it’s just a few through-hole parts. It’s a pretty bare bones processor, but it does, interestingly, include a wireless module: the RFM12B, running on either 433 or 915 MHz. But what really makes the JeeNodes interesting to me is that they abandoned the (I submit) broken shield model of the Arduino, and instead group the Arduino’s output pins into a collection of identical Ports which can be accessed using their JeeLib library. Each port consists of six pins (one ground, one Vcc, and four data pins), and each can be used to implement an I2C bus in software to access peripherals. Very cute, and much nicer than the hodgepodge of existing code for the Arduinos.

But the website at JeeLabs has a bunch of other cool stuff, like details on one of their JeeNodes that’s been transmitting data wirelessly for over eight months, powered by just a coin cell, or Dive Into JeeNodes, a tutorial on interfacing JeeNodes to a Raspberry Pi to make a house monitoring system. While the blog isn’t being updated anymore, it includes all sorts of good stuff, including this rather clever article on VCC Measurement (a software-only way for an Arduino to determine its own input voltage). Great stuff.
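The trick, as I understand it (this sketch is my own summary of the technique, not code from the JeeLabs article), is to have the ADC measure the AVR’s internal 1.1 V bandgap reference while using VCC itself as the ADC reference; since the bandgap is roughly fixed, the 10-bit reading lets you solve for VCC. The arithmetic is just:

```python
# VCC measurement trick: the ADC measures the fixed internal 1.1 V
# bandgap reference, with VCC itself serving as the ADC reference.
# From a 10-bit reading we can solve for VCC:
#     reading = (1.1 / VCC) * 1023   =>   VCC = 1.1 * 1023 / reading
# The 1.1 V figure is nominal and varies a bit from part to part.
def vcc_from_reading(reading, bandgap=1.1):
    return bandgap * 1023.0 / reading
```

A reading of 225 works out to right around 5 V, and 341 to about 3.3 V, which is how a battery-powered node can watch its own supply sag without any external parts.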


DIY FPV Micro Quad…

June 20, 2014 | Quadcopter, Radio Controlled Airplanes | By: Mark VandeWettering

Building my full sized quadcopter is going rather slowly (sigh), but in the meantime I picked up a little Hubsan X4 to play with. It’s cheap, and because it has very low mass, it’s pretty hard to destroy. After more crashes than I can count, I’ve only managed to ding up one propeller (and replacements are pretty cheap and easy to get). But I must admit that one of the reasons I’m interested in quads and RC vehicles is to shoot video from them. While it is possible to get microquads that carry cameras, or even allow FPV, I kind of like the idea of homebrewing something. Such projects are often aided by following in the footsteps of giants: looking at how others have solved problems helps a bunch. It’s also inspiring. That’s why I was particularly enthused to find this article:

Build a micro-sized first-person-view quadcopter

A couple of things I like about the article:

  • It suggested the Vitality H36 quadcopter, which has one really interesting feature: it’s compatible with FlySky/Turnigy radio transmitters. It would be cool to use my big transmitter with the tiny quad.
  • It provides good hints on the video camera, transmitter, and receiver module that you might want to use.
  • It has good links to circularly polarized antenna construction details.
  • It’s an existence proof that it can be done! Awesome!

It looks like a complete hoot!


Pondering some sensors for my garden…

June 19, 2014 | Arduino | By: Mark VandeWettering

We’ve started a garden at our house, in a pair of raised beds. I’ve been pondering creating a set of sensors to monitor the dryness of the soil in the beds, as well as in the container where I have a dwarf Meyer lemon tree going. I was trying to figure out what a good sensor would be. Ideally, I want a simple, low power computer, which is fairly cheap and easy to deploy. I’ve got more than a few Arduino variants around (Uno, Fio, RedBoards, Nanodes, and Wildfires), and they have more than enough horsepower to do what I anticipate (reading temperature, humidity, and moisture), but there are two things that make them less than completely satisfactory to me:

  • They don’t have any kind of wireless link. You could certainly add one of several kinds, but…
  • That adds to an already fairly expensive board. Unos are about twenty five dollars, which seems like a lot. There are simpler boards, of course, such as the Pro Mini; Sparkfun has those for about ten dollars. You can also order Arduinos from Chinese manufacturers to drive the cost down, but a board cost of about ten dollars is about the minimum you can get.

It seems to me that a complete node has a minimum cost of around $20 when assembled this way: a Pro Mini, a small wireless board, and a couple of cheap sensors. Making the node solar powered would probably cost a few bucks more. But while digging around, I discovered that Low Power Lab has a really cool little board: the Moteino. It’s a very cool little board, offers a choice of RF modules, and comes in versions with and without USB. Do I really need another different Arduino clone?

Sigh. Maybe.

Moteino from Low Power Lab


Drone Lunch…

June 19, 2014 | DIY Drones, Quadcopter, Radio Controlled Airplanes | By: Mark VandeWettering

At work, we have an informal group that is interested in drones and quadcopters. Every third Thursday, we get together and fly. Today we went over to Cesar Chavez Park for a bit of flying. I was hoping that I’d have more of my own quadcopter completed, but instead I just observed Mark fly his Bixler and his One Piece Quad, while John flew his Phantom 2 around. They should have some footage up in the next few days. To tide you all over, here is some footage that Jeremy shot on two previous drone lunches, one filmed at work, and the other at the top of Mount Diablo. Enjoy.

Pixar Drone Meet from Jeremy Vickery on Vimeo.


Making a simple RC switch…

June 14, 2014 | electronics, LED, My Projects | By: Mark VandeWettering

Over the last couple of years, I’ve spent a little bit of time making fixed wing aircraft from Dollar Tree foam. The results have been interesting and fun, but the need to find relatively large areas to fly means that it’s harder to get out and fly than I would like. Multicopters, on the other hand, require relatively little area, and I suspect I could do test flights almost anywhere. So, over the last few months I’ve begun to accumulate the necessary parts to put one together. As this project continues, I’ll write up some more.

But one of the things I thought about today in between World Cup matches was putting some LED lights on the quadcopter. Besides just looking cool, they are especially useful on quadcopters because they let you see the craft’s orientation more easily.

My RC mentor Mark Harrison has some notes about how he wired up LEDs on his quadcopter. The basic idea is to get some LED strips, and then power them from a brushed ESC (like this one for $4.95) driven by a spare channel on the RC receiver. It’s a pretty good way to go.

But I wondered if I could do better. I’ve made a Christmas hat with LEDs before, driven by an Atmel ATTiny13, surely I could homebrew something…

So here’s my thought: I want to create a small PC board with just a few components. It will be powered from the receiver, will read the servo control channel, and will use an Atmel ATTiny85 (I have a dozen on hand that I got from Tayda Electronics for about $1.12 each, and they have 8K of flash, plenty for this application). At its simplest, we want the program to configure one pin as an input and one pin as an output. The servo control signal looks like this:


The pulse will be somewhere between 1ms and 2ms long. The ATTiny just needs to monitor the channel: if the pulse is longer than 1.5ms, it sets the output high; otherwise it sets the output low. And that is pretty easy.

In the past, I’ve used avr-gcc to program these tiny chips. For the ATTiny13, with just 1K of flash, that’s pretty understandable. But it turns out that with a bit of work, you can use the Arduino environment to program the ATTiny85. This website gives most of the details. It’s pretty easy, and I got my MacBook all configured in just a few minutes. Then, I entered and compiled this (as yet untested) code:

/*
 * I thought I would dummy up a simple RC controlled switch using
 * an Atmel ATTINY85 processor, but code it all in the Arduino 
 * environment.  The idea is to use pulseIn() to read the data signal
 * from the RC receiver, and threshold it, setting an output pin
 * if the pulse length is > 1500 microseconds, and clearing it if less.
 * Very simple.  It should be able to be powered directly from the 
 * receiver, and if we had some kind of FET we can switch large loads
 * (like a strip of LEDS). 
 */

int inputPin = 0 ;
int outputPin = 1 ;

void setup()
{
  pinMode(inputPin, INPUT) ;
  pinMode(outputPin, OUTPUT) ;
  // if we never get a pulse, we want to make sure the 
  // output stays switched off.
  digitalWrite(outputPin, LOW) ;
}

void loop()
{
  // pulseIn() returns the pulse width in microseconds...
  digitalWrite(outputPin, pulseIn(inputPin, HIGH) > 1500 ? HIGH : LOW) ;
}

Couldn’t be much easier. If you hook up an LED with a current limiting resistor to pin 1, you can test this (I’ll put up some video once I get it going). To power a larger 12V string (like this red LED string I got for $8.90 for 5m from Amazon), you’ll use that output to drive a FET to switch the much larger current, but that’s dead simple too. I can probably run the ATTiny off its internal oscillator at just 1 MHz.

But as cheap as that is, it’s probably not worth the cost of home brewing one.

But of course, you don’t need to stop with this stupid application. You have a total of five or six pins, so you can easily control multiple strings. You can also implement more sophisticated controls: instead of just using an on/off signal, you can look for short switch flips versus long switch flips, and change into different blinking modes. A short flip might make the landing lights blink; a long one turns them on steady. You could even use the ATTiny to drive RGB addressable LEDs (check out this Instructable for details). Different flips might turn on different colors, or switch between Larson scanner modes… blinking… the sky’s the limit.
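To make the flip idea concrete, here’s a rough Python sketch of the decoding logic (the thresholds and light behaviors are made up for illustration; the real thing would live in the ATTiny’s loop):

```python
# Sketch of classifying switch "flips" by how long the channel
# stays high.  Durations are in milliseconds; thresholds are
# made up for illustration.
SHORT_MAX = 500     # flips shorter than this count as "short"
LONG_MIN = 1000     # flips longer than this count as "long"

def classify_flip(duration_ms):
    if duration_ms < SHORT_MAX:
        return "short"      # e.g. toggle the landing-light blink
    elif duration_ms >= LONG_MIN:
        return "long"       # e.g. lights on steady
    return "ignored"        # ambiguous middle ground, do nothing

def apply_flips(flips):
    """Fold a sequence of flip durations into a final light state."""
    state = {"blinking": False, "steady": False}
    for f in flips:
        kind = classify_flip(f)
        if kind == "short":
            state["blinking"] = not state["blinking"]
        elif kind == "long":
            state["steady"] = True
    return state
```

Two short flips cancel out and a long flip latches the lights on, which is exactly the kind of stateful behavior a bare ESC channel can’t give you.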

I’ll let you know when I get more of this done.


Probabilistic Models of Cognition

June 13, 2014 | Computer Graphics, Computer Science | By: Mark VandeWettering

This week began with a visit from Pat Hanrahan, currently a professor at Stanford and formerly at Princeton, where I was lucky enough to meet him. He came by to talk about probabilistic programming languages, an interesting area in which he and his students have made progress on some difficult problems. I don’t have much to say about it, except that it seemed very cool in a way which I’ve come to expect from Pat: he has a tendency to find interesting cross disciplinary connections and exploit them in ways that seem remarkable. I haven’t had much time this week to think about his work, but he did mention this website, which gives examples of probabilistic computation and cognition, and seemed pretty cool. I’m mostly bookmarking it for later consumption.
