London and Its Environs by Baedeker

April 9, 2014 | Gutenberg Gems | By: Mark VandeWettering

The other day I was watching the 1939 movie The Adventures of Sherlock Holmes, a rather fun film starring Basil Rathbone and Nigel Bruce. Early in the film, the maniacal Professor Moriarty (played with great zest by George Zucco) is menacing his butler Dawes for allowing one of his prize orchids to wither while he was in jail awaiting trial for murder. He laments the injustice of serving six weeks in jail for murdering a man, while a more suitable punishment for murdering one of his flowers would be to be “flogged, broken on the wheel, drawn and quartered and boiled in oil”.

Great stuff.

Immediately after, he takes one of his orchids and presses it meaningfully into a copy of Baedeker’s London and Its Environs. I love old books, so I looked it up on Project Gutenberg. And, of course, they have digitized it, at least the 1901 version. It’s a traveler’s guide, carefully documenting all the kinds of things you might want to know about when visiting London at the turn of the last century. Neat! What’s also cool is that Moriarty puts the orchid at a place where there is a map or diagram on the left side of the book. A few minutes of perusing reveals that it’s a map of the Tower of London, which plays a key role in the story. Nice bit of foreshadowing! Digging around, it kind of makes me wish that I had a copy, and as luck would have it you can get a digital version of the 1899 edition for your Kindle. It also appears you can get facsimiles of original Bradshaw railway guides for about the same. If I ever return to working on my Sherlock Holmes story, I’ll have some useful references.


Hellschreiber Update…

April 7, 2014 | Amateur Radio | By: Mark VandeWettering

Scott Haley mentioned my old Hellduino post on Facebook: a simple project that used an oscillator powered by an Arduino to send Hellschreiber, a kind of simple fax mode invented by Rudolf Hell in the 1920s. I did this mainly as a simple test, inspired by Steve Weber KD1JV’s “temp2morse” project. But unfortunately, that page seems to be gone, so the schematic isn’t available. It’s not a huge deal: almost any Colpitts oscillator would do in its place, with the main power rail driven by a digital output on the Arduino, but I thought I’d see if I could find one suitable. I’ve built this low power oscillator documented by Hans Summers before; it’s probably overkill (it’s meant to drive a 50 ohm antenna, and actually radiate some single digits worth of milliwatts). K7MTG’s HF Thermometer project was the inspiration for Steve’s, so it’s probably a good place to start. If you look at his schematic, you’ll see it has no antenna and no power amplifier. It is actually a bit more sophisticated than my first test circuit was: L1 and C3 form a tuned circuit, which probably makes the waveform a bit more sine-like (if you look at my video, you’ll see the waveform isn’t ideal). Converting this circuit to send Hellschreiber is just a question of software, since Hellschreiber is (like Morse) just a matter of sending dots at the right time.
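That last point is worth making concrete: keying a Hellschreiber glyph is just scanning a bitmap on a fixed pixel clock. Here’s a sketch of the idea in Python; note that PIXEL_SEC and the 5-column glyph are placeholders I made up for illustration, not Feld-Hell’s actual font or timing.

```python
# Sketch: turn a column-scanned glyph bitmap into an on/off keying
# schedule for a carrier. PIXEL_SEC and the glyph are illustrative
# placeholders, not Feld-Hell's real timing or font.
PIXEL_SEC = 1.0 / 245.0   # assumed seconds per pixel

# A made-up 5-column x 7-row glyph for 'H', column-major,
# bit 0 = bottom row of the column.
GLYPH_H = [0b1111111, 0b0001000, 0b0001000, 0b0001000, 0b1111111]

def keying_schedule(columns, rows=7):
    """Yield (carrier_on, duration_sec) pairs, scanning each column
    bottom-to-top, one pixel per PIXEL_SEC."""
    for col in columns:
        for row in range(rows):
            yield (bool(col & (1 << row)), PIXEL_SEC)

schedule = list(keying_schedule(GLYPH_H))
total = sum(d for _, d in schedule)   # 35 pixels worth of air time
```

On the Arduino side, each (on, duration) pair would just become a digitalWrite and a delay.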

Addendum: Jeff Kellem was nice enough to do some heavy lifting and find a copy of the original schematic for Weber’s temp2morse project:


He found it on LB3HC’s blog. The original article (without images) is archived via the Internet Archive’s Wayback Machine: click here to enter the time vortex.


Digital ATV resources…

April 1, 2014 | Amateur Radio, Amateur Satellite | By: Mark VandeWettering

I’ve been doing a bunch of reading about digital ATV operations lately. I was originally motivated by hearing about the HamTV project aboard the ISS. Back in 2007, I got re-energized into ham radio by learning that for the 50th anniversary of Sputnik, the amateur satellite AO-51 would broadcast a cool message that I heard with a simple HT. I’m wondering if I’m having that kind of a moment now: the idea of creating a station to broadcast digital television seems challenging but doable.

While reading up on the necessary bits for a HamTV downlink station, I found that this little satellite receiver which sells for less than twenty-five dollars could be used to decode signals from the ISS. It receives DVB-S signals, which are used by direct satellite broadcasters like Dish Network. But in thinking about how to go forward with the project, it seemed to me that aiming directly for a satellite downlink station was likely to be a frustrating endeavor. It requires a number of different subsystems to work together, and given the intermittent transmission schedule from the ISS, testing them all together against the live downlink would be difficult. So, I started looking for resources that I could use to build a similar terrestrial station, including both a transmitter and receiver.

A couple of cool links:

The DATV-Express board is a $300 exciter board that is in limited production. It seems very cool. Reading the Tech Talks on this site yielded a lot of good information; I’m particularly pondering the information in this one, about designing a digital TV station.

Another similar project, but available more as a board/kit is the Digilite project. An interesting variation of this project is the DigiliteZL project, which makes for a compact and interesting setup.

I also like the CQ-DATV magazine. It’s got lots of cool information, published as a series of issues available in a variety of e-book formats. They also have a broad collection of interesting articles on the Digilite project, which I’m currently reading over.

I’ll probably stick to more experimentation with SSTV, but this stuff fascinates me, and I may have the opportunity to do something interesting with it in the future.


On early camera lenses…

March 21, 2014 | Cameras, Optics | By: Mark VandeWettering

I like it when my life’s experience and interests toss me an opportunity. Out of the blue last week, I received an invitation to help with a class a colleague is trying to put together to help people build their own cameras, and he wondered if I could give an hour or so of introduction to camera lens design. It’s really odd that I know anything about camera lens design, but when I was really into building telescopes, I acquired and read a fair number of books on optics and cameras, and in my job it’s proven occasionally useful. (I even managed to be a coinventor of a light field motion picture camera.) But really, it’s always been pretty much just a hobby.

Which isn’t to say it isn’t fun, and interesting, and an opportunity to build some cool stuff.

The history of camera lens design is actually pretty nifty, and goes back over two centuries, even predating the invention of film (early lenses were used in the camera obscura). I remember reading (and subsequently forgetting a great deal) of the history of the camera lens in a borrowed copy of Kingslake’s History of the Photographic Lens (a great work; I should someday purchase a copy of my own). But I do have a copy of Conrady’s Applied Optics and Optical Design. This book was written in 1922, and detailed the mathematical design methods used to design a variety of optical instruments. In particular, I recalled a design consisting of a “stop” in front of a simple positive meniscus lens, concave side forward. I couldn’t recall the name, but a few minutes of Googling reminded me that it was called the Wollaston landscape lens. The lens is, well, just a lens and a stop, but can yield surprisingly good images. The simplicity also makes it a great lens for experimenting with simple primitive cameras. The lens is typically mounted in a barrel that accepts cards with different size holes for the stop, placed about 15% of the focal length in front of the meniscus. When the lens is stopped down to about f/16, the performance can be quite good over fields of about 45 degrees or so. Conrady’s book covers the design of such a lens, and tells you exactly how to optimize the system, but frankly it probably doesn’t matter that much. I’ll probably review that material, but I doubt doing any math is called for in this class. I suspect we’ll just select some roughly appropriate lenses from Surplus Shed and have at it.
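The arithmetic for laying one of these out is trivial, which is part of the charm. As a sketch (the 150 mm focal length is just an example value I picked, not anything from Conrady):

```python
# Rough Wollaston landscape lens layout, per the rules of thumb above:
# stop about 15% of the focal length in front of the meniscus,
# stopped down to around f/16. Focal length is an example value.
focal_length_mm = 150.0                      # assumed example

stop_distance_mm = 0.15 * focal_length_mm    # stop-to-lens spacing
stop_diameter_mm = focal_length_mm / 16.0    # f/16 aperture

print(stop_distance_mm, stop_diameter_mm)
```

So for a 150 mm lens you would cut roughly 9 mm stop holes in cards sitting about 22 mm ahead of the glass.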

A former Pixarian and colleague, Craig Kolb (along with Don Mitchell and Pat Hanrahan), did a really nice paper back in his Stanford days entitled A Realistic Camera Model for Computer Graphics which showed how you could simulate more complex camera lenses which have many subtle effects not usually captured by the simple pinhole projection model used by most rendering software. I can’t remember if I reviewed the paper for SIGGRAPH, or if I just talked to him about it, but I always thought it would be cool to try to simulate one of these simple camera lenses and show how the “defects” of these simple lenses could be appropriately simulated in CG. He never did it, and neither did I. It still remains on my list of projects to do.

One good thing about looking at these old designs is that a lot of information can be had from materials which you can get for free online. A bit of googling revealed this nifty little book (digitized by Google and available on Google Play) which has a lot of good information about the Wollaston landscape lens, and other simple lenses of the day. It’s conveniently out of copyright, so free to all.

Bolas and Brown’s The Lens

Hopefully more on this to come.

Addendum: Bolas and Brown’s book is not without its problems: I saw this diagram while reading it, and realized that it’s not accurate. Off-axis parallel rays should be focused, well, off axis, but this diagram shows them coming to focus on the optical axis. Whoops!


Some continuing short bits on SSTV….

March 16, 2014 | Amateur Radio, Raspberry Pi, SSTV | By: Mark VandeWettering

Nothing too exciting going on, but minor bits of code and play have been done, so I thought I’d update.

First of all, there is a program for decoding SSTV on the Pi, called QSSTV. I don’t have a proper sound setup on the Pi yet, so I couldn’t test it live on the air, but I did take one of my pregenerated Martin 1 images and ask it to decode it, which it did quite well:


Not bad at all. While investigating qsstv’s capabilities, I discovered that the latest 8.x versions support digital SSTV. Well, except it isn’t enabled in the version built for the Pi (my guess is that the Pi doesn’t have quite enough oomph to do the necessary math in real time). But that’s pretty cool: I’ll have to check that out sometime soon.

But anyway…

I also coded up a Scotty 1 encoder, so now I have encoders for Martin 1, Scotty 1, Robot36 and Robot72 modes. I found this great book online which had many details about the different modes. It was quite helpful. It actually documents the modes a lot better than the ARRL Image Communications Handbook and is, well, free. Awesome.

One question I’ve been interested in for a while is “which mode is best?” Of course, we have to define what we mean by “best”. After all, Robot36 sends an image in half the time of Robot72, and about one quarter the time of Martin M1. My question was: how much better an image can we expect from Martin, given that it takes 4× as long? Another question was “how much bandwidth does each mode use?” In the ARRL Image Communications Handbook, they have a formula which computes bandwidth, but it didn’t make a great deal of sense to me.

I don’t know how to precisely answer either of these, but I thought I’d write some code to simply compute the power spectra of some sample SSTV recordings. So I did. It basically just loads the sound samples from the SSTV file, windows them (I used the Blackman–Nuttall window, for no real reason), runs an FFT (using the fftw3 library), and computes the power spectrum. It’s pretty easy. I then encoded a simple color bar image in three different modes, and graphed them all up using gnuplot.
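If you want to reproduce this without C and fftw3, the same pipeline is a few lines of Python with numpy. This is just an equivalent sketch, with the Blackman–Nuttall coefficients written out by hand; the 1900 Hz test tone (an SSTV “white” frequency) is an arbitrary sanity check.

```python
import numpy as np

def power_spectrum(samples, rate):
    """Apply a Blackman-Nuttall window, FFT, return (freqs, power)."""
    n = len(samples)
    k = np.arange(n)
    # Blackman-Nuttall window, coefficients written out by hand
    w = (0.3635819
         - 0.4891775 * np.cos(2 * np.pi * k / (n - 1))
         + 0.1365995 * np.cos(4 * np.pi * k / (n - 1))
         - 0.0106411 * np.cos(6 * np.pi * k / (n - 1)))
    spec = np.fft.rfft(samples * w)
    freqs = np.fft.rfftfreq(n, d=1.0 / rate)
    return freqs, np.abs(spec) ** 2

# Sanity check: one second of a 1900 Hz tone should peak at 1900 Hz.
rate = 11025
t = np.arange(rate) / rate
freqs, power = power_spectrum(np.sin(2 * np.pi * 1900 * t), rate)
peak = freqs[np.argmax(power)]
```

Dump freqs and power to a text file and gnuplot will take it from there.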


Staring at it, well, they don’t seem that different really. I should figure out the frequency bounds that (say) cover 85% of the total energy, but just eyeballing it, it doesn’t seem that bad.

I also did some minor tweaking to add in additive white Gaussian noise, but I haven’t gotten that entirely working, so I can’t yet do an apples-to-apples comparison of how each mode does at various levels of noise. And I’m looking for an HF path simulator too.
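The only fiddly part of adding AWGN is scaling the noise to hit a target SNR. A sketch of how that calibration might look (again Python rather than the C code described above):

```python
import numpy as np

def add_awgn(samples, snr_db, rng=None):
    """Return samples plus white Gaussian noise at the given SNR (dB),
    measuring signal power from the samples themselves."""
    rng = rng or np.random.default_rng(0)
    sig_power = np.mean(samples ** 2)
    noise_power = sig_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=len(samples))
    return samples + noise

# Check the calibration against a 4-second test tone.
rate = 11025
t = np.arange(4 * rate) / rate
clean = np.sin(2 * np.pi * 1500 * t)
noisy = add_awgn(clean, snr_db=10.0)
measured_snr = 10 * np.log10(np.mean(clean**2) /
                             np.mean((noisy - clean)**2))
```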

That’s about it for now. Stay tuned for more.


Analyzing an SSTV recording…

March 12, 2014 | SSTV | By: Mark VandeWettering

Inspired by this webpage, I decided to write a simple zero-crossing analyzer, just like his. The code turns out to be remarkably simple, and would allow me to reverse engineer modes that aren’t adequately documented. I called this program “analyze”:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <sndfile.h>

int
main(int argc, char *argv[])
{
    SNDFILE *sf ;
    SF_INFO sfinfo ;
    float *inp ;
    float t, dt ;
    float x ;
    int i ;
    float pcross=0, cross, freq ;

    if ((sf = sf_open(argv[1], SFM_READ, &sfinfo)) == NULL) {
        perror(argv[1]) ;
        exit(1) ;
    }

    fprintf(stderr, "%s: %d channel%s\n", argv[1], sfinfo.channels, 
                sfinfo.channels > 1 ? "s" : "") ;
    fprintf(stderr, "%s: %dHz\n", argv[1], sfinfo.samplerate) ;
    fprintf(stderr, "%s: %lld samples\n", argv[1], (long long) sfinfo.frames) ;

    inp = (float *) calloc(sfinfo.frames, sizeof(float)) ;
    fprintf(stderr, "::: reading %lld frames\n", (long long) sfinfo.frames) ;
    sf_read_float(sf, inp, sfinfo.frames) ;
    sf_close(sf) ;

    dt = 1.0 / sfinfo.samplerate ;

    for (i=0, t=0; i<sfinfo.frames-1; i++, t+=dt) {
        if (inp[i]*inp[i+1] < 0) {
            /* we have a zero crossing: interpolate its exact position */
            x = -inp[i] / (inp[i+1]-inp[i]) ;
            cross = t + x * dt ;
            freq = 1.0 / (2 * (cross - pcross)) ;
            printf("%f %f\n", cross, freq) ;
            pcross = cross ;
        }
    }
    return 0 ;
}
The code is dead simple. It loads the sound file into memory and then figures out, via simple linear interpolation, the location of all the places where the signal crosses zero. Each time we have a zero crossing, we compute the frequency of the signal, which is just the reciprocal of twice the difference between the two latest crossing times. This program dumps out the time and frequency of each zero crossing, which you can then easily visualize with gnuplot. Like this:


Next step: generate some example sound files, and use them to reverse engineer some of the less well documented modes.


A brief introduction into color spaces, as used in SSTV…

March 11, 2014 | SSTV | By: Mark VandeWettering

Rob (AK6L) was interested in my recent experiments in slow scan television, but didn’t know much about color spaces. It’s an interesting topic on many fronts, and I thought I’d write a brief post about it here to explain it to those who may not be familiar.

Consider this nice 320×240 test image of Wall-E that I’ve been using:


Most of you probably know that these images are really combinations of images in three different colors: red, green and blue. If you take a magnifying glass and look at your TV, you’ll see that your television displays images as a combination of glowing red, green and blue dots. If we instead split this color image into separate images, one for red, one for green, and one for blue, and display each one separately, we can see the image in a different way:


One thing to note: there is lots of detail in each of the three sub-images. That means that there is considerable redundancy. When data streams have lots of redundancy, that means there is an opportunity for compression. Compression means we can send data more quickly and more efficiently.

So, how can we do this? We transform the RGB images we have into a different set of three images, where most of the visual information is concentrated in one channel. That means we can spend most of our time sending the dominant channel, and less effort sending the other channels, maybe even sending lower resolution versions of those channels.

But how do we do that? Well, let’s do some magic: for each pixel in the image, let’s compute a new image Y from the R, G, and B images. Y will consist of 30% of R, 59% of G and 11% of B. This computes a representative black and white image from the R, G, and B channels. (If you didn’t know a lot about color science, you might just try averaging R, G, and B, but your eyes have different sensitivities to R, G, and B light. If you use the proportions I describe, you’ll get a much better subjective match to the value of each pixel.) Then, let’s compute two additional channels: the channel that consists of R – Y, and the channel that consists of B – Y.

If you are mathematically inclined, you’ll see that this process is invertible: no information is actually lost. Given the Y, R-Y and B-Y images, we can recover the RGB images. But what do these images look like?


(Since R-Y and B-Y may be negative, we actually compute (R-Y)/2 + 0.5 and similar for B-Y).
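In code, the forward transform (including that display offset) is just a few weighted sums. A sketch with numpy, using the 30/59/11 weights from the text:

```python
import numpy as np

def rgb_to_y_ry_by(r, g, b):
    """Convert R, G, B arrays (values in [0, 1]) into a luminance
    image and two offset chroma images, as described above."""
    y = 0.30 * r + 0.59 * g + 0.11 * b
    ry = (r - y) / 2 + 0.5   # offset so negative values still display
    by = (b - y) / 2 + 0.5
    return y, ry, by

# A pure white pixel: full luminance, neutral (mid-gray) chroma.
y, ry, by = rgb_to_y_ry_by(np.array([1.0]), np.array([1.0]),
                           np.array([1.0]))
```

Note the weights sum to 1.0, which is why white maps to Y = 1 and the chroma channels sit exactly at their neutral 0.5.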

Neat! Now, most of the detail is confined to the Y image. In the Robot 36 SSTV mode, each scanline spends 88ms transmitting the 320 pixels for the Y channel. The R-Y and B-Y channels are first downsampled (resized down) to just 160×120 (half size in both dimensions). Robot 36 takes just 44ms to send each of those. But because we’ve also downsampled vertically, we only have half as many scanlines in the R-Y and B-Y channels. So Robot 36 operates by sending one 320 pixel row for Y, then one 160 pixel row for R-Y, then the next 320 pixel row for Y, then one 160 pixel row for B-Y. Each pixel in the R-Y and B-Y channels then covers 4 output pixels.
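Those per-line timings also explain the mode’s name. A back-of-the-envelope sketch (I’m deliberately ignoring sync pulses and porches here, which is why it comes in under the nominal 36 seconds):

```python
# Back-of-the-envelope Robot 36 timing: 240 scanlines, each carrying
# 88 ms of Y pixels plus 44 ms of chroma (R-Y on one line, B-Y on
# the next). Sync pulses and porches are ignored in this estimate.
LINES = 240
Y_MS, CHROMA_MS = 88.0, 44.0

pixel_data_sec = LINES * (Y_MS + CHROMA_MS) / 1000.0   # seconds
```

That’s about 31.7 seconds of pure pixel data; the sync and porch overhead makes up the rest of the 36.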

I’ve glossed over a lot of details, but that’s basically how color spaces work: we convert an image into an equivalent representation, and then transmit some channels at lower resolution or lower fidelity than the others. This idea also underlies image compression technology like JPEG.

Addendum: I generated the images above using gimp. If you go to the Colors…Decompose menu, you can bust images into three different RGB images, or YCrCb.


Additional Experiments with SSTV, with some ideas….

March 9, 2014 | Amateur Radio, Raspberry Pi, SSTV | By: Mark VandeWettering

Previously, I had written an encoder for the Robot 36 SSTV mode. I chose this for a simple reason: it appears to be the most common mode used in downlinks from satellites, such as the ARISSat-1. It’s not a bad choice, and presents reasonable quality in just 36 seconds.

Today, I decided that I should probably go ahead and implement another of the “Robot” modes, specifically Robot 72. It transmits images with the same resolution (320×240) as Robot 36, but with a bit better quality, and I suspect a bit better fidelity. Both modes transform the RGB colors of the original into a different color space with a luminance channel (labeled Y) and the color encoded in an R-Y and a B-Y channel. To speed transmission, Robot 36 downsamples the last two channels into half resolution images in both dimensions (it really only sends a 160×120 image in those channels). Robot 72 does a similar thing, but only downsamples in the horizontal direction, sending R-Y and B-Y as 160×240.

It wasn’t too hard to modify my Robot 36 code to transmit Robot 72. For fun, I set it up and tested it. It works! Sending the resulting file to my Macbook and decoding with Multiscan 3B, I got:


(The image has been expanded by 2, to 640×480, which makes it look a bit soft)

So, anyway, I was thinking about where to take this idea a bit further. I want to create a project that others can duplicate and expand upon, and that maybe promote the SSTV in a way that is amusing and fun. I wanted to build upon the work I’ve done so far, but take it further, and make it into a project that others might want to duplicate.

What I envision is a small box, consisting of a Raspberry Pi, a Raspberry Pi Camera, and a PiTFT display, together with a USB sound card like this one. (You need a USB sound card because while the Pi does have sound output, it doesn’t have sound input.) Add a microphone and a speaker. This collection will be interfaced with a radio: let’s assume for the moment an amateur radio like the little Baofeng BF-888S I’ve been playing with. Add some buttons for the interface.

Here’s what I’m imagining as the use case: it’s an interface to your HT. You could talk, and have it relayed to the radio. You could listen to the radio through the speaker. But you can also click a different button, and it will capture and send an image via SSTV. And if it hears an SSTV image, it will decode it and display it on the TFT display. I’ll probably initially support some of the low resolution black and white modes as well as the Robot 36 and Robot 72 modes. I can also imagine a keyboard interface that will allow you to add text to your live images and send it out as well. The fastest, lowest resolution BW modes are just 160×120, and transmit in just 8 seconds. With an 8×8 character matrix, that’s 20 columns by 15 rows of text, so a single image can comfortably carry the equivalent of a tweet.

To make this work, I’ll have to work on a demodulator. So that’s the next step. Stay tuned.


SSTV travels through the Ether! A minor success!

March 8, 2014 | Amateur Radio, Raspberry Pi, SSTV | By: Mark VandeWettering

So, this morning I played around a bit more with my Raspberry Pi code to try to see if I could make an SSTV beacon. The idea was to use two existing bits of code, raspistill and my own SSTV encoder (robot36), and glue them together with a small bit of Python. The code uses raspistill to snap a 320×240 image, a bit of the Python Imaging Library to add some text, then my own robot36 encoder to convert that to a sound file. The Pi would then play the sound file, which would be piped into my $17 BF-888S transmitter, set into VOX mode so that it begins to transmit whenever it hears a signal. For this test, I used the low power setting, transmitting on the 70cm calling frequency.
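The glue really is minimal. Here’s a sketch of its shape; the file names, the robot36 command-line interface, and the aplay playback are my assumptions about the setup, not the exact script, and the PIL text-overlay step is left out for brevity:

```python
# Sketch of the SSTV beacon glue. Command names and arguments for
# raspistill, the robot36 encoder, and aplay are assumptions about
# the setup, not a verbatim copy of the actual script.
import subprocess

def capture_cmd(jpg="/tmp/beacon.jpg"):
    # raspistill: snap a 320x240 frame to a JPEG
    return ["raspistill", "-w", "320", "-h", "240", "-o", jpg]

def encode_cmd(jpg="/tmp/beacon.jpg", wav="/tmp/beacon.wav"):
    # robot36 encoder: JPEG in, SSTV audio out (assumed CLI)
    return ["robot36", jpg, wav]

def play_cmd(wav="/tmp/beacon.wav"):
    # aplay into the BF-888S; its VOX keys the transmitter
    return ["aplay", wav]

def beacon_once():
    """Capture, encode, and play one SSTV frame."""
    for cmd in (capture_cmd(), encode_cmd(), play_cmd()):
        subprocess.run(cmd, check=True)
```

Wrap beacon_once() in a loop with a sleep and you have a beacon.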

To receive, I fired up my trusty FT-817, which was then piped into my Windows laptop running the classic MMSSTV software. At first, I tried using the laptop mic to just listen to the sound played on the 817, but the results were less than stellar. I finally found the right cable to do a direct connect, set the levels appropriately, and voila (I doubled the image size for easier viewing):


Not bad! Total distance: maybe 35 feet or so (blocked by two walls). After I was done, I realized that I actually don’t have an antenna hooked to my FT-817, so I suspect much greater ranges are achievable. The BF-888S is basically operating as an FRS radio here (in fact, the BF-888S can be programmed to operate on FRS frequencies), so even if you don’t have an amateur radio license, you could probably build a similar setup without a lot of hassle.



The Baofeng BF-888S as an SSTV beacon?

March 7, 2014 | Amateur Radio | By: Mark VandeWettering

Yesterday’s musings about SSTV using the Raspberry Pi have me thinking about creating a little SSTV beacon using the super-inexpensive (less than twenty dollars with charger) BF-888S HT from Baofeng. It’s hard to imagine a cheaper HT than this: it doesn’t even have a display. It has 16 channels, and announces which channel you are on with an (English or Chinese) voice. I used the open-source and free program CHIRP to program it with a set of useful frequencies in the FRS and amateur bands, and it seems to work pretty well.

But could I use it to make an SSTV beacon on UHF?

Seems pretty straightforward. I would just need a little bit of interface between the Pi and the BF-888S. Luckily, the Baofeng does seem to support VOX mode, so in principle just using a little 3.5mm jack should work just fine, but I think I’ll go to the trouble of adding an isolation transformer, a potentiometer to set the levels (probably just a little trim pot) and a DC blocking cap. In theory, then, I’d just need to play the wav file out; the VOX would pick up the sound and start transmitting. Voila!

One small bummer: the BF-888S does not have an external power jack. If you were going to install this in a permanent location, you’d probably have to rig up a 3.7v power supply to feed in through the battery terminals. Perhaps a good opportunity to 3D print something!

To make a fully functioning beacon, I think you just need to combine the “raspistill” program, which can do frame grabs and save them as JPEGs, with my “robot36” code, which will convert them to wave files, and glue them together with some Python code. A rough prototype could probably be hacked together in an hour. Seems like fun!

Stay tuned.

Addendum: Here’s a link to the BF-888S on Amazon. $17.69! If you add a remote mic and the programming cable, it’ll set you back $31.34. You can find an attempt at a manual here. Many functions are enabled/disabled by holding down the MONI and PTT buttons while turning it on. For instance, doing so while tuned to channels 1-5 sets the VOX on or off (and sets the sensitivity, I think; more experimentation to come).


Some thoughts on SSTV and the Raspberry Pi…

March 6, 2014 | Amateur Radio, Raspberry Pi | By: Mark VandeWettering

Today I found an interesting Instructable on running SSTV on the Raspberry Pi. It uses an interesting bit of software which uses the Pi to directly generate an FM signal. Strictly speaking, I doubt this is a great idea without some outboard harmonic filtering, but it’s cool that it could be done.

I recalled that a while ago I wrote an encoder for the Robot36 SSTV mode. I wondered how efficient it was: could it be used to construct a nice Raspberry Pi SSTV beacon? I transferred it over, installed the necessary dependencies (the jpeg library and libsndfile1) and timed it. Eek. 18 seconds to encode an image. That seemed excessive, so I set about figuring out why it was slow.

It didn’t take me too long to discover that the vast majority of the time was spent in the libsndfile library. That was in no small part because I used it to write individual floating point samples, one at a time. I hypothesized that if I buffered up a bunch of samples, it would be better. So, I coded it up quickly, and voila: it can now decode a jpeg and create the resulting wav file in just 1.878 seconds. Awesome. Playing the wav file back into Multiscan (an OS X SSTV program) resulted in just the image I wanted.
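The shape of that fix is general: per-sample calls into a library each pay a fixed overhead, so batch them. A sketch of the before/after structure (in Python for brevity; the real fix was in the C encoder, where write_block stands in for libsndfile’s block write call):

```python
# The shape of the fix: instead of handing the sound library one
# sample per call, accumulate samples and hand it large blocks.
# write_block stands in for a library-level block write.
class BufferedWriter:
    def __init__(self, write_block, size=4096):
        self.write_block = write_block   # called with a list of samples
        self.size = size
        self.buf = []

    def write(self, sample):
        self.buf.append(sample)
        if len(self.buf) >= self.size:
            self.flush()

    def flush(self):
        if self.buf:
            self.write_block(self.buf)
            self.buf = []

# Demo: every sample still arrives, but in far fewer library calls.
out, calls = [], []
w = BufferedWriter(lambda block: (out.extend(block),
                                  calls.append(len(block))), size=1000)
for i in range(2500):
    w.write(float(i))
w.flush()
```

2500 samples arrive in 3 calls instead of 2500, which is where the 18-seconds-to-under-2 improvement came from.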

It should be pretty easy to modify this script to read directly from the Raspberry Pi camera and send it directly to the sound card. A little bit of interfacing to an HT, and I should have an SSTV beacon ready to go. Stay tuned.


Puppet Making, and Mathematics…

March 2, 2014 | Arts and Crafts, Puppets | By: Mark VandeWettering

I’ve been taking a puppet making class, and I must admit, it’s been a lot of fun. Too much fun, in fact. I’ve been thinking about puppet making, watching puppet making videos, and scouring the web for inspiration and guidance.

To date, I’ve only completed one puppet (his name is Gerfil, and he may still acquire eyes), so I have a lot to learn, but it’s fascinating.

Gerfil is a pretty ordinary hand puppet, but has a shape which is more refined than many of the simple arts and crafts patterns you can find online. But I got fascinated by the process of pattern making. Digging around, I discovered this nifty video.

I rather liked the spherical head, but thought that it was a little odd that he disassembled a baseball cap to generate the pattern. The mathematician in me thought that it must be possible to generate a similar pattern using, well, mathematics. I began by considering a couple of basic design criteria. The top of the head would be a hemisphere, divided into six identical gussets. I decided to settle on an eight inch diameter head, which means that the total circumference is about 25 1/8 inches around. Since all six gussets are identical, each triangular gusset is a little over 4 inches across the base. I set the height to be the circumference divided by four, or a little over six inches. But then the question became how to draw the curve connecting the base to the apex of this triangle.

I decided to test a few ideas, and then print them out on paper to see how well they worked. I suspect that the ideas I used to generate this pattern are simply not quite right, but they aren’t hugely off. The sides are defined by a simple quadratic bezier, set to establish the tangent at the base to be a right angle, and the apex of the triangle to have a sixty degree angle.

The result looked like:


Cutting them out, and taping them together resulted in this:


It’s not bad, but it’s not great either. I suspect that if it were rendered in foam, it would be a bit better, as the foam is both compressible and stretchable. I suspect that to really do the job right, I’d need to simulate that more directly, and use real optimization techniques. But I did this while watching the Oscars (I’ve got 11 right so far), which means I can’t concentrate enough to pull off harder math. I’ll think about it more and try again in the next few days.

Addendum: Here’s a nice puppet pattern with some construction tips.


Products of Primes and the Primorial Function…

February 26, 2014 | Math | By: Mark VandeWettering

A friend of mine was working on a programming exercise, and it turns out it was based on a chunk of math which I thought I should have seen before, but either have not seen or have forgotten. It’s basically that the product of all primes less than some number n is approximately e^n: more precisely, the logarithm of the product, divided by n, converges to 1 as n grows. First of all, I wrote a chunk of code to test it out, at least for primes less than a million. Here’s the code I wrote (I swiped my rather bizarre looking prime sieve code from a previous experiment with different sieving algorithms):

from math import sqrt, log

def sieveOfErat(end):
    if end < 2: return []

    # The array doesn't need to include even numbers
    lng = ((end // 2) - 1 + end % 2)
    # Create array and assume all numbers in array are prime
    sieve = [True] * (lng + 1)

    # In the following code, you're going to see some funky
    # bit shifting and stuff; this is just transforming i and j
    # so that they represent the proper elements in the array.
    # Only go up to the square root of the end.
    for i in range(int(sqrt(end)) >> 1):
        # Skip numbers that aren't marked as prime
        if not sieve[i]: continue

        # Unmark all multiples of i, starting at i**2
        for j in range((i * (i + 3) << 1) + 3, lng, (i << 1) + 3):
            sieve[j] = False
    # Don't forget 2!
    primes = [2]

    # Gather all the primes into a list, leaving out the composite numbers
    primes.extend([(i << 1) + 3 for i in range(lng) if sieve[i]])

    return primes

total = 0.0
for p in sieveOfErat(1000000):
    total += log(p)
    print(p, total / p)

Most of the code is just the sieve. In the end, instead of taking a very large product, we take the logarithm of both sides. This means that the sum of the logs should be nearly equal to n. The program prints out each prime, along with the ratio of the running sum to that prime. Here’s a quick graph of the results:


Note, it’s not monotonically increasing, but it does appear to be converging. You can run this for higher and higher values and it does appear to be converging to 1.

This seems like a rather remarkable thing to me. The relationship between e and primes seems (to me) completely unobvious; I wouldn’t have any idea how to go about proving such a thing. A quick search on Wikipedia reveals this page on the primorial function, but it similarly gives little insight. (This sum of the logs of the primes below n is apparently known as the Chebyshev function θ(n), for what that’s worth.) Recalling Stirling’s approximation for ordinary factorials suggests that these large products are related to exponentials (Stirling’s approximation not only has a factor of e in it, but the square root of two times pi as well), but the idea that the product of primes would precisely mimic powers of e seems deeply mysterious…

Any math geniuses out there care to point me at a (hopefully simple) explanation of why this might be? Or is the explanation far from simple?


The Minima — A General Coverage Transceiver

February 15, 2014 | Amateur Radio | By: Mark VandeWettering

A while ago, Bill Meara from Soldersmoke brought Ashar Farhan’s new design, the Minima, to my attention. The Minima is a general coverage transceiver with a lot of cool features. It’s a superhet design which is Arduino based (actually, it incorporates a bare bones Arduino, which is little more than an Atmel ATmega328 chip). Farhan is the designer of the popular BitX, and this design, for all its features, seems rather straightforward.

Some versions of this are beginning to appear in the wild. Mark, G0MGX seems to have done the best at documenting his build on his blog. Here’s his video demonstrating the receiver:


Raspberry Pi Camera NoIR…

February 10, 2014 | Raspberry Pi | By: Mark VandeWettering

I’ve been playing around with the Raspberry Pi Camera for a number of different purposes, but one thing is pretty apparent right off: while the quality overall is quite good, it’s not very good in low light. Because at least part of my potential application is watching the night-time activities of wildlife (most likely my cat, but perhaps including the foxes that cruise around the yard), I decided to order the version of the Raspberry Pi Camera which has no IR blocking filter, called the Raspberry Pi NoIR. It arrived today, and at the same time I ordered an inexpensive IR illuminator to serve as a light source. Addendum: The illuminator died after less than 12 hours of use. Do not buy this one. It’s rubbish.

It arrived today!

Out with the old camera, in with the new, power on the illuminator (if you order the same one, note that it does not come with a wall-wart to power it) and voila:


Scrappy Cam!

Okay, a couple of quick notes. The illuminator is just not that strong. Here, the illuminator was a little under five feet from the couch. For stuff that is more distant, it’s clear that the illuminator just isn’t good enough to reach into the corners of the room. Check out…


You can see me standing to the side. Obviously, the color balance is all wonky; it goes from magenta to purple. The frame rate is still quite low, which in my streaming application manifests itself as a pretty long delay. Still, seems pretty cool! More experiments soon…
