iPhone 4S Disappoints? What the hell is wrong with you people?

October 7, 2011 | Rants and Raves | By: Mark VandeWettering

I’ve had this rant percolating in my head for the last few days, and can’t let it carry into the weekend, so you all will have to bear with me.

First, the caveat: I’m a happy iPhone/iPad/MacBook user. These products are (for me) so clearly better than the products from other manufacturers that they replaced that they have become a part of my daily life. I always have my iPhone. I almost always am carrying my iPad. I do the bulk of my recreational programming on my MacBook. That being said, I’m not relentlessly tied to the upgrade cycle: my MacBook is an old white one (so old, in fact, that the latest Mac OS X release can’t be installed on it). I have a first gen iPad, and will probably not upgrade until the next generation comes out. I have an iPhone 4, and partly upgraded so I could cycle my old one to my sibling.

And I recognize that other products might be preferred by other people. People have specific reasons for picking the products they like. Many people refused to buy iPhones while they were tied to AT&T, because AT&T customer service sucks. It wasn’t ever clear to me that any other cellular carrier had better service, but who am I to argue? Some people like the (quasi-) open source model of Android. Or the cost of the phones. Or whatever features float your boat. If you bought an Android phone and are happy with it, who am I to argue? Enjoy.

But I’ve read an astonishingly large number of reviews about the iPhone 4S intro lately which make the claim that the phone is “disappointing”. And I simply don’t get it. Here is one to check out:

Why Apple’s iPhone 4S Disappoints.

I’ll quote:

The iPhone 4S is hugely disappointing.

Let me repeat: Apple’s new iPhone 4S — with the fastest processor in a smartphone by miles, perhaps the most advanced and smartest voice command assistant on a piece of consumer technology ever, and the basic design and feel of the most wildly popular and beloved cell phone of all time — is a big fat, tremendous letdown of a device, and the event where Apple CEO Tim Cook announced the new iPhone was the Al Capone’s vault of product launches.

This boggles my mind: the author is conceding that it’s fast and pretty, with cool voice recognition and a great camera, but it’s disappointing?

What the hell is wrong with you people?

The phone delivers a bunch of cool features. The voice recognition technology seems very cool to me, integrated in a way we haven’t really seen in consumer applications. The camera shoots 1080p video and has image stabilization. And of course it folds in all the features that my iPhone 4 will get next week as part of the iOS 5 upgrade.

It’s a terrific upgrade.

Am I going to pay the carrier-mandated extortion to go out and upgrade? No, to be honest, I probably am not. But to label this release as disappointing?

What the hell is wrong with you people?

Many commenters seem to ask “where is the pizzazz?” To me that seems like going into a restaurant, ordering a meal and then asking why they didn’t add more sugar. We all like sugar, after all. Or maybe we should put gold leaf on our food, just so we can crap out precious metals!

The implicit (or often explicit) claim in all this criticism seems to be that the people who buy Apple products do so merely to show off their wealth. That the iPhone is bling, without any substance. That it doesn’t matter what features Apple products actually have: the fanboys will dash out to buy them anyway.

The truth is actually exactly the reverse. People buy them because they love them. People love them because they are good. It is pure cynicism to presume that someone who chooses an Apple product does so purely on vanity.

The iPhone and iPad are great (but by no means perfect) products. In my opinion, Apple simply takes more care with and provides better service for its products than the alternatives. If that ever ceases to be the case, then I’ll take my business elsewhere. Thus far, I haven’t seen the better product.

Scrappy introduces my Arduino/Gameduino Satellite Tracker

October 7, 2011 | Amateur Radio, Amateur Satellite, Arduino, Gameduino, My Projects | By: Mark VandeWettering

My cat Scrappy decided it was time to film a brief progress video of my Arduino/Gameduino satellite tracker. I completed the basic port and testing of my Plan13 implementation in C++ for the Arduino, and got it running pretty well. It doesn’t seem to be much more compact than Bruce Robertson’s qrpTracker code, but it isn’t any worse, and I like the way I modularized it a bit better. Eventually all this code will be released on this website for anyone crazy enough to want to build one for themselves.



More progress as it occurs.

First ever image of fourth-order rainbow

October 6, 2011 | Amateur Science, Science | By: Mark VandeWettering

Long time readers of my blog may remember that I’m interested in rainbows (not unicorns, just rainbows). A while ago, I wrote a simple simulation that showed the formation of the primary and secondary rainbows by simulating the refraction of water inside a single raindrop. These two bows appear opposite the sun in the sky. But I never thought to try to simulate the higher order rainbows (caused by greater numbers of refractions inside water drops) because, well, I didn’t think that they existed.

But they do. And this week the first known photos of them appeared.

Short Sharp Science: First ever image of fourth-order rainbow.

Unlike the primary and secondary bows, these are actually in the direction of the sun. Very cool. I bet that I could modify my old rainbow tracer to make pictures of these things. Perhaps if I get a few moments this weekend.

Steve Jobs, 1955-2011

October 6, 2011 | Commentary | By: Mark VandeWettering

Yesterday, I was sitting at my desk at Pixar when a tweet alerted me to the passing of Steve Jobs. I remember feeling mostly shock. While I knew that he was ill, and that this was the likely reason for his stepping down as Apple’s CEO, I still couldn’t help but feel a bit amazed that such a vibrant individual could be taken from us.

I didn’t know Steve on any kind of a personal level, but he’s had a direct influence on my life. In 1991, I left my job at Princeton University to join Pixar and work on their RenderMan project. I flew out from Newark to SFO, and was met by my manager, Mickey Mantle (no, not that Mickey Mantle). While collecting my baggage, he informed me that Steve had that very day laid off 30 employees (mostly having to do with the struggling hardware side of Pixar’s business), and that they had considered laying me off as well, but felt that it was unfair, so had decided to keep me on working on Pixar’s RenderMan software. Thus, I narrowly avoided the Sword of Damocles, and began my twenty year (and counting) career at Pixar.

Perhaps understandably, I tried to avoid the direct attention of Steve Jobs. I did, after all, narrowly avoid being laid off on my first day, so perhaps that wasn’t as cowardly as it appears now. And it was certainly true that for all his genius, Steve didn’t always exhibit the greatest of people skills (a flaw which I’ve come to recognize more in myself, and therefore forgive in others more as I get older). But I got to experience the “reality distortion field” on numerous occasions, and grew to appreciate his vision and his passion. His charisma, his ability to captivate audiences with his vision of the future, and his ability to build companies (both Pixar and Apple) into great organizations that create great products all speak to his particular genius.

This morning, Carmen reminded me of a story. This must have been shortly after we began dating, maybe in early 1995. She had checked out a book (she remembers it as The Mac Almanac) that gave the history of Apple, and she showed me some line which read something like “.. and then Steve Jobs left Apple, never to be heard from again”. She remembers that I grabbed the book from her, crossed out the line and wrote in “WRONG!” in bold letters. She was shocked that I would deface a library book, but I stand by the editorialization. At the time, Steve had founded NeXT and Pixar, and from my sideline vantage point, was poised to do amazing things.

I never could have imagined how amazing they would turn out to be.

Steve firmly believed that the way to change the world was to make great products, whether they were computers or films. He also believed that the best products took the best people doing their best work. At Pixar, I’ve been lucky to interact with some of the greatest talents in the film and computer graphics industry, and I credit Steve (alongside Ed Catmull and John Lasseter) for helping to create this atmosphere of excellence which still inspires me to wake up and come to work every day.

Early Facebook alum Jeff Hammerbacher was quoted after leaving Facebook: “The best minds of my generation are thinking about how to make people click ads. That sucks.” This expresses the (in my opinion, totally deserved) cynicism about the business models of the information technology sector. They think that you are their product: they are selling your eyes and attention to others to make money. This reminded me of the sales pitch that Steve gave to John Sculley to lure him away from Pepsi-Cola. Steve asked Sculley if he wanted to “sell sugar water for the rest of your life, or come with me and change the world?”

Apple still believes that you are customers, and not product. They make great products so you’ll pay money to be their customers. Great products are really the best business model. If more business leaders believed that, the world would be a better place.

I don’t mean to lionize Steve. There is an annoying tendency to laud the rich and powerful: glossing over their faults, crediting them with achievements which fell into their laps through luck or circumstance, and displaying overt emotionality when no personal link ties their lives to ours.

But Steve was “the real deal”. Innovator, visionary, and industrialist without peer.

To his family and friends, my condolences on your loss.

To the rest of the world: dare to be different. Steve changed the world. What are you doing today?

Addendum: His great 2005 commencement address at Stanford helps convey a bit of his humanity and his skill at speaking. Great stuff.



More progress on the Arduino/Gameduino satellite tracker…

October 5, 2011 | Amateur Radio, Amateur Satellite, Arduino | By: Mark VandeWettering

Okay, I got about half of the Plan 13 code ported to C++. It’s a fresh port of the original BASIC code, but better modularized into objects, and with a few bits of tidiness that C++ provides over BASIC. I estimate another hour or so to finish the code, if I work carefully and efficiently.

I wasn’t feeling up to working carefully and efficiently this evening, so I thought I’d try to figure out a few mostly cosmetic issues. I had the idea that I wanted to change the map to color, which was easy enough to achieve. Then I thought I’d add a large time display at the bottom. I had an 8×8 character font already in memory, so using it to generate huge characters at the bottom didn’t seem that hard. The problem is that the screen is only 50 characters wide, so that makes just six characters. I dummied it up to display six digits of time at the bottom, and it’s simply too big. Check it:

Clearly, the 8×8 character set isn’t quite what I need. I have a 3×5 charset which might do the trick, but I’ll have to get that formatted and loaded into flash. I’ll try that tomorrow.

High-Low Tech – Paper Speakers

October 4, 2011 | electronics, Music | By: Mark VandeWettering

Another cool link in my blog searching: constructing speakers out of paper and strips of copper foil.

High-Low Tech – Paper Speakers.



See How It Flies

October 4, 2011 | Link of the Day | By: Mark VandeWettering

It seems that lots of people I know have been working on radio and computer controlled drone aircraft. I just recently found Mark Harrison’s blog, and he’s got a bunch of cool things. For instance, he linked to John S. Denker’s See How It Flies, an online textbook on the principles of flight. Very cool stuff.

EastBay RC: See How It Flies.

ISS tracking on the Arduino/Gameduino

October 4, 2011 | Amateur Radio, Amateur Satellite, My Projects | By: Mark VandeWettering

Well, tonight I had some mild success! My Gameduino satellite tracker is up and running! It hasn’t got much in the way of a user interface, but here you can see the ISS position marked with a purple/magenta dot, and then dots showing the position of the ISS every three minutes for the next two hours. My own location is indicated by the small star.

Compare this to the identical time using the tracker from the AMSAT website:

It seems to match up pretty well. I had originally planned to port my own code for Plan13 that I wrote in Python, but I am currently using VE9QRP’s Plan13 qrpTracker for the Arduino. It works pretty darned well, which significantly blunts my enthusiasm for porting code. We shall see.

Addendum: I’m planning on adding some interface elements to allow you to switch between multiple satellites, maybe a rotary encoder + push button. And I have some software changes in mind.

Creating graphics for the Gameduino…

October 1, 2011 | Gameduino, My Projects | By: Mark VandeWettering

I had a project in mind for the Gameduino, part of which requires the display of a world map. But the Gameduino has a relatively limited amount of memory, and the “background” graphics are character mapped: instead of providing complete flexibility to plot individual points, the Gameduino background is organized as a 64×64 array of 8-bit entries, each specifying a single 8×8 character. Thus, to make a “map”, we need to generate a character set and then build the resulting image out of those characters.

I began with a 400×200 map that I shrunk down from the image I got from this page and converted to a simple black and white image. I then tried to see how many unique 8×8 tiles there were: in this case, 342 unique tiles were needed to reproduce the image exactly.

But I don’t need to reproduce the image exactly; I just want as close an image as I can get, encoded with as few characters as possible. I suspect that if I wanted to think hard about this, I could bring some fairly strong math to bear to find a good solution. The problem is essentially binary vector quantization: each tile can be viewed as a 64-element binary vector, and the task is to find a set of 64-bit codewords that approximates the distribution of codewords from the image tiles.

But of course I am lazy. When confronted with problems like this, I like to use techniques like simulated annealing. In fact, I coded up a pretty straightforward hill-climbing algorithm. It simply takes a subset of (say) 128 tiles and sees how closely it can approximate the image using those tiles. In each iteration, it swaps one of the tiles out for a different one, and keeps the new mapping if it lowers the number of error bits. A proper simulated annealing schedule would probably help, but even as slow and inefficient as this scheme is, it still does a good job.

Here’s an animated gif that flips between the target and the best approximation I found that can be encoded with 128 characters. That still leaves 128 characters to use for, well, alphabets and numbers and the like. Looks pretty good.

I might work a bit more on this, to see if I can get a small bit better, but it’s usable for now.

Addendum: I added a few things that made it a bit better. The previous best map had about 650 bits different. I implemented a simple annealing schedule in my optimizer, and allowed it to use tiles which were not already in the existing pool by simply modifying tiles with single bit changes. This resulted in an image which has only 460 bits different. It’s annoying that it created a new “island” off the east coast of South America, but it’s still pretty good.

Addendum2: Oops, the reason this worked out better is that it uses 192 characters instead of 128. I was playing with different settings as I made changes. 192 characters is the max I could use in this application. That leaves 64 codes, which nicely covers the ASCII codes from 32 (space) to 95 (underscore) and includes all the capital letters and numbers.

Homemade GPS Receiver

October 1, 2011 | Amateur Radio, Amateur Satellite | By: Mark VandeWettering

This article was linked from hackaday, and seems very, very cool. Sure, GPS receivers are cheap, but building one is cool. I am not likely to be doing a project like this, but it’s cool to read about.

Homemade GPS Receiver.

Apologies to Ken and Eric re: SSTV Challenge…

September 29, 2011 | Amateur Radio, SSTV | By: Mark VandeWettering

My face is red. I had claimed yesterday that nobody had tried to decode my SSTV challenge, when in fact both Ken and Eric had decoded it. Eric was first, sending me this decode:

It’s a bit noisy, because he just played the sound file on his laptop and decoded it on a PC using the microphone inputs and running MMSSTV.

Ken followed about forty minutes later with his own decode, again using MMSSTV.

Pretty neat.

I didn’t think that MMSSTV had the 8s mode in place, but sure enough, if you right click on one of the mode buttons, you can select an 8s B/W mode. It lists it as 160×120, which is more in line with modern aspect ratios, and explains why my face looks squished. I will have to dig into the code a bit to see what it would take to make an encoder/decoder which is entirely compatible with what MMSSTV expects.

Thanks again, Ken and Eric, and apologies for not seeing your notes.

An impractical classic SSTV decoder…

September 28, 2011 | Amateur Radio, SSTV | By: Mark VandeWettering

A few days ago, I posted a .WAV file for a classic 8s SSTV image and asked if anyone could decode it. Nobody replied (I wasn’t surprised), so I set about writing my own demodulator.

Since I’m inherently lazy, here was my idea: generate the complex signal using my previously debugged Hilbert transform code. Then, for every adjacent pair of complex samples, determine the angle between them by normalizing the numbers (so they have unit length) and then computing the dot product of the two. If you’ve done as much raytracing as I have, you know that the dot product of two vectors is the product of their lengths and the cosine of the angle between them. So, you take the dot product, pass it through the acos function, and you get an angle between 0 and $latex \pi$. Multiply that by the sample rate (and divide through by $latex 2\pi$), and you recover the frequency.

So, that’s what I did. If the frequency was in the band between 1500 and 2300 Hz, I mapped it into a grayscale value. For frequencies below 1500 Hz, I set the pixels to black; for frequencies above 2300 Hz, I set the pixels to white. I generated one output pixel for every input sample. That means the image is too wide, but that’s no big deal: I resized it to 120 columns wide using some of the netpbm utilities (command line graphics programs).

And, here’s the image:

That’s a bit teeny, so let’s blow it up to 512×512:

Not too bad for a first try. Some thoughts: the sync pulses seem really wide to me. The spec says the sync pulses should be 5ms wide, which means they should span about 7.5% of the line length. That’s a lot of the image to leave behind, so I’ll have to work on that. I’m not currently doing anything to detect the vertical or horizontal sync pulses, but I’ve done that before with my NOAA weather satellite code, so I don’t anticipate any real problems.

Addendum: The large image shows some JPEG compression artifacts since I expanded it from the small JPEG image without trying to save it at high quality. I can do better.

Addendum2: I decreased the overall filter size to length 11. It didn’t work very well, because the original file was sampled at 22050Hz, which pushes the portion of the spectrum we are interested in down into the region where the filter response isn’t very uniform. But if we resample the sound file down to 8kHz, the filter works just fine with just 11 taps. Check it out:

The image is slightly skewed because the line length is no longer an integer number of samples, and the accumulated error causes the sync to drift slightly. A real implementation would track this accurately.

Difficulties with the Hilbert Transform…

September 28, 2011 | Amateur Radio, Math, My Projects | By: Mark VandeWettering

Well, it wasn’t so much a difficulty with the Hilbert transform as a difficulty with my understanding. But with the help of my good friend Tom, my understanding was soon put right, and I thought it might make an interesting (in other words, horribly boring to anyone but myself) post. At the very least, it would be good for me to document this little bit of mathematical fun.

Warning: complex numbers ahead!

This came up as a subtask of my slow scan television decoder. I’ll probably have a more detailed posting regarding the underlying theory, but the basic idea is as follows. The “pure” (without noise) SSTV signal is a frequency modulated signal with constant amplitude. But what does that really mean? The instantaneous value of the signal varies: it’s a sine wave, after all. What’s useful is to think of the analytic signal. Instead of being a single value at each moment in time, we think of the signal as a complex number, with a real and an imaginary part. Let’s say the real part is $latex a$ and the imaginary part is $latex b$. The analytic signal can be represented as the complex number $latex a+b\imath$. The amplitude of the signal is then $latex \sqrt{a^2+b^2}$.

If this doesn’t make any sense to you, don’t fret. It’s mostly because we do a horrible job of teaching complex numbers, and I haven’t really done any better. You might be thinking, “what does this have to do with actual signals? The signals that I have are described by a single value which varies over time. Why are you hurting my brain with complex numbers?”

In an attempt to stimulate you to think about it more, I’ll respond with the Socratic method, and ask “what does the amplitude of a signal mean if it is described by only a single value?” In particular, how can you tell if a signal is “constant amplitude”? What does that even mean?

Okay, I’ll drop that topic for another time. Let’s just imagine that you accept my definition of constant amplitude, and we want to construct a complex signal from the real-valued signal we receive from the radio. Let’s say that the input signal gets copied into the real portion of the complex signal. In other words, the input values become the values for $latex a$ in our $latex a+b\imath$ above. For the purposes of our discussion, let’s say that the signal is some sine wave, $latex sin(t)$, where $latex t$ is a time variable (we could scale it to make the frequency higher or lower if we like), and that we’d like the amplitude of that signal to be 1. Can we find values for $latex b$ that make the amplitude $latex \sqrt{a^2+b^2} = 1$? If we square both sides, we see that $latex a^2 + b^2 = 1$, and since $latex a = sin(t)$, we see that $latex b^2 = 1-sin^2(t)$. If you remember any of your high school trig, you’ll see that we can take $latex b = cos(t)$, since $latex sin^2(t) + cos^2(t) = 1$.

What this means is that for any sine wave at a given frequency, we can get the complex version of the signal by copying a cosine of the same frequency into the imaginary part. If you think of cosine as just a sine with a 90 degree phase shift, you’ll see that if we had a magic filter that could produce a 90 degree phase shift, we could easily construct the complex signal from the real signal.

But what if our signal isn’t a single sine wave? Well, you can decompose the signal into a sum of sines (and cosines) of varying amplitudes (proof left for the motivated reader). Apply the magic 90 degree phase shifter to all the sines, add them back up, and we get the imaginary part of the signal we need.

None of this has anything to do with the problem, really; it’s just background. It’s remarkably hard to explain this stuff to anyone who doesn’t understand it already, and I fear I’ve not done any better. Oh well, on to my problem.

The way you can implement the 90 degree phase shifter is with some magic called the Hilbert transform, which can be implemented as an FIR filter. I wanted to create a black box that I feed values of my real signal into, and get (after some delay) the complex signal out. The code isn’t all that hard really; I’ll go ahead and put it here:

[sourcecode lang="C"]
#include <stdio.h>
#include <stdlib.h>
#include <complex.h>
#include <math.h>

#define NZEROS (50)

static float xv[NZEROS+1];
static float yv[NZEROS+1];

static float xcoeffs[NZEROS+1];

double
window(int n, int N)
{
    double a0 = 0.355768;
    double a1 = 0.487396;
    double a2 = 0.144232;
    double a3 = 0.012604;

    return a0 - a1 * cos(2.0 * M_PI * n / (N - 1))
              + a2 * cos(4.0 * M_PI * n / (N - 1))
              - a3 * cos(6.0 * M_PI * n / (N - 1));
}

void
oddhilb(float *hilb, int n)
{
    float f;
    int i, j, k, m;

    m = j = k = (n - 1) >> 1;

    for (i = 1; i <= m; i += 2) {
        f = 2.0 / (i * M_PI);

        hilb[j++] = 0.0;
        hilb[j++] = f;
        hilb[k--] = 0.0;
        hilb[k--] = -f;
    }

    /* and now... window it... */
    FILE *fp = fopen("unwindowed.dat", "w");
    for (i = 0; i < n; i++)
        fprintf(fp, "%lf\n", hilb[i]);
    fclose(fp);

    for (i = 0; i < n; i++)
        hilb[i] *= window(i, n);

    fp = fopen("windowed.dat", "w");
    for (i = 0; i < n; i++)
        fprintf(fp, "%lf\n", hilb[i]);
    fclose(fp);
}

double complex
filterit(float val)
{
    float sum;
    int i;

    for (i = 0; i < NZEROS; i++) {
        xv[i] = xv[i+1];
        yv[i] = yv[i+1];
    }
    xv[NZEROS] = val;

    for (i = 0, sum = 0.; i <= NZEROS; i++)
        sum += (xcoeffs[i] * xv[i]);
    yv[NZEROS] = sum;
    /* delay the real part to match the filter's group delay */
    return xv[NZEROS/2] + I * yv[NZEROS];
}

int
main(int argc, char *argv[])
{
    float f;
    double complex c;
    int i;
    double t = 0.;

    oddhilb(xcoeffs, NZEROS+1);

    for (i = 0; i < 4000; i++) {
        f = sin(t);
        t += 0.5;
        c = filterit(f);
        printf("%lf %lf\n", creal(c), cimag(c));
        /* printf("%lf\n", cabs(c)); */
    }
    return 0;
}
[/sourcecode]

The program implements the Hilbert transform as an FIR filter. At each step, we compute a new value $latex f$ for our input signal. We then pass it to our filter stage, which returns our complex valued signal. If we then plot the real and imaginary parts of our signal, we should get a circle. And, in fact we do, now:

But when I started, I didn’t. The radius of the circle varied quite a bit (maybe 10% or so). The reason was that I forgot to apply the windowing function to the Hilbert transform filter coefficients. Plotting the two sets of coefficients, we see that the windowed versions fall to zero faster away from the middle.

We can use the FFT on these coefficients to see what the frequency response is.

You can see that the unwindowed filter has a very uneven response over the range of frequencies, while the windowed filter is nice and flat over all but the regions very near zero and the Nyquist frequency. This means that when we feed it nice sine waves, we’ll get a nice analytic signal with very constant amplitude.

Addendum(s): There is a small “off by one” error that adds an extra zero coefficient to the filters above. I’ll tidy it up a bit more when I get a chance. As Tom pointed out, for use in my SSTV decoder, it’s almost certain that just 11 coefficients would work just fine (and be faster). We could also optimize out the multiply by zeros in the convolution.

Classic Black and White SSTV timings?

September 26, 2011 | Amateur Radio, SSTV | By: Mark VandeWettering

I was trying to determine the exact timings for the classic “8 second” black and white SSTV mode. Copthorne MacDonald suggests 15 lines per second, to make 120 lines in 8 seconds. The vertical sync pulse has a duration of 30ms, and the horizontal sync pulse duration is just 5ms. The sync frequency is 1200Hz, and the shades from black to white map linearly between 1500Hz and 2300Hz. The aspect ratio is 1:1 (to match the usual oscilloscope tubes of the day).

So, I wrote a program to generate my best guess from this basic description, and here’s an example sound file:

An 8 second “classic” SSTV sound file, recorded as WAV File (22050Hz)

Here’s a spectrogram of the first 0.5 seconds or so, showing the vertical sync pulse, followed by a few scanlines:

I decided that the horizontal sync pulse should go in the final 5ms of each scanline (somewhat arbitrarily).

So, the question of the hour is: can any modern software decode this basic image? I’ll be working on my own implementation, but I’m curious. I’ll leave it as a bit of a puzzle: if someone emails me the decoded image, you’ll be immortalized in these pages and have my thanks. Otherwise, you’ll just have to wait to see what the picture actually looks like in a day or so.

“Classic” Black & White SSTV…

September 25, 2011 | Amateur Radio, SSTV | By: Mark VandeWettering

I haven’t had much time for actual experimentation, but I have spent some time researching and some more time thinking about how to properly implement and test analog SSTV modulators and demodulators. I haven’t made much actual progress, but I thought I’d document some of the information that I’ve discovered and my current thoughts.

First of all, I began by trying to discover a bit of the history. Luckily, I’m an ARRL member, so I have access to all of the back issues of QST magazine. In this case, one only has to search for the author “Copthorne MacDonald” to find the 1958-1961 papers on the first amateur uses of slow scan television. I’m still perusing them a bit, but in the first paper, MacDonald suggested using a sub-carrier amplitude modulated signal which would be incompatible with virtually all SSTV modes used today; in 1961 he proposed the “classic” black and white, 8 second mode, consisting of 120 lines, frequency modulated between 1500 and 2300 Hz. These numbers were apparently chosen mostly for compatibility with existing telephone based fax standards of the day, but they also fit nicely within the reasonably uniform passband of most voice transceivers in use for amateur radio.

This kind of “classic” black and white SSTV was apparently fairly common during the 1970s.

Here is some classic SSTV, re-rendered from a cassette recording done in 1980, documenting a QSO between KG4I in Birmingham, AL and N2BJW in Elmira, NY.

Here’s a clever way to say “Happy Trails” from AC5D:

Here’s a more modern implementation, which perhaps demonstrates a wider variety of imagery (a lot of it supermodels), still in the classic 120×120 black and white format:

Why bother with this simple mode? Well, just because it is simple. I also like the speed: the immediacy of the mode is kind of cool. I’m working on good, solid, easy to understand code to do this mode, and will post about my experiments in the presence of noise when I get some results.