Category Archives: SSTV

SSTV from the ISS…

Well, it’s not pretty, but I was just using a 17″ whip antenna on my VX-8GR, recorded it with Audacity, and then decoded it with MultiScan on my Macbook. The first bit of the recording is pretty rocky, so I had to start the sync myself. I’ve been meaning to do some experiments with bad audio and sync recovery; now I have more data.

Oh, in case this was all gibberish to you, the Russians have been running “events” from the International Space Station to honor their cosmonauts by transmitting pictures via slow scan television (SSTV). I received this picture using what most people would call a walkie talkie, a whip antenna, and a laptop.

As decoded by Multiscan:

[Image: the decoded SSTV picture from the ISS]

I thought a second image would begin later in the pass, but I didn’t hear it.

I think an antenna with a little more gain and/or a preamplifier would help a lot. You really need pretty noise-free audio to make a good picture. Still, a fun experiment. I might try the 12:30AM pass tonight.

Addendum: The second pass was also a little rocky. I got the tail end of one transmission fairly cleanly, but the three minute gap to the next one meant the ISS was low by the time it began. This is what I got.

[Image: the picture decoded from the second pass]

Slow Scan Television from the ISS this weekend…

Note: This post was adapted from an email that I sent out to our ham radio club.

If anyone is interested in a fun little ham radio related activity, you can try to receive slow scan television from the International Space Station this weekend. I haven’t done this in a while, but I think I’ll give it a try and see what I can come up with.

You can read about this event here:

AMSAT UK on the upcoming ISS event

They will be on 145.800 MHz (in the 2m band).

The way I usually “work” these is to use one of my HTs. A better antenna than the stock one is usually good (a longer whip, or even a yagi), but you might just see what you can hear with the stock antenna. The ISS transmits with 25 watts of power, which is usually pretty easy to hear. I have a set of earphones that I hook up with a splitter: one half goes to my earbuds, the other to a small digital audio recorder I have. Turn the squelch on your radio off so you can hear the signal when it is weak. You may find that moving your antenna around will help a bit, so monitor with your earphones.

Don’t be shocked if you don’t hear the ISS right at the rise time: it has 3 minutes of dead time between transmissions, which take about 3 minutes to send. It sounds a bit like the ticking of a clock, with a whistle in between; if you click this link, you can hear what it sounds like:

I like to record the audio, then play it back into my Windows PC and use the MMSSTV program, but you can actually go completely low tech and try an inexpensive iPhone app, held up to the speaker of your HT. I use

Black Cat Systems’ SSTV program for the iPhone/iPad

which works okay, but not amazingly. If you are outdoors in a windy or noisy location, your image won’t be as good this way: the background noise will cause interference.

To help out, I computed a set of rise/set/max elevation tables centered on San Francisco. If you live close, you can probably use these times. If you live in other parts of the country, you might try looking at the Heavens-Above website: set “Passes to include” to “all”, and enter your location in the upper right. The table below was calculated by my own software.

--------------------------------------------------------------------------------
Rise time           Azi    Max Elev Time        Elev  Set time             Azi
--------------------------------------------------------------------------------
2015/04/11 16:24:33 178.90 2015/04/11 16:28:52   9.27 2015/04/11 16:33:10  74.10 (Local Time)
2015/04/11 23:24:34 178.90 2015/04/11 23:28:52   9.27 2015/04/11 23:33:11  74.10 (UTC)

2015/04/11 17:59:18 232.14 2015/04/11 18:04:47  76.70 2015/04/11 18:10:17  49.52 (Local Time) [1]
2015/04/12 00:59:18 232.14 2015/04/12 01:04:48  76.70 2015/04/12 01:10:17  49.52 (UTC)

2015/04/11 19:36:48 276.47 2015/04/11 19:41:38  13.93 2015/04/11 19:46:28  40.34 (Local Time)
2015/04/12 02:36:48 276.47 2015/04/12 02:41:38  13.93 2015/04/12 02:46:28  40.34 (UTC)

2015/04/11 21:15:06 309.66 2015/04/11 21:19:13   7.29 2015/04/11 21:23:21  47.92 (Local Time)
2015/04/12 04:15:06 309.66 2015/04/12 04:19:14   7.29 2015/04/12 04:23:21  47.92 (UTC)

2015/04/11 22:52:10 319.85 2015/04/11 22:56:52  12.34 2015/04/11 23:01:34  78.97 (Local Time) [2]
2015/04/12 05:52:10 319.85 2015/04/12 05:56:53  12.34 2015/04/12 06:01:35  78.97 (UTC)

2015/04/12 00:28:22 312.09 2015/04/12 00:33:48  58.58 2015/04/12 00:39:14 122.75 (Local Time) [3]
2015/04/12 07:28:22 312.09 2015/04/12 07:33:49  58.58 2015/04/12 07:39:15 122.75 (UTC)

2015/04/12 02:05:15 289.69 2015/04/12 02:09:49  11.95 2015/04/12 02:14:23 174.60 (Local Time)
2015/04/12 09:05:16 289.69 2015/04/12 09:09:50  11.95 2015/04/12 09:14:24 174.60 (UTC)

[1] Probably the easiest pass, the ISS passes almost straight overhead,
should be loud and easy.

[2] A low night time pass, but the ISS should be visible to the naked eye.

[3] Another night time pass, but too late for the ISS to catch any
sun. At 58 degrees maximum elevation, it's the second-best pass of the set.

If I get any good images, I’ll send them out next week.

Some continuing short bits on SSTV….

Nothing too exciting going on, but minor bits of code and play have been done, so I thought I’d update.

First of all, there is a program for decoding SSTV on the Pi, called QSSTV. I don’t have a proper sound setup on the Pi yet, so I couldn’t test it live on the air, but I did take one of my pregenerated Martin 1 images and asked it to decode, which it did quite well:

[Image: screenshot of QSSTV's decode of the Martin 1 test image]

Not bad at all. While investigating QSSTV’s capabilities, I discovered that the latest 8.x versions support digital SSTV. Well, except that it isn’t built into the version I have on the Pi (my guess is that the Pi doesn’t have quite enough oomph to do the necessary math in real time). But that’s pretty cool: I’ll have to check that out sometime soon.

But anyway…

I also coded up a Scottie 1 encoder, so now I have encoders for the Martin 1, Scottie 1, Robot36 and Robot72 modes. I found this great book online which had many details about the different modes. It was quite helpful. It actually documents the modes a lot better than the ARRL Image Communications Handbook and is, well, free. Awesome.

One question I’ve been interested in for a while is “which mode is best?” Of course, we have to define what we mean by “best”. After all, Robot36 sends an image in half the time of Robot72, and about one quarter the time of Martin M1. My question was: how much better an image can we expect from Martin, given that it takes 4x as long? Another question was “how much bandwidth does each mode use?” The ARRL Image Communications Handbook has a formula which computes bandwidth, but it didn’t make a great deal of sense to me.

I don’t know how to precisely answer either of these questions, but I thought I’d write some code to simply compute the power spectra of some sample SSTV recordings. So I did. It loads the sound samples from the SSTV file, windows them (I used the Blackman-Nuttall window, for no real reason), runs an FFT (using the fftw3 library), and computes the power spectrum. It’s pretty easy. I then encoded a simple color bar image in three different modes, and graphed them all up using gnuplot.
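The heart of that program looks something like the sketch below (not the exact code I ran, but the same approach), using libsndfile to read the samples and fftw3 for the transform. It assumes a mono file, and prints frequency vs. averaged power in a form gnuplot can plot directly:

[sourcecode lang="C"]
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <sndfile.h>
#include <fftw3.h>

#define N 4096          /* FFT size */

int
main(int argc, char *argv[])
{
    SNDFILE *sf ;
    SF_INFO sfinfo ;
    double in[N], win[N], power[N/2+1] ;
    fftw_complex out[N/2+1] ;
    fftw_plan plan ;
    int i, nwin = 0 ;

    if ((sf = sf_open(argv[1], SFM_READ, &sfinfo)) == NULL) {
        perror(argv[1]) ;
        exit(1) ;
    }

    /* Blackman-Nuttall window */
    for (i = 0; i < N; i++) {
        double t = 2.0 * M_PI * i / (N - 1) ;
        win[i] = 0.3635819 - 0.4891775 * cos(t)
               + 0.1365995 * cos(2.0*t) - 0.0106411 * cos(3.0*t) ;
    }
    for (i = 0; i <= N/2; i++)
        power[i] = 0.0 ;

    plan = fftw_plan_dft_r2c_1d(N, in, out, FFTW_ESTIMATE) ;

    /* accumulate the power spectrum over successive windows */
    while (sf_read_double(sf, in, N) == N) {
        for (i = 0; i < N; i++)
            in[i] *= win[i] ;
        fftw_execute(plan) ;
        for (i = 0; i <= N/2; i++)
            power[i] += out[i][0]*out[i][0] + out[i][1]*out[i][1] ;
        nwin++ ;
    }

    /* frequency vs. average power, ready for gnuplot */
    for (i = 0; i <= N/2; i++)
        printf("%f %g\n", (double) i * sfinfo.samplerate / N,
            power[i] / nwin) ;

    fftw_destroy_plan(plan) ;
    sf_close(sf) ;
    return 0 ;
}
[/sourcecode]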

[Image: power spectra of the three modes, plotted with gnuplot]

Staring at it, well, they don’t seem that different really. I should figure out the frequency bounds that (say) cover 85% of the total energy, but just eyeballing it, it doesn’t seem that bad.
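Computing that 85% band would only take a few more lines. Something like this sketch, fed the power[] array from the program above, would trim equal energy fractions off each end until the requested fraction remains between the bounds:

[sourcecode lang="C"]
/* Find the band [*lo, *hi] (as bin indices) that keeps the given
   fraction of the total energy, trimming equal tails from each end.
   A sketch, meant to be fed the power[] array computed above. */
void
energy_bounds(double *power, int n, double frac, int *lo, int *hi)
{
    double total = 0.0, tail, acc ;
    int i ;

    for (i = 0; i < n; i++)
        total += power[i] ;
    tail = 0.5 * (1.0 - frac) * total ;

    for (i = 0, acc = 0.0; i < n && acc + power[i] < tail; i++)
        acc += power[i] ;
    *lo = i ;
    for (i = n - 1, acc = 0.0; i >= 0 && acc + power[i] < tail; i--)
        acc += power[i] ;
    *hi = i ;
}
[/sourcecode]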

I also did some minor tweaking to add in additive white Gaussian noise, but I haven’t gotten that entirely working yet, so I can’t yet do an apples-to-apples comparison, at equal total power, of how each mode does at various levels of noise. And I’m looking for an HF path simulator too.

That’s about it for now. Stay tuned for more.

Analyzing an SSTV recording…

Inspired by this webpage, I decided to write a simple zero-crossing analyzer, just like his. The code turns out to be remarkably simple, and would allow me to reverse engineer modes that aren’t adequately documented. I called this program “analyze”:

[sourcecode lang="C"]
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <sndfile.h>

int
main(int argc, char *argv[])
{
    SNDFILE *sf ;
    SF_INFO sfinfo ;
    float *inp ;
    float t, dt ;
    float x ;
    int i ;
    float pcross = 0, cross, freq ;

    if ((sf = sf_open(argv[1], SFM_READ, &sfinfo)) == NULL) {
        perror(argv[1]) ;
        exit(1) ;
    }

    fprintf(stderr, "%s: %d channel%s\n", argv[1], sfinfo.channels,
        sfinfo.channels > 1 ? "s" : "") ;
    fprintf(stderr, "%s: %dHz\n", argv[1], sfinfo.samplerate) ;
    fprintf(stderr, "%s: %lld samples\n", argv[1],
        (long long) sfinfo.frames) ;

    /* read the entire (mono) file into memory */
    inp = (float *) calloc(sfinfo.frames, sizeof(float)) ;
    fprintf(stderr, "::: reading %lld frames\n", (long long) sfinfo.frames) ;
    sf_read_float(sf, inp, sfinfo.frames) ;
    sf_close(sf) ;

    dt = 1.0 / sfinfo.samplerate ;

    for (i = 0, t = 0; i < sfinfo.frames - 1; i++, t += dt) {
        if (inp[i] * inp[i+1] < 0) {
            /* we have a zero crossing: interpolate its position */
            x = -inp[i] / (inp[i+1] - inp[i]) ;
            cross = t + x * dt ;
            /* frequency is the reciprocal of twice the gap between
               successive crossings */
            freq = 1.0 / (2 * (cross - pcross)) ;
            printf("%f %f\n", cross, freq) ;
            pcross = cross ;
        }
    }
    return 0 ;
}
[/sourcecode]

The code is dead simple. It loads the sound file into memory and then figures out, via simple linear interpolation, the location of all the places where the signal crosses zero. Each time we have a zero crossing, we compute the frequency of the signal, which is just the reciprocal of twice the difference between the two latest crossing times. This program dumps out the time and frequency of each zero crossing, which you can then easily visualize with gnuplot. Like this:

[Image: gnuplot of zero-crossing times vs. instantaneous frequency]

Next step: generate some example sound files, and use them to reverse engineer some of the less well documented modes.

A brief introduction to color spaces, as used in SSTV…

Rob (AK6L) was interested in my recent experiments in slow scan television, but didn’t know much about color spaces. It’s an interesting topic on many fronts, and I thought I’d write a brief post about it here to explain it to those who may not be familiar.

Consider this nice 320×240 test image of Wall-E that I’ve been using:

[Image: 320×240 Wall-E test image]

Most of you probably know that these images are really combinations of images in three different colors: red, green and blue. If you take a magnifying glass and look at your TV, you’ll see that your television displays images as a combination of glowing red, green and blue dots. If we instead split this color image into separate images, one for red, one for green, and one for blue, and display each one separately, we can see the image in a different way:

[Image: the red, green, and blue channels displayed separately]

One thing to note: there is lots of detail in each of the three sub-images. That means that there is considerable redundancy. When data streams have lots of redundancy, that means there is an opportunity for compression. Compression means we can send data more quickly and more efficiently.

So, how can we do this? We transform the RGB images we have into a different set of three images, where most of the visual information is concentrated in one channel. That means we can spend most of our time sending the dominant channel, and less effort sending the other channels, maybe even sending lower resolution versions of those channels.

But how do we do that? Well, let’s do some magic: for each pixel in the image, let’s compute a new image Y from the R, G, and B images. Y will consist of 30% of R, 59% of G and 11% of B. This computes a representative black and white image from the R, G, and B channels. (If you didn’t know a lot about color science, you might just try averaging R, G, and B, but your eyes have different sensitivities to R, G, and B light. If you use the proportions I describe, you’ll get a much better subjective match to the value of each pixel.) Then, let’s compute two additional channels: the channel that consists of R – Y, and the channel that consists of B – Y.

If you are mathematically inclined, you’ll see that this process is invertible: no information is actually lost. Given the Y, R-Y and B-Y images, we can recover the RGB images. But what do these images look like?

[Image: the Y, R-Y, and B-Y channels displayed separately]

(Since R-Y and B-Y may be negative, we actually compute (R-Y)/2 + 0.5 and similar for B-Y).
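In code, the forward transform is just a few lines per pixel. Here’s a minimal sketch, assuming channel values scaled to the range 0..1:

[sourcecode lang="C"]
/* Convert one RGB pixel to Y, R-Y, B-Y. All values are floats in
   the range 0..1; the color channels are scaled and offset so their
   negative values fit back into that range. A sketch only. */
void
rgb_to_yryby(float r, float g, float b, float *y, float *ry, float *by)
{
    *y  = 0.30f * r + 0.59f * g + 0.11f * b ;   /* luminance */
    *ry = (r - *y) / 2.0f + 0.5f ;
    *by = (b - *y) / 2.0f + 0.5f ;
}
[/sourcecode]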

Neat! Now, most of the detail is confined to the Y image. In the Robot 36 SSTV mode, each scanline spends 88ms transmitting the 320 pixels for the Y channel. The R-Y and B-Y channels are first downsampled (resized down) to just 160×120 (half size in both dimensions). Robot 36 takes just 44ms to send each of those. But because the downsampled chroma channels have only half as many scanlines, Robot 36 alternates: it sends one 320 pixel row for Y, then one 160 pixel row for R-Y, then the next 320 pixel row for Y, then one 160 pixel row for B-Y. Each pixel in the R-Y and B-Y channels then covers 4 output pixels.
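Schematically, the transmit loop looks something like the sketch below. The send_* helpers are hypothetical stand-ins for real scanline encoders, and I’ve left out the sync pulses and porches entirely:

[sourcecode lang="C"]
void send_y_row(unsigned char *row) ;       /* hypothetical: 320 px, 88ms */
void send_chroma_row(unsigned char *row) ;  /* hypothetical: 160 px, 44ms */

/* Robot 36 line interleave, schematically. */
void
send_robot36_frame(unsigned char y[240][320],
                   unsigned char ry[120][160],
                   unsigned char by[120][160])
{
    int row ;

    for (row = 0; row < 240; row++) {
        send_y_row(y[row]) ;
        if (row % 2 == 0)
            send_chroma_row(ry[row / 2]) ;  /* even lines carry R-Y */
        else
            send_chroma_row(by[row / 2]) ;  /* odd lines carry B-Y */
    }
}
[/sourcecode]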

I’ve glossed over a lot of details, but that’s basically how color spaces work: we convert an image into an equivalent representation, and then transmit some channels at lower resolution or lower fidelity than the others. This idea also underlies image compression technology like JPEG.

Addendum: I generated the images above using GIMP. If you go to the Colors…Decompose menu, you can split images into three different RGB images, or YCrCb.

Additional Experiments with SSTV, with some ideas….

Previously, I had written an encoder for the Robot 36 SSTV mode. I chose this for a simple reason: it appears to be the most common mode used in downlinks from satellites, such as the ARISSat-1. It’s not a bad choice, and presents reasonable quality in just 36 seconds.

Today, I decided that I should probably go ahead and implement another of the “Robot” modes, specifically Robot 72. It transmits images with the same resolution (320×240) as Robot 36, but with a bit better quality, and I suspect a bit better fidelity. Both modes transform the RGB colors of the original into a different color space with a luminance channel (labeled Y) and the color encoded in an R-Y and a B-Y channel. To speed transmission, Robot 36 downsamples the last two channels into half resolution images in both dimensions (it really only sends a 160×120 image in those channels). Robot 72 does a similar thing, but only downsamples in the horizontal direction, sending R-Y and B-Y as 160×240.

It wasn’t too hard to modify my Robot 36 code to transmit Robot 72. For fun, I set it up and tested it. It works! Sending the resulting file to my Macbook and decoding with Multiscan 3B, I got:

[Image: the Wall-E test image, decoded from Robot 72 by MultiScan 3B]

(The image has been expanded by 2x, to 640×480, which makes it look a bit soft.)

So, anyway, I was thinking about where to take this idea a bit further. I want to build upon the work I’ve done so far and make it into a project that others can duplicate and expand upon, one that maybe promotes SSTV in a way that is amusing and fun.

What I envision is a small box, consisting of a Raspberry Pi, a Raspberry Pi Camera, and a PiTFT display, together with a USB sound card like this one. (You need a USB sound card because while the Pi does have sound output, it doesn’t have sound input.) Add a microphone and a speaker. This collection will be interfaced with a radio: let’s assume for the moment an amateur radio like the little Baofeng BF-888S I’ve been playing with. Add some buttons for the interface.

Here’s what I’m imagining as the use case: it’s an interface to your HT. You could talk, and have it relayed to the radio. You could listen to the radio through the speaker. But you could also press a different button, and it would capture and send an image via SSTV. And if it hears an SSTV image, it will decode it and display it on the TFT display. I’ll probably initially support some of the low resolution black and white modes as well as the Robot 36 and Robot 72 modes. I can also imagine a keyboard interface that will allow you to add text to your live images and send them out as well. The fastest, lowest resolution B/W modes are just 160×120, and transmit in just 8 seconds. With an 8×8 character matrix, that’s 20 columns by 15 rows of text: enough to send the equivalent of a tweet in one image.

To make this work, I’ll have to work on a demodulator. So that’s the next step. Stay tuned.

SSTV travels through the Ether! A minor success!

So, this morning I played around a bit more with my Raspberry Pi code to try to see if I could make an SSTV beacon. The idea was to use two existing bits of code, raspistill and my own SSTV encoder (robot36), and glue them together with a small bit of Python. The code uses raspistill to snap a 320×240 image, a bit of the Python Imaging Library to add some text, and then my own robot36 encoder to convert that to a sound file. The Pi then plays the sound file, which is piped into my $17 BF-888S transmitter, set to VOX mode, so that when it hears a signal, it begins to transmit. For this test, I used the low power setting, transmitting on the 70cm calling frequency.

To receive, I fired up my trusty FT-817, which was then piped into my Windows laptop running the classic MMSSTV software. At first, I tried using the laptop mic to just listen to the sound played on the 817, but the results were less than stellar. I finally found the right cable to do a direct connect, set the levels appropriately, and voila (I doubled the image size for easier viewing):

[Image: the decoded SSTV beacon image (CQ K6HX)]

Not bad! Total distance: maybe 35 feet or so (blocked by two walls). After I was done, I realized that I actually didn’t have an antenna hooked to my FT-817, so I suspect much greater ranges are achievable. The BF-888S is basically operating as an FRS radio here (in fact, the BF-888S can be programmed to operate on FRS frequencies), so even if you don’t have an amateur radio license, you could probably build a similar setup without a lot of hassle.

Fun.

Apologies to Ken and Eric re: SSTV Challenge…

My face is red. I had claimed yesterday that nobody had tried to decode my SSTV challenge, when in fact both Ken and Eric decoded it. Eric was first, sending me this decode:

It’s a bit noisy, because he just played the sound file on his laptop and decoded it on a PC using the microphone input and running MMSSTV.

Ken followed about forty minutes later with his decode, again using MMSSTV.

Pretty neat.

I didn’t think that MMSSTV had the 8s mode in place, but sure enough, if you right click on one of the mode buttons, you can select an 8s B/W mode. It lists it as 160×120, which is more in keeping with modern aspect ratios, and explains why my face looks squished. I will have to dig into the code a bit to see what it would take to make an encoder/decoder which is entirely compatible with what MMSSTV expects.

Thanks again, Ken and Eric, and apologies for not seeing your notes.

An impractical classic SSTV decoder…

A few days ago, I posted a .WAV file for a classic 8s SSTV image and asked if anyone could decode it. Nobody replied (I wasn’t surprised) so I set about writing my own demodulator.

Since I’m inherently lazy, here was my idea: generate the complex analytic signal using my previously debugged Hilbert transform code. Then, for every adjacent pair of complex samples, determine the angle between them by normalizing the two samples (so they have unit length) and then computing the dot product. If you’ve done as much raytracing as I have, you know that the dot product of two vectors is the product of their lengths and the cosine of the angle between them. So, you take the dot product, pass that through the acos function, and you’ll get an angle between 0 and $latex \pi$. You can then multiply that by the sample rate (and divide through by $latex 2\pi$), and you can recover the frequency.
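In code, the per-sample step might look like this (a sketch, assuming you already have the analytic signal from the Hilbert transform):

[sourcecode lang="C"]
#include <math.h>

/* Estimate the instantaneous frequency between two adjacent samples
   (re0,im0) and (re1,im1) of the analytic signal. A sketch. */
double
inst_freq(double re0, double im0, double re1, double im1, double rate)
{
    double dot = (re0*re1 + im0*im1) / (hypot(re0, im0) * hypot(re1, im1)) ;

    /* clamp against rounding error before acos */
    if (dot > 1.0) dot = 1.0 ;
    if (dot < -1.0) dot = -1.0 ;

    /* angle per sample, scaled to cycles per second */
    return acos(dot) * rate / (2.0 * M_PI) ;
}
[/sourcecode]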

So, that’s what I did. If the frequency was in the band between 1500 and 2300 Hz, I mapped it to a grayscale value. For frequencies below 1500, I set the pixels to black. For frequencies above 2300, I set the pixels to white. I generated one output pixel for every input sample. That means the image is too wide, but that’s no big deal: I resized it to 120 columns wide using some of the netpbm utilities (command line graphics programs).
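The frequency-to-pixel mapping is about as simple as it sounds; something like:

[sourcecode lang="C"]
/* Map a frequency in Hz to a grayscale pixel value, 0..255.
   1500 Hz and below is black, 2300 Hz and above is white. */
int
freq_to_gray(double freq)
{
    if (freq <= 1500.0) return 0 ;
    if (freq >= 2300.0) return 255 ;
    return (int) ((freq - 1500.0) / 800.0 * 255.0 + 0.5) ;
}
[/sourcecode]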

And, here’s the image:

That’s a bit teeny, so let’s blow it up to 512×512:

Not too bad for a first try. Some thoughts: the sync pulses seem really wide to me. The spec says the sync pulses should be 5ms wide; at 15 lines per second each line lasts about 66.7ms, so the sync takes up about 7.5% of the line. That’s a lot of the image to leave behind. I’ll have to work on that. I’m not currently doing anything to detect the vertical or horizontal sync pulses, but I’ve done that before with my NOAA weather satellite code, so I don’t anticipate any real problems.

Addendum: The large image shows some JPEG compression artifacts since I expanded it from the small JPEG image without trying to save it at high quality. I can do better.

Addendum 2: I decreased the overall filter size to length 11. It didn’t work very well, because the original file was sampled at 22050Hz, which pushes the portion of the spectrum we are interested in down into a region of the filter response which isn’t very uniform. But if we resample the sound file down to 8kHz, the filter works just fine with just 11 taps. Check it out:

The image is slightly skewed because the line length is no longer an integer number of samples, and the accumulated error causes the sync to drift slightly. A real implementation would track this accurately.

Classic Black and White SSTV timings?

I was trying to determine the exact timings for the classic “8 second” black and white SSTV mode. Copthorne MacDonald suggests 15 lines per second, to make 120 lines in 8 seconds. The vertical sync pulse has a duration of 30ms, and the horizontal sync pulse duration is just 5ms. The sync frequency is 1200Hz, and black and white map to 1500Hz and 2300Hz respectively, with grays interpolated between. The aspect ratio is 1:1 (to match the usual oscilloscope tubes of the day).

So, I wrote a program to generate my best guess from this basic description, and here’s an example sound file:

An 8 second “classic” SSTV sound file, recorded as a WAV file (22050Hz)

Here’s a spectrogram of the first 0.5 seconds or so, showing the vertical sync pulse, followed by a few scanlines:

I decided (somewhat arbitrarily) that the horizontal sync pulse should go in the final 5ms of each scanline.
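To make the timings concrete, here’s a sketch of how a single scanline could be generated under those assumptions (the vertical sync pulse and the WAV output are left out):

[sourcecode lang="C"]
#include <math.h>

#define RATE 22050                      /* samples per second */

static double phase = 0.0 ;

/* emit one sample of a phase-continuous tone at the given frequency */
static short
tone(double freq)
{
    phase += 2.0 * M_PI * freq / RATE ;
    return (short) (32000.0 * sin(phase)) ;
}

/* Generate one scanline: 1/15 second total, with the final 5ms
   devoted to a 1200Hz horizontal sync pulse. pix[] holds 120
   pixel values, 0..255; buf must hold RATE/15 samples. A sketch. */
void
scanline(unsigned char pix[120], short *buf)
{
    int nline = RATE / 15 ;             /* samples per line */
    int nsync = RATE * 5 / 1000 ;       /* samples in 5ms */
    int nvid  = nline - nsync ;
    int i ;

    for (i = 0; i < nvid; i++) {
        /* map 1500Hz (black) .. 2300Hz (white) */
        double f = 1500.0 + 800.0 * pix[i * 120 / nvid] / 255.0 ;
        buf[i] = tone(f) ;
    }
    for (; i < nline; i++)
        buf[i] = tone(1200.0) ;
}
[/sourcecode]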

So, the question of the hour is: can any modern software decode this basic image? I’ll be working on my own implementation, but I’m curious, so I’ll leave it as a bit of a puzzle: if someone emails me the decoded image, you’ll be immortalized in these pages and have my thanks. Otherwise, you’ll just have to wait a day or so to see what the picture actually looks like.

“Classic” Black & White SSTV…

I haven’t had much time for actual experimentation, but I have spent some time researching and some more time thinking about how to properly implement and test analog SSTV modulators and demodulators. I haven’t made much actual progress, but I thought I’d document some of the information that I’ve discovered and my current thoughts.

First of all, I began by trying to discover a bit of the history. Luckily, I’m an ARRL member, so I have access to all of the back issues of QST magazine. In this case, one only has to search for the author “Copthorne MacDonald” to find the 1958-1961 papers on the first amateur uses of slow scan television. I’m still perusing them a bit, but in the first paper, MacDonald suggested using a sub-carrier amplitude modulated signal which would be incompatible with virtually all SSTV modes used today. In 1961, though, he proposed the “classic” black and white, 8 second mode, consisting of 120 lines, frequency modulating between 1500 and 2300 Hz. These numbers were apparently chosen mostly for compatibility with existing telephone-based fax standards of the day, but they also fit nicely within the reasonably uniform passband of most voice transceivers in use for amateur radio.

This kind of “classic” black and white SSTV was apparently fairly common during the 1970s.

Here is some classic SSTV, re-rendered from a cassette recording done in 1980, documenting a QSO between KG4I in Birmingham, AL and N2BJW in Elmira, NY.

Here’s a clever way to say “Happy Trails” from AC5D:

Here’s a more modern implementation, which perhaps demonstrates a wider variety of imagery (a lot of it supermodels), still in the classic 120×120 black and white format:

Why bother with this simple mode? Well, just because it is simple. I also like the speed: the immediacy of the mode is kind of cool. I’m working on good, solid, easy to understand code for this mode, and will post about my experiments in the presence of noise when I get some results.

Experiments with SSTV in the presence of noise…

Last night while watching television, I decided to code up an SSTV modulator for the most popular US mode, which is reportedly Scottie 1. I had done most of the heavy lifting when I created a Robot36 encoder a few years ago, so I mostly cribbed the meat out of that code and modified it to handle the details of Scottie 1, which (since it works in RGB space rather than YUV) is rather simpler than the original code. Once I had that working, I decided to do some experiments using Multiscan, an SSTV decoder that I have on my Macbook Pro.

First, I needed some source material. I chose this picture of myself in front of the Tower of London, for no other reason than simple expediency. It does, however, have some nice features. It contains a fairly uniform sky background, and the tower itself is highly detailed, but with fairly low contrast. Here is the original, which was downsampled to 320×240, padded with 16 lines of gray, and then upsampled to an easier to view 640×512. Consider this the “ground truth” image. (Click on the image to view it at full resolution.)

Using my modulator code, I then encoded it in Scottie 1 mode. I then used a loopback cable to play it back into the line input on my Macbook, and fed it to Multiscan, the better of the two programs that I know of that do SSTV on the Mac. Here’s the resulting image:

Not exactly stellar. In fact, downright blurry. I wasn’t too pleased with the quality of this. My suspicion is that MMSSTV will do a better job, but I left my Windows laptop at work, so I can’t really be sure. It’s also possible (although I think somewhat unlikely) that my modulator is the source of the blurriness.

I also made a version of the audio file with some fairly strong additive white Gaussian noise added.
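There are lots of ways to generate Gaussian noise; here’s a sketch using the Box-Muller transform (not necessarily what my code does, but equivalent in effect):

[sourcecode lang="C"]
#include <math.h>
#include <stdlib.h>

/* One Gaussian sample (zero mean, unit variance) via Box-Muller */
static double
gaussian(void)
{
    double u1 = (rand() + 1.0) / ((double) RAND_MAX + 2.0) ;
    double u2 = (rand() + 1.0) / ((double) RAND_MAX + 2.0) ;
    return sqrt(-2.0 * log(u1)) * cos(2.0 * M_PI * u2) ;
}

/* add white Gaussian noise of a given amplitude to a sample buffer */
void
add_awgn(float *buf, int nsamples, float amplitude)
{
    int i ;
    for (i = 0; i < nsamples; i++)
        buf[i] += amplitude * gaussian() ;
}
[/sourcecode]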

Multiscan wasn’t able to get the sync properly at this signal to noise ratio. I find that a little surprising; it seems to me a rather straightforward problem to solve. You can also see that the overall quality is quite a bit lower.

So, I’m curious: do any other SSTV decoders do better? In the interest of trying to find the best performing decoders, I’ve put up links to the uncompressed 11025Hz .WAV files which I used for these tests. If you try them out with your favorite SSTV decoder, feel free to drop me an email with the decoded images (or links to them on flickr or picasa or whatever), along with details of what software you used and whatever special operations or processing you did. If I get any good responses, I will summarize them in a future posting.

Scottie 1 SSTV, WAV format, 11025Hz sample rate, no noise.
Scottie 1 SSTV, WAV format, 11025Hz sample rate, substantial additive Gaussian white noise.

Thanks for any response! In the future, I’d like to do a more systematic evaluation of analog SSTV modulators, demodulators, and modes.

Addendum: For my reference on the implementation of modes, I used JL Barber’s excellent paper on SSTV modes. In his description of Scottie 1, he says that the first scanline has an anomalous sync pulse (9ms of 1200 Hz). When I tried that with Multiscan, it never got started: it seemed to have some problem. When I deleted that sync pulse, everything started as expected. If anyone tries this experiment, I’d like to hear of any failures to start. Also, if anyone knows of any definitive Scottie 1 examples on the web, let me know.