Success: Robot36 encoder works…

Picture 1

Well, it works! Using the information in the paper I linked earlier in the day, I spent some time and managed to code up a Robot36 SSTV encoder. Above, you see the image decoded by Multiscan on my MacBook. Here’s a link to the .wav file:

Author: Mark VandeWettering · Categories: Amateur Radio

11 thoughts on “Success: Robot36 encoder works…”

  1. Cool stuff! I’m looking to build an AVR-based board to do SSTV (or maybe use one of those cheap STM32 boards). BTW, your download link is broken.

  2. Hello Mark,

    That’s a great experiment; I would like to try it. Can you send me the robot36.c file? The link is broken on your site. Thank you! Have you done further experiments with SSTV on Arduino?

    Bernard Van Haecke KI6TSF

  3. Mark,

    I’m using a serial camera and an arduino to pipe image data over a VHF radio, all non-standard. I’d like to give your code a try because SSTV would let others get in on the image receiving fun.


    — Greg

  4. Hi,

    I found your website and it is very useful for my project. Thank you very much for this resource.

    I have a question, if you can help. I found that the time per pixel in an image scan line is shorter than one cycle of the audio tone: about 0.25 ms (250 us) per pixel. How can you modulate a 1500 Hz tone in 0.25 ms? One cycle of 1500 Hz needs 666.6 us, or 0.6666 ms.

    Thank you in advance if you can help explain this to me.


  5. I’m not 100% certain I understand your question. The scanline time for Robot36 is 88 ms for the Y data. There are nominally 320 pixels per scanline, which means that each pixel takes up only 0.275 ms. As you pointed out, this is not a full cycle of 1500 Hz (it won’t be a full cycle of any frequency below about 3.6 kHz), but we don’t _have_ to output a full cycle. The code in ScanlinePair just makes sure we end up with the right number of samples, and ensures (in a fairly crude and unoptimized way) that each pixel in the image contributes to some samples in the output. The result isn’t easy to explain without more math and signal processing than I feel comfortable with, particularly in a comment, but it works out fine for slowly varying signals; it has more difficulty with lots of high-frequency changes. A more detailed analysis of the mode would be fun to do. I might give it a try sometime.

    One way to think about the demodulator is to imagine that you are looking at a window of samples around your sampling time. The question you want to answer is: “what frequency do I think he was trying to emit _at this moment_?” If your window is wide (you have lots of samples) and the signal is varying very slowly (or is constant), you can probably do a really good job. But if you keep that wide window on a real image signal, it will contain information from other nearby pixels. In most realistic images, the values nearby are fairly correlated with the pixel you are looking at, but as they get further and further away, the information you gain is less and less useful. At a certain point, you don’t get any extra benefit from having those extra samples.

    You can actually make a demodulator which looks only at windows that contain two samples. (You can think of this as taking your complex input signal and solving for the angle that rotates one sample onto the next, accumulating the frequency.) In fact, my first demodulators did precisely this. The result can be noisy and can alias, but it works surprisingly well. At each step, you are estimating the frequency just on the basis of two adjacent samples. Not ideal, but it works…

  6. Where did the robot36.c code go? I would love to use this as a starting place for a field day project.

Comments are closed.