I think Bill and Pete have been having way too much fun with the radio projects centered around the Arduino and the SI5351, so I decided to join them and ordered one of Adafruit's SI5351 boards (I still have the kit from Jason's Kickstarter which will almost certainly be better once I get up the nerve to do a little surface mount soldering). At the same time, I noticed that Adafruit had the new quad-core Raspberry Pi 2 boards in stock. It's likely that my hummingbird cam may be resurrected onto this board to give me a little extra CPU oomph.
Oh, and the other items? I like to have plastic dinosaurs in my office, and the baseball was a ball I caught during an (otherwise completely forgettable) Oakland Athletics game.
I was informed by email that Pete was unable to achieve the same minor level of success that I had following my directions on how to get the Arduino 1.6.3 environment working with the I2C LCD display. For now, Pete seems content to use the 1.0.x versions, which I suppose is okay, but maybe we will revisit this sometime in the future. In the meantime Bill has had greater success in getting his Si5351 board working as a VFO/BFO, and has it mounted on some copper clad. Looks very nice. I should do a project like this.
Anywho... today's acquisitions will likely show up in a future post/video. Stay tuned.
Tonight's tinkering was inspired by the script by spikedrba that I mentioned in yesterday's post. I took down the hummingbird camera for a little maintenance, and while it was down decided to do some bench testing with new ideas inspired by what I read.
Sadly, I didn't have anything as photogenic as hummingbirds to stare at, so instead I just pointed it at me in my slightly darkened living room as I hacked on the couch. The video is incredibly boring, but I will post a single frame:
First of all, I've added a text annotation with the time and date to every frame. In my hummingbird camera application, it's not clear that I want it overlaying every frame, but it's probably useful in a variety of security applications, so I thought it was worth trying. On the line below, you can see three numbers, which represent the load averaged over one, five and fifteen minutes, followed by two more numbers. The first is the number of non zero-length motion vectors that the camera returns, and the second is the sum of the absolute value of differences between adjacent frames. This application was recording 1280x720 video at 25 fps, and you can see it was using around 36% of the available CPU. Not bad at all. While this version of the script doesn't actually trigger motion detection recording, it is probably doing virtually all the work that such a script would do, so it's pretty clear that my stock, non-overclocked Model B can easily keep up at this frame rate and resolution.
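For the curious, the overlay string is easy to assemble in Python. Here's a minimal sketch: annotation_text is my own hypothetical helper (not part of picamera), and it pulls the load averages from os.getloadavg() and tacks on the two motion numbers. With picamera you would assign the result to camera.annotate_text every second or so.

```python
import datetime as dt
import os

def annotation_text(vectors, frame_diff):
    """Build the overlay line: date/time, the three load averages,
    then the two motion numbers described above.  (This helper is my
    own sketch, not part of picamera.)"""
    now = dt.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    load1, load5, load15 = os.getloadavg()
    return "%s %.2f %.2f %.2f %d %d" % (now, load1, load5, load15,
                                        vectors, frame_diff)

# With a live camera you'd assign this to camera.annotate_text about
# once a second; here we just print it.
print(annotation_text(12, 3456))
```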
Spikedrba's script was very instrumental in figuring out how to set up the pipeline properly to handle this. I also spent some time reading more of the discussion on the picamera github page, and reading the code for the module itself. I'm really very impressed by it.
Once I tighten this up a bit more, I'll be posting a new revision.
My goal in experimenting with the Raspberry Pi camera was to try to make an efficient and effective camera which can detect motion. Previous incarnations of the camera script merely looked at the differences in pixel values between adjacent frames, thresholded them at some value, and then counted the number of pixels which exceeded this value. What I discovered was that it was pretty hard to tune the two threshold values in a way that would not pick up changes due to wind motion of the grassy background.
But it turns out that the Raspberry Pi camera and its associated picamera software have some other tricks up their sleeves. In addition to recording the H.264 encoded video, you can record an alternative stream which contains "motion data", which is essentially some of the raw data used by the H.264 encoder to do motion estimation. This data provides 4 bytes for each 16x16 image block: two signed 8 bit displacements (in x and y) which represent the estimated image velocity, and a 16 bit value which is the sum of the absolute differences of all the pixels in the block from the previous frame. Both would be rather expensive to compute (certainly in Python) but are quick and easy to extract when computed by the camera itself.
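If you're curious what that stream looks like, numpy makes it trivial to pick apart. This is just a sketch based on my reading of the picamera documentation; picamera's own PiMotionArray class will do the same unpacking for you:

```python
import numpy as np

# Per 16x16 macroblock: signed 8-bit x and y displacements, then an
# unsigned little-endian 16-bit sum of absolute differences.
motion_dtype = np.dtype([('x', 'i1'), ('y', 'i1'), ('sad', '<u2')])

# Fake the 4 bytes for a single macroblock: x=-3, y=5, sad=1234.
raw = np.array([(-3, 5, 1234)], dtype=motion_dtype).tobytes()

# This is the step you'd apply to the bytes the camera hands you:
blocks = np.frombuffer(raw, dtype=motion_dtype)
print(blocks['x'][0], blocks['y'][0], blocks['sad'][0])
```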
To test my understanding, I modified my camera script to acquire this data, and then transferred it along with the normal video, and then hacked together some scripts using python and gnuplot to superimpose this data atop the background video (which I've faded a bit to make the data more legible). The black contours represent the difference data, and are spaced at intervals of 100. The red vectors represent the motion data plotted atop the image.
One thing leaps out at me immediately: the motion data is very good at finding the hummingbirds, even when the birds are relatively stationary. While this clip was not taken in particularly high wind, it's pretty clear that those vectors aren't very large in the case of plant motion. Hence, it seems clear I could make a better motion detector by taking advantage of the precomputed motion vectors.
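As a sketch of what that detector might look like (the two thresholds here are pure guesses that would need tuning against real footage):

```python
import numpy as np

# Layout of the per-macroblock motion data, as described above.
motion_dtype = np.dtype([('x', 'i1'), ('y', 'i1'), ('sad', '<u2')])

def motion_detected(blocks, mag_threshold=60, count_threshold=10):
    """Report motion if enough macroblocks carry a large motion
    vector.  Both thresholds are guesses, not tuned values."""
    mag = np.sqrt(blocks['x'].astype(float)**2 +
                  blocks['y'].astype(float)**2)
    return (mag > mag_threshold).sum() > count_threshold

# A frame of all-zero vectors should be quiet:
print(motion_detected(np.zeros(100, dtype=motion_dtype)))
```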
A couple of issues remain though: there are obvious places where the contour data drops out entirely. I'm not sure what that is about: it could be a bug in my conversion script, or something more insidious. I'll go back to the data and find out. Secondly, I'm not sure how capturing this motion data interacts with another picamera feature I use: its ability to record into circular memory buffers. When I figure out these two issues, I'll post (and likely github) another version of my watcher script.
Hope this is of interest to someone out there.
Addendum: While doing more reading on the picamera github site, I found a link to this awesome script, which points out a lot of clever things that can be done. I'll be swiping ideas from it soon!
Last week I was playing around with using "motion-mmal" to capture pictures of hummingbirds feeding at my feeder. That was fun, but if I wanted high resolution pictures, I could not get very high frame rates (maybe 2-5 fps at best). I thought that by writing my own capture application, I could do better. After all, the graphics processor in the Pi is capable of recording HD video and encoding it directly as H.264. There should be some way to use that hardware effectively, right?
As it turns out, there is.
As a tease, here is some of the video I captured yesterday:
It's recorded at 1280x720 and 25fps (more on that later). It takes about 20% of the CPU available on one of my older Model B Raspberry Pis. The motion detection is done entirely in Python, and is a bit crufty, but works well enough to get some good video.
Warning: this code is presented as-is. If you aren't a python programmer, you may not have the skills necessary to understand or use this code, but it is a good basic outline that spells out most of the parts you need. Feel free to adapt the code to your needs. If you redistribute it, it would be nice if you could give a nod to this code and my blog in some fashion, but I'm not going to be insulted if you don't. And if you have any improvements, I'd love to hear about them.
#!/usr/bin/env python
#               __      __
#  _    _____ _/ /_____/ /  ___ ____
# | |/|/ / _ `/ __/ __/ _ \/ -_) __/
# |__,__/\_,_/\__/\__/_//_/\__/_/
#

import numpy as np
import io
import os
import os.path
import fractions
import time
import random
import picamera
import picamera.array
import datetime as dt
import warnings
import platform
from pkg_resources import require
import subprocess

print platform.platform()
print "Using picamera version", require('picamera')[0].version

#warnings.filterwarnings('default', category=DeprecationWarning)

prev_image = None
image = None

def detect_motion(camera):
    global image, prev_image
    with picamera.array.PiYUVArray(camera, size=(256,144)) as stream:
        camera.capture(stream, format='yuv', use_video_port=True, resize=(256,144))
        # print "%dx%d:%d image" % (stream.array.shape[1], stream.array.shape[0], stream.array.shape[2])
        if prev_image is None:
            prev_image = stream.array.reshape([256*144, 3])[:,0]
            return False
        else:
            image = stream.array.reshape([256*144, 3])[:,0]
            diff = np.abs(prev_image.astype(float)-image.astype(float))
            # count the pixels which changed by more than 35 levels;
            # both thresholds here are crude and could use tuning
            changed = (diff > 35).sum()
            prev_image = image
            return changed > 200

def write_video(stream, fname):
    # Write the entire content of the circular buffer to disk.  No need to
    # lock the stream here as we're definitely not writing to it
    # simultaneously
    with io.open(fname, 'wb') as output:
        for frame in stream.frames:
            if frame.frame_type == picamera.PiVideoFrameType.sps_header:
                stream.seek(frame.position)
                break
        while True:
            buf = stream.read1()
            if not buf:
                break
            output.write(buf)
    # Wipe the circular stream once we're done
    stream.seek(0)
    stream.truncate()

with picamera.PiCamera(framerate=fractions.Fraction('30/1')) as camera:
    dir = "/var/tmp/capture"
    camera.resolution = (1280, 720)
    camera.framerate = fractions.Fraction('30/1')
    camera.vflip = True
    camera.hflip = True
    camera.start_preview()
    seconds = 5
    stream = picamera.PiCameraCircularIO(camera, seconds=seconds, bitrate=8000000)
    print "[ Buffer %s seconds/%d bytes ]" % (seconds, stream.size)
    camera.start_recording(stream, format='h264', bitrate=8000000)
    try:
        while True:
            camera.wait_recording(1)
            if detect_motion(camera):
                print "Dumping."
                # generate a filename...
                base = 'cam_'+dt.datetime.now().strftime("%H%M%S")
                part1 = os.path.join(dir, base+"-A.h264")
                part2 = os.path.join(dir, base+"-B.h264")
                camera.split_recording(part2)
                write_video(stream, part1)
                camera.wait_recording(15)
                while detect_motion(camera):
                    camera.wait_recording(1)
                camera.split_recording(stream)
                with open("files.txt", "a") as f:
                    f.write("file %s\n" % part1)
                    f.write("file %s\n" % part2)
                print "Dumped %s %s" % (part1, part2)
                # Copy files to remote server
                dst = 'firstname.lastname@example.org:capture'
                print "Copying %s to %s..." % (part1, dst)
                rc = subprocess.check_call(['scp', '-p', '-q', part1, dst])
                if rc != 0:
                    print "PROBLEM: (rc = %d)" % rc
                else:
                    os.unlink(part1)
                print "Copying %s to %s..." % (part2, dst)
                rc = subprocess.check_call(['scp', '-p', '-q', part2, dst])
                if rc != 0:
                    print "PROBLEM: (rc = %d)" % rc
                else:
                    os.unlink(part2)
                # ready to record some more...
                camera.wait_recording(seconds)
    finally:
        camera.stop_recording()
This would not be possible without the awesome picamera Python module and lots of careful engineering by the Raspberry Pi + Camera designers. They clearly foresaw this kind of possible application, and did everything that they needed to make it run efficiently and reasonably.
A few more short notes:
- The motion detection code is terrible. It works after a fashion, but clearly could be tuned better.
- To save space on my Pi, after capture it uploads each video file to one of my local servers, and then deletes the file. I hardcoded it to use scp via subprocess. If you want to do something else, you can figure out what that might be and do it there. It won't record new video while the scp is occurring: you could spawn a thread or some such to handle the copy and then drop back into the loop if you like.
- You might want to write to a tmpfs file space, so it doesn't eventually wear out your flash card with repeated writes and deletes, particularly if you can transmit these video files off as they are generated.
- The picamera documentation is quite helpful. Indeed, it was my reading of that documentation which formed the basis of this initial script, which likely could not have been written (or not as easily) without it.
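If you go the tmpfs route mentioned above, an /etc/fstab entry along these lines should do it. The 64MB size is just a guess on my part; size it to your longest expected clips, and note the path matches the capture directory in the script:

```
# /etc/fstab -- keep captured video in RAM rather than on the SD card
tmpfs  /var/tmp/capture  tmpfs  defaults,noatime,size=64m  0  0
```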
I will probably produce a tidier, better annotated version of this code and put it on github soon.
Hope this is of interest to some of you.
Addendum: If you want to see what the hardware looks like, you can see it here. Really just a cardboard box holding a pi, a powered hub, and the pi camera taped to the top, hung in the window.
When I got home today, it appeared that I had a few more images from my hummingbird cam. Luckily, I got several nice frames of her (probably), so I put them together into an animated GIF. Nifty. When I get a chance to do more stuff on it this weekend, I hope to get even better recordings, and set up the camera closer. Till then, it's been a fun project.
Addendum: I converted the individual JPEG images into an animated GIF using the ImageMagick toolkit on Linux. The command was:
convert -crop 512x576+512+0 +repage -loop 0 -delay 200 frames.*.jpg loop.gif
The -delay value probably could have been shrunk a bit. I used the -crop option to select just the right half of the screen. When you post animated GIFs in WordPress, you need to post them at full size; the shrunken versions won't animate.
One thing I didn't realize when I set up the Raspberry Pi camera to monitor my hummingbird feeder was that it has a bright red LED which turns on when the camera is enabled. In most cases, this light isn't a big deal, but I am pointing it out the window, so the reflection of it is kind of annoying. You can see it in my earlier pictures, but I didn't realize what it was until later, when I got this picture:
Luckily, it's easy to disable. Just edit the file /boot/config.txt, and add a line which looks like:

disable_camera_led=1

and then reboot. Voila. All is fixed. This might be handy if you wanted to set up a stealth security camera as well. Just noted here, so I don't forget.
I like to see hummingbirds. They seem completely incredible to me, like little hyper Swiss watches buzzing around the garden. I've seen a few of them in our yard, and I've been meaning to encourage them to show up more, so I recently installed a feeder. But while the syrup level has gone down, I have never caught one actually feeding.
Of course, the way to solve this problem (as is true of all problems) is to apply technology! So, I quickly hacked together a little prototype camera that I could hang on my window and stare out at the feeder. I'd been thinking about doing this all week without ever settling on a design. In the end, I wasn't getting anywhere, so this morning I decided on what I thought was the simplest possible prototype. I took one of my old Raspberry Pis, a DLINK powered USB hub to power it, and a Raspberry Pi camera, and taped them all into the cardboard box from an old laptop hard drive. My original idea was simply to duct tape it to the window facing the bird feeder, but a quick trial of this wasn't promising: the duct tape kept relaxing, and the box was slowly drifting down. So, instead I got some wire and made a little hanger that would attach to two paperclips that I jammed in each side. It isn't the greatest way to go, but it works.
I hung it from the top of the blinds in the kitchen, aimed it only very coarsely, and took my first picture by logging in and running raspistill. The aim isn't very good (I only got the feeder at the edge), but this is just a prototype: if I start getting pictures of hummingbirds, I will refine the design/aim, and probably mount the camera outside so I can be a bit closer.
Of course all you see is the drought-stricken side of my hill, but if I get any pictures of birds, I'll be posting them soon.
Addendum: I wandered outside as a quick test of the camera. While I am bigger than a hummingbird, I didn't have any problem tripping the camera. I am concerned that the small, fast motion of a hummingbird may not be sufficient to trip the camera, but we shall see.
A couple of additional words on the software. I use the "motion-mmal" program, a port of the popular "motion" program that is widely used on Linux with conventional webcams, extended to use the Raspberry Pi camera, which is both high quality and inexpensive. The way I have this configured, it does motion detection on a downsampled version of the video feed, and when it detects motion, it dumps the frame with the biggest difference to local disk. Eventually, I may try to get it to capture video, and/or upload it automatically to Google Drive like I have done before for my cat camera. More later.
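For reference, the relevant knobs in my motion configuration look roughly like this. I'm quoting the directive names from memory, so check them against the motion.conf that ships with your build:

```
# motion.conf fragment for motion-mmal -- names from memory, verify
# against your version's sample config
width 1280
height 720
framerate 5
threshold 1500        # number of changed pixels that counts as motion
output_normal best    # keep only the frame with the biggest change
target_dir /var/tmp/motion
```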
Addendum2: Huzzah! Within an hour of setting this camera up, I got my first picture! I guess it works. I'll have to get it aimed a bit better and see how often I can catch birds, and maybe try to set it up to do motion capture too, not just stills.
One of my recent posts highlighted the big pile of development boards that I have lying around. This week, I actually added to that pile in a couple of ways: I found a pair of Beagle Bone Blacks that I had misplaced, a couple of Propeller boards, and most significantly, I ordered an ODROID-C1 from ameriDroid. Stupidly, I didn't read their website carefully enough, so I ended up making TWO orders from ameriDroid, the second to get the somewhat odd power supply needed (5V, 2A, with 2.5mm barrel), to which I added the clear case you see, and also a micro HDMI cable (I know I have one somewhere, but I couldn't find it). The extra goodies swell the total a bit, but are quite reasonably priced: $4.95 for the case, $5.95 for the HDMI cable, and $6.95 for the power adapter. Consider carefully before ordering and you'll save a round of shipping.
ameriDroid did an excellent job of shipping: I had BOTH orders delivered just two days after ordering. They even included this nice hand written thank you, which makes more sense when you realize I had this delivered to my work address.
— Mark VandeWettering (@brainwagon) March 11, 2015
In the following discussion, when I mention the Raspberry Pi, I am speaking of the older model B and B+ variants. I do not yet have a Raspberry Pi 2, which upgrades to a 900MHz quad core with 1GB of DRAM.
Given that I have four Raspberry Pis and three Beagle Bone Blacks of various generations, what compelled me to look at the ODROID-C1? You can read the specifications yourself, but here are the things that were most intriguing to me:
- Quad core 1.5GHz ARM processor. Compared to the 700MHz single core ARM in the Raspberry Pi and the 1GHz CPU in the Beagle Bone Black, one might expect that this little board could handle a lot more stuff.
- 1GB of DRAM, double most of my other boards. Nice!
- Supports little eMMC 4.5 flash modules, which are supposed to be faster than existing microSD cards (more on this below).
- 4 USB ports + 1 USB OTG port. Lots of expansion capabilities.
- Includes an infrared receiver built onto the board. Might be cool for remote/home theater applications.
- Supports both Ubuntu and Android. I'm mostly a Linux guy, but the possibility of using recent Android builds is interesting too.
Okay, so on to my experience...
I didn't order any of the memory cards from ameriDroid with the operating systems preinstalled. Why? I'm kind of a cheapskate, and I have a couple of spare 16GB microSD cards lying around. I started with a class 10 Lexar card. From my Ubuntu laptop, I downloaded their version of Ubuntu (1.1GB compressed, around 4GB uncompressed) and did the usual dance with the Linux "dd" program to copy it to the flash card. I also got an Edimax Wifi dongle from one of my other Raspberry Pis, and the wireless keyboard dongle. Plugged all this stuff together, plugged the microHDMI cable into my old Samsung TV, and powered it on. And...
Nothing. Screen went black on the TV, and the two LEDs on the board (red and blue) were steady, mixing to a purple color. Hmmph.
A little googling revealed that if Linux had booted, it would be flashing the blue LED as a heartbeat indication. I decided to go ahead and try reburning Linux onto my other flash card (which it turns out is a slower class 6 Lexar card). After all, earlier this week I discovered that one of my unbootable Raspberry Pis was in fact an issue with SD card compatibility.
And, of course... that worked! Up until a point. My TV is rather old, and just supports 720p. When it booted, I ended up with my TV saying "Video Mode Not Supported". Grrr. It turns out that you can change that by modifying the boot.ini file on the card (easy to edit if you have another Linux box: mount the card, edit the file to select 720p, save, eject, and reboot).
And that worked. Again, up until a point. On my TV, overscan is a bit of an issue: a significant amount of the screen (including all of the all-important task bar) was actually off screen. Grr... I dragged out a monitor which didn't have the overscan issues. And rebooted.
Into a nice X-windows desktop. It wasn't the Unity layout that I was familiar with from my desktop; it's more old school. On the desktop is a README and an icon labelled "ODROID Utility". Click on it, and it lets you do some of the things that the "raspi-config" program does on the Raspberry Pi: most notably, upgrade the kernel/firmware and expand the root partition to take full advantage of the entire microSD card. If you select "upgrade kernel", it doesn't actually do that; it tells you to use the normal "sudo apt-get update; sudo apt-get upgrade; sudo apt-get dist-upgrade" commands instead. But I did try to expand the drive, rebooted, set up the wireless network using the desktop utility, and then started the apt-get stuff...
But something along here went wrong. Even after rebooting, it didn't appear that the card was expanded, but I didn't notice until the upgrade was underway. There were a couple of other oddities: ssh didn't appear to be working right, I couldn't login remotely. And the Edimax Wifi was really, really slow: just a few kb per second. That upgrade was going to take forever. And while that was happening, I noticed the odd "unexpanded" root partition, which appeared to be out of space. Argh!
So, I redid the entire process again: reflashed the OS, and redid everything again. I also decided to ditch the Edimax connector, and instead plugged the board into my wireless router via Ethernet.
And somehow, things worked better. I'm not sure what I did wrong, but when I tried to expand the root FS, it told me to check to make sure that the root device was on /dev/mmcblk0p2. I exited first, and ran df to check, and it told me that it couldn't access the mount table. "What the heck?" I decided to reboot again, and it showed up properly, not sure why. In any case, I expanded the root fs and rebooted. This time, I saw 11GB free, and decided to proceed with the apt-get upgrades.
Now that I was hooked up via Ethernet, things seemed to work much better. It still took a couple of hours to update all this stuff, but it did, and now it's running pretty well.
If you "cat /proc/cpuinfo", you get:
odroid@r2d2:~$ cat /proc/cpuinfo
Processor       : ARMv7 Processor rev 1 (v7l)
processor       : 0
BogoMIPS        : 3.27
processor       : 1
BogoMIPS        : 3.27
processor       : 2
BogoMIPS        : 3.27
processor       : 3
BogoMIPS        : 3.27
Features        : swp half thumb fastmult vfp edsp neon vfpv3 tls vfpv4
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xc05
CPU revision    : 1
Hardware        : ODROIDC
Revision        : 000a
Serial          : 1b00000000000000
Nice! Quad core. It still doesn't seem super fast, no doubt because of the slow flash cards. You can ssh in using the login odroid password odroid. You can run sudo or su with the same password.
It was a bit of a hassle, but it appears to work.
Overall, the biggest problem I have with the ODROID thus far is that the Ubuntu distribution is just too bloated. It loads a whole bunch of software that might be reasonable on a desktop, but seems out of place (at least by default) on a small system. The Raspbian distribution of Debian actually walks this line pretty carefully: it feels fleshed out, but by default doesn't include absolutely everything you might want, because after all, you might not want all this stuff, and resources on these small boards are fairly scarce. I don't think I need the JDK, cups, kido (I had to look it up too), samba, chrome and firefox (runnable, but not all that pleasant on low memory systems) and god knows what else. This also means that getting your system up to date is slow, because there is just so much software to update. Bleh.
It's also pretty clear that the ODROID distribution is just less polished. The Raspberry Pi might annoy me with its (understandable) insistence on setting your keyboard up for UK English, but it's easy enough to change, and raspi-config handles most of it. Ubuntu on the ODROID seems curiously to come with the default time zone set to Australia/Adelaide, and I had to google for the dpkg-reconfigure magic to fix it. Your expectations and experiences might be different.
One of my twitter followers asked whether I had bought the eMMC card with Ubuntu pre-installed. I did not, and the reason is simple: I'm a cheapskate. I think I paid ~$10 for my last 16GB microSD card, whereas the 16GB eMMC cards sold by ameriDroid cost $40 (more than the entire rest of the computer). Whether they are speedy or not, it didn't seem like good economy to me.
A few last thoughts after my first day with the ODROID-C1:
If you are a relative beginner to Linux, I don't think I'd allow myself to be seduced by the ODROID's higher speed. Get yourself a Raspberry Pi 2: it's definitely set up better for newbies, and has a much larger community to draw from. I found the learning curve for the ODROID to be a bit steeper than I think newbs could handle.
The ODROID-C1 could use a more disciplined Ubuntu distribution. The existing one includes everything and then some. A smaller but more reasoned distribution would be nicer.
I have not figured out what the deal is with the microSD card that wouldn't boot. I am told that Samsung cards are in general better. I've no doubt that the class 6 card I'm using is slow, but the class 10 card I tried didn't work. More experimentation is clearly (but sadly) still needed.
I should experiment with wireless again. I've had good luck with the Edimax dongles on the Pi, not sure what the issue might be.
Buy the AC adapter when you order one. And the HDMI cable if you don't have one.
A lot of the documentation reads like a rough translation. Even their videos can be a little bit mumbly and hard to understand:
Are any of my other readers using the ODROID-C1? I'd love to hear your comments and experiences.
I have an odd obsession with small, relatively cheap hardware development boards. Over the last few years, I've acquired a bunch of them, from Arduino to Raspberry Pi to BeagleBone Black. I thought it might be nice to just do a short video showing what I have around. So I did. Here's a little 25 minute video demoing what I've got lying around.
- Arduino, the classic board. Based upon the ATmega328, an 8 bit processor with 32K of flash, clocked up to 20MHz. Great user community.
- Beagle Bone Black Very cool Linux based machine.
- Raspberry Pi Perhaps my favorite board, I don't have the version 2 board yet, but the version B+ boards are really nice. I particularly like the Pi Camera boards you can use with them.
- WRTNode A very cool, very cheap Linux box, with WiFi. Runs OpenWRT, a stripped down version of Linux, but still cool.
- Wild Fire board A very nifty little board by Wicked Devices, who supplied me with a couple. During the video, I mentioned that I thought these boards were a bit expensive, but checking their website, I see them selling for about $49. If you need an Arduino compatible board with some extra punch, it's a great little board.
- ESP8266 The tiniest and cheapest board that I had. Often used as simple serial-to-WiFi chips, they are actually quite powerful and can be reprogrammed. This Lua-based firmware is a cool example.
I've long suspected that my cat Scrappy had teleportation powers:
Okay, okay, I know he doesn't really. But it was kind of funny.
Just a quick note. I have been meaning to try out the combination of the Raspberry Pi with one of the popular $20 RTL-SDR dongles, to see if the combination would work. I was wondering how well it would work, how hard it would be, how much of the available (small) CPU power it would use. The short answers: reasonably well, pretty easy, and maybe 20% for rtl_fm. That's pretty encouraging. I'll be experimenting with it some more, but here's a short bit of me recording KQED, the bay area PBS FM station, using the pitiful tiny antenna that came with the dongle. It should be noted that my house is in a bit of a valley, and FM reception in general is quite poor, and I recorded this from inside my house, which is stucco and therefore is covered in a metal mesh that doesn't help. Not too bad. I'll work out a better antenna for it, and then try it more seriously.
Nothing too exciting going on, but minor bits of code and play have been done, so I thought I'd update.
First of all, there is a program for decoding SSTV on the Pi, called QSSTV. I don't have a proper sound setup on the Pi yet, so I couldn't test it live on the air, but I did take one of my pregenerated Martin 1 images and asked it to decode, which it did quite well:
Not bad at all. While investigating qsstv's capabilities, I discovered that the latest 8.x versions support digital SSTV. Well, except it isn't built into the qsstv version I'm running (my guess is that the Pi doesn't have quite enough oomph to do the necessary math in real time). But that's pretty cool: I'll have to check that out sometime soon.
I also coded up a Scotty 1 encoder, so now I have encoders for Martin 1, Scotty 1, Robot36 and Robot72 modes. I found this great book online which had many details about the different modes. It was quite helpful. It actually documents the modes a lot better than the ARRL Image Communications Handbook and is, well, free. Awesome.
One question I've been interested in for a while is "which mode is best?" Of course, we have to define what we mean by "best". After all, Robot36 sends an image in half the time of Robot72, and about one quarter the time of Martin M1. My question was: how much better an image can we expect from Martin, given that it takes 4x as long? Another question was "how much bandwidth does each mode use?" In the ARRL Image Communications Handbook, they have a formula which computes bandwidth, but it didn't make a great deal of sense to me.
I don't know how to precisely answer either of these, but I thought I'd write some code to simply compute the power spectra of some sample SSTV recordings. So I did. It basically just loads the sound samples from the SSTV file, windows them (I used the Blackman-Nuttall window, for no real reason), runs an FFT (using the fftw3 library), and computes the power spectrum. It's pretty easy. I then encoded a simple color bar image in three different modes, and graphed them all up using gnuplot.
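My actual code used C and fftw3, but the same computation is only a few lines of numpy. This sketch spells out the Blackman-Nuttall coefficients and checks itself against a pure tone in the SSTV video band (the sample rate and tone here are just test values):

```python
import numpy as np

def blackman_nuttall(n):
    """Blackman-Nuttall window coefficients."""
    k = np.arange(n)
    a0, a1, a2, a3 = 0.3635819, 0.4891775, 0.1365995, 0.0106411
    return (a0 - a1*np.cos(2*np.pi*k/(n-1))
               + a2*np.cos(4*np.pi*k/(n-1))
               - a3*np.cos(6*np.pi*k/(n-1)))

def power_spectrum(samples, rate):
    """Window the samples, FFT them, and return (freqs, power)."""
    w = blackman_nuttall(len(samples))
    spec = np.fft.rfft(samples * w)
    return np.fft.rfftfreq(len(samples), 1.0/rate), np.abs(spec)**2

# Quick sanity check with a pure 1900 Hz tone, which sits in the
# middle of the SSTV video band:
rate = 11025
t = np.arange(8192) / float(rate)
freqs, power = power_spectrum(np.sin(2*np.pi*1900*t), rate)
print("peak near %.0f Hz" % freqs[np.argmax(power)])
```

The real version would read the samples from a WAV file and average spectra over overlapping windows, but the per-window math is exactly this.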
Staring at it, well, they don't seem that different really. I should figure out the frequency bounds that (say) cover 85% of the total energy, but just eyeballing it, it doesn't seem that bad.
I also did some minor tweaking to add in additive white Gaussian noise, but I haven't gotten that entirely working, so I can't yet do an apples-to-apples comparison of how each mode does at various levels of noise. And I'm looking for an HF path simulator too.
That's about it for now. Stay tuned for more.
Previously, I had written an encoder for the Robot 36 SSTV mode. I chose this for a simple reason: it appears to be the most common mode used in downlinks from satellites, such as the ARISSat-1. It's not a bad choice, and presents reasonable quality in just 36 seconds.
Today, I decided that I should probably go ahead and implement another of the "Robot" modes, specifically Robot 72. It transmits images with the same resolution (320x240) as Robot 36, but with a bit better quality, and I suspect a bit better fidelity. Both modes transform the RGB colors of the original into a different color space with a luminance channel (usually labeled Y) and the color encoded in R-Y and B-Y channels. To speed transmission, Robot 36 downsamples the last two channels to half resolution in both dimensions (it really only sends 160x120 images in those channels). Robot 72 does a similar thing, but only downsamples in the horizontal direction, sending R-Y and B-Y at 160x240.
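The downsampling arithmetic is easy to sketch in numpy. This ignores the exact constants the Robot modes specify and how the channels are actually sequenced on the wire; it's just to show the shapes involved:

```python
import numpy as np

def to_yuv_channels(rgb):
    """Split an RGB image (floats in 0..1) into Y, R-Y, B-Y planes.
    The luminance weights here are the common video ones; the exact
    constants the Robot modes assume may differ slightly."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299*r + 0.587*g + 0.114*b
    return y, r - y, b - y

rgb = np.random.rand(240, 320, 3)   # a stand-in 320x240 image
y, ry, by = to_yuv_channels(rgb)

# Robot 36: chroma halved in both directions -> 160x120
robot36_ry = ry[::2, ::2]
# Robot 72: chroma halved horizontally only -> 160x240
robot72_ry = ry[:, ::2]

print(y.shape, robot36_ry.shape, robot72_ry.shape)
```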
It wasn't too hard to modify my Robot 36 code to transmit Robot 72. For fun, I set it up and tested it. It works! Sending the resulting file to my Macbook and decoding with Multiscan 3B, I got:
(The image has been expanded by 2, to 640x480, which makes it look a bit soft)
So, anyway, I was thinking about where to take this idea a bit further. I want to build on the work I've done so far and turn it into a project that others can duplicate and expand upon, and that maybe promotes SSTV in a way that is amusing and fun.
What I envision is a small box, consisting of a Raspberry Pi, a Raspberry Pi Camera, and a PiTFT display, together with a USB sound card like this one. (You need a USB sound card because while the Pi does have sound output, it doesn't have sound input.) Add a microphone and a speaker. This collection will be interfaced with a radio: let's assume for the moment an amateur radio like the little Baofeng BF-888S I've been playing with. Add some buttons for interface.
Here's what I'm imagining as the use case: it's an interface to your HT. You could talk, and have it relayed to the radio. You could listen to the radio through the speaker. But you can also click a different button, and it will capture and send an image via SSTV. And if it hears an SSTV image, it will decode it and display it on the TFT display. I'll probably initially support some of the low resolution black and white modes as well as the Robot 36 and Robot 72 modes. I can also imagine a keyboard interface that will allow you to add text to your live images and send it out as well. The fastest, lowest resolution BW modes are just 160x120, and transmit in just 8 seconds. With an 8x8 character matrix, you can send the equivalent of a tweet (about 120 characters) in one image.
To make this work, I'll have to work on a demodulator. So that's the next step. Stay tuned.