Yesterday’s video showed a very fussy version of Tetsuo Kogawa’s one-transistor FM transmitter, which worked after a fashion, but seemed really squirrely. Almost any motion of anything caused the circuit to behave rather badly as capacitance changed, and I picked up a considerable amount of hum. Today, I rebuilt the circuit onto a piece of one-sided copper-clad PCB material, and it worked much better: hardly any hum, and much less finicky. I didn’t even try to add a clip lead: what you see below is the circuit operating with just whatever signal radiates from the PCB.
I’m still getting multiple copies of the output across the FM broadcast dial, so I’m not sure it’s really that great a circuit, and I’d be terrified to amplify this and send it over a greater distance lest the FCC come hunting me down. But it at least works, and wasn’t very hard to debug once I sorted out the different pinout of the 2N3904 and redid the layout a bit.
I’ve received two requests for information about my “video production pipeline”, such as it is. As you can tell by my videos, I am shooting with pretty ugly hardware, in a pretty ugly way, with minimal (read “no”) editing. But I did figure out a pretty nice way to add some watermarks and overlays to my videos using open source tools like ffmpeg, and thought that it might be worth documenting here (at least so I don’t forget how to do it myself).
First of all, I’m using an old Kodak Zi6 that I got for a good price off woot.com. It shoots at 1280x720p, which is nominally a widescreen HD format. But since I am ultimately targeting YouTube, and because the video quality isn’t all that astounding anyway, I have chosen in all my most recent videos to target a 480-line format, which (assuming a 16:9 aspect ratio) means that I need to resize my videos down to 854×480. The Zi6 saves in a QuickTime container format, using H.264 video and AAC audio at 8 Mbps and 128 kbps respectively.
For general mucking around with video, I like to use my favorite Swiss army knife: ffmpeg. It reads and writes a ton of formats, and has a nifty set of features that help in titling. You can try installing it from whatever binary repository you like, but I usually find that I need to rebuild it to include some option that the packagers didn’t think to add. Luckily, it’s not really that hard to build: you can follow the instructions to get a current source tree, and then it’s simply a matter of configuring it with the needed options. If you run ffmpeg by itself, it will print the configuration options it was compiled with. For my own compile, I used these options:
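It was something along these lines (a sketch rather than my exact configure line; note that libx264 requires --enable-gpl, and libfaac requires --enable-nonfree):

    ./configure --enable-gpl --enable-nonfree \
                --enable-libvpx --enable-libvorbis --enable-libtheora \
                --enable-libx264 --enable-libfaac --enable-libmp3lame \
                --enable-libfreetype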
I enabled libvpx, libvorbis, and libtheora for experimenting with WebM and related codecs. I added libx264 and libfaac so I could do MPEG-4, libmp3lame so I could encode MP3 audio, and, most important for this example, libfreetype so it would build the video filters that can overlay text onto my video. If you compile ffmpeg with these options, you should be compatible with what I am doing here.
It wouldn’t be hard to just type a quick command line to resize and re-encode the video, but I’m after something a bit more complicated here. My pipeline resizes, removes noise, does a fade in, and then adds text over the bottom of the screen that contains the video title, my contact information, and a Creative Commons banner so that people know how they can reuse the video. To do this, I need to make use of the libavfilter features of ffmpeg. Without further ado, here’s the command line I used for a recent video:
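Reconstructed here with stand-in file names, font path, opacity, and pixel coordinates (the structure is what matters; the walkthrough below explains each piece):

    ffmpeg -y -i raw.mov -sameq -aspect 16:9 -vf "\
        [in] scale=854:480, fade=in:0:30, hqdn3d [xxx]; \
        color=black@0.4:854x64 [yyy]; \
        [xxx][yyy] overlay=0:416, \
        drawtext=fontfile=/usr/share/fonts/truetype/freefont/FreeSans.ttf:textfile=hdr:fontsize=16:fontcolor=white:x=10:y=424 [xxx]; \
        movie=cc.png [yyy]; \
        [xxx][yyy] overlay=main_w-overlay_w-8:main_h-overlay_h-8 [out]" output.mp4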
So, what does all this do? Walking through the command: -y says to go ahead and overwrite the output file. -i specifies the input file, the raw footage that I transferred from my Zi6. I specified -sameq to keep the quality level the same for the output: you might want to specify an audio and video bitrate separately here, but I figure retaining the original quality for upload to YouTube is a good thing. I am shooting 16:9, so I specify the aspect ratio with the next argument.
Then comes the real magic: the -vf argument specifies a rather long filter command string. You can think of it as a series of chains separated by semicolons; each filter in a chain is separated by commas, and inputs and outputs are specified by names appearing inside square brackets. Read the rather terse and difficult documentation if you want to understand more, but it’s not too hard to walk through what the chains do. From scale to the first semicolon, we take the input video (the implicit input to the filter chain), resize it to the desired output size, fade in from black over the first 30 frames, and run the high quality 3D denoiser, storing the result in register [xxx]. The next chain creates a semi-transparent background color card which is 64 pixels high and the full width of the video, storing it in [yyy]. The next chain takes the resized video [xxx] and the color card [yyy], and overlays the card at the bottom. We could store that in a new register, but instead we simply chain on a drawtext command, which specifies an x, y, and fontfile, as well as a file “hdr” which contains the text that we want to insert. For this video, that file looks like:
The (too simple) Micro FM transmitter on a breadboard
https://brainwagon.org | @brainwagon | mailto:brainwagon@gmail.com
The drawtext command stores its output back in register [xxx]. The next chain reads a simple PNG file of the Creative Commons license and makes it available as a movie in register [yyy]; in this case it’s just a static .png, but you could use an animated file if you’d rather. The last chain takes [xxx] and [yyy] and overlays them so that the license banner appears in the right place.
And that’s it! To process my video, I just download the footage from the camera to my Linux box, change the title information in the file “hdr”, and run the command. When it’s done, I’m ready to upload the result to YouTube.
A couple of improvements I have yet to make to my usual pipeline. First, I don’t really like the hard edge of the transparent color block; it would be nicer to use a gradient. I couldn’t figure out how to synthesize one in ffmpeg, but it isn’t hard if you have the netpbm utilities:
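Something like this should do it (my sketch, assuming the stock netpbm tools and the same 854x64 banner geometry as above):

    # a black card plus a vertical alpha ramp, merged into a semi-transparent PNG
    ppmmake black 854 64 > card.ppm
    pgmramp -tb 854 64 > ramp.pgm
    pnmtopng -alpha=ramp.pgm card.ppm > mix.png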
Running these commands builds a file called ‘mix.png’, which you can read in using the movie filter in the avfilter chain, just as I did for the Creative Commons logo. Here’s an example:
If I were a real genius, I’d merge this whole thing into a little python script that could also manage the uploading to YouTube.
If this is any help to you, or you have refinements to the basic idea that you’d like to share, be sure to add your comments or links below!
A couple of days ago, I mentioned Tetsuo Kogawa’s MicroFM transmitter, a simple one transistor FM radio transmitter. Tonight, I decided to put it together on an experimenter’s breadboard. I didn’t have the 2SC2001 transistor that Tetsuo Kogawa used, so I just dusted off one of my $.10 2N3904 transistors, and dug the rest of the components out of my junk box. I assembled it in the worst way imaginable, with no real attention to lead lengths (I left them all uncut) and fed with unshielded cable. It “worked”, after a fashion at least, but I counted four images of the transmitted signal up and down the FM broadcast band.
I suspect that if I built this properly on some copper clad with short lead lengths it would work better, but it would probably still be rather horrible in terms of spectral purity. As such, it’s worth experimenting with, but I wouldn’t build something this simple and try to get range beyond my desktop.
I’ve been interested in LOWFER radio (low frequency radio operation) for quite some time. Under Part 15, unlicensed experimenters can transmit signals in the frequency band between 160 kHz and 190 kHz, subject to certain regulations on power and antennas. You can read more about it here.
I was bored the other day, so I decided to breadboard the oscillator section of K0LR’s Simple LOWFER Transmitter. It’s basically a crystal oscillator tied to a 74HC4060 oscillator/divider. To transmit on LOWFER frequencies, you might use a 6 MHz crystal and divide it by 32, creating an output at 187.5 kHz. I didn’t have a 6 MHz crystal (or really any of the other right components), but I had close values, so I put the circuit together with what I had.
And saw lots of noise in the output waveform. The high and low values were probably wavering by several tens of millivolts. I then remembered that I hadn’t installed a decoupling capacitor. Since I hadn’t seen anyone demonstrate the effect of decoupling capacitors, I thought it was interesting enough to tack together a quick video.
In thinking about the 555 timer AM transmitter that I constructed last night and trying to understand how it might work, I eventually ended up with a basic question about pulse-width modulation. It boiled down to this: if you are generating a PWM signal at a fixed rate (say, 540 kHz) with pulses whose duty cycle varies from 0 to 100%, how does this implement amplitude modulation?
If we consider the rectangular pulse centered inside an interval running from -1 to 1, then a pulse with a duty cycle of D percent runs from -D/100 to +D/100 (from now on, we’ll find it convenient to express D as a fraction rather than a percent). We can use Fourier analysis to decompose this square wave into a series of sines and cosines at multiples of the base rate. For our application, we can ignore the DC component (it’ll be trimmed off by a DC blocking cap anyway), and we can assume that all higher multiples of the carrier frequency will be low-pass filtered away. The only thing we really need to look at is the component right at the carrier frequency.
We can do this analytically without too much trouble. The Fourier coefficient is a_n = (1/L) ∫ cos(nπt/L) dt, taken from -D to D (sorry, WordPress isn’t all that good at this, and I wasn’t able to get MathJax to work). If we think of the complete cycle as going from -1 to 1, then L = 1, and the integral evaluates to a_n = 2 sin(nπD)/(nπ); the amplitude of the carrier (n = 1) therefore turns out to be 2 sin(πD)/π. We can make a graph, showing what the amplitude of the sine wave at the carrier frequency will be for varying duty cycles.
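For anyone who wants it written out properly, here’s the computation in LaTeX form:

    a_n = \frac{1}{L}\int_{-D}^{D} \cos\left(\frac{n\pi t}{L}\right)\,dt
        = \left.\frac{\sin(n\pi t)}{n\pi}\right|_{-D}^{D}
        = \frac{2\sin(n\pi D)}{n\pi} \qquad (L = 1)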
What does this mean? If we shift the duty cycle of our PWM waveform, we actually are modifying the amplitude (and therefore the power) of the transmitter output at the carrier frequency. As we deviate more from 0.5, we get more and more energy in the higher harmonics of the carrier frequency.
I’m sure that was about as opaque an explanation as possible, but it suggests to me a simple software simulation that I might code up this weekend to test my understanding.
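In the meantime, here’s the flavor of check I have in mind: a minimal numpy sketch (not a real transmitter simulation) comparing the FFT of one PWM cycle against the 2 sin(πD)/π prediction:

    import numpy as np

    # One carrier cycle on [-1, 1): a pulse of duty cycle D occupies [-D, D].
    # The fundamental's amplitude should match the analytic 2*sin(pi*D)/pi.
    N = 4096
    t = np.linspace(-1.0, 1.0, N, endpoint=False)
    for D in (0.10, 0.25, 0.50, 0.75, 0.90):
        pulse = (np.abs(t) <= D).astype(float)
        c1 = np.fft.fft(pulse)[1] / N      # complex coefficient at the fundamental
        measured = 2.0 * abs(c1)
        predicted = 2.0 * np.sin(np.pi * D) / np.pi
        print(f"D={D:.2f}  measured={measured:.4f}  predicted={predicted:.4f}")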
Stay tuned.
Addendum: We can work out the relative amplitudes of the first three multiples of the carrier frequency:
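Evaluating the general coefficient 2 sin(nπD)/(nπ) at n = 1, 2, 3:

    a_1 = \frac{2\sin(\pi D)}{\pi}, \qquad
    a_2 = \frac{\sin(2\pi D)}{\pi}, \qquad
    a_3 = \frac{2\sin(3\pi D)}{3\pi}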
I mostly avoided the siren song of the 555 timer that seemed to echo through the blogiverse during the recent 555 contest, but when I was out and about last weekend, I picked up 10 of them from Anchor Electronics, and they have been taunting me from the shelf ever since. So, last night I dug out some resistors and caps, and tossed together a simple multivibrator circuit. Today, I was pondering what I could do with it, and I recalled seeing the circuit being used as an AM transmitter. The basic idea is to simply AC couple the audio onto pin 5, and voila. So… that’s what I did!
I’m actually not quite sure I completely understand how this works (it’s not entirely clear to me whether it is more accurate to call this pulse width modulation or frequency modulation) but the circuit does work. I imagine that once I understand it better, I’ll be able to make it work significantly better. But that’s something for the future.
Back when I was into building telescopes (something I haven’t done very much of in the last few years) I developed a desire to try some machining, and managed to pick up a 6″ Atlas mini lathe. And… well… I’ve done very little: it’s sitting on my workbench in the garage. This website demonstrates some cool projects that can be done with inexpensive Taig mills and lathes.
Roger, G3XBM, has been busy experimenting on the Dreamer’s Band: signals somewhere around 8.9 kHz. These signals are actually in the audio range, so all you need to receive them is an antenna (Roger uses a largish loop antenna) and an RF preamplifier feeding into a soundcard. Roger demos his system here, and shows reception of the Russian Alpha beacon station operating around 12 kHz.
I’ve been working on a script or two for generating intros for some of my little YouTube videos, and thought that maybe something like an animation of the Lorenz strange attractor might make a somewhat interesting background. A little tweaking, and I produced the following example (only 10 seconds long, and with some Morse as the background):
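For the curious, the core of such a script is tiny. Here’s a minimal sketch (not my actual script) that integrates the Lorenz equations with the classic parameters; the resulting point trail is what you’d render, frame by frame, into a background animation:

    # Integrate the Lorenz system with simple Euler steps and collect an
    # (x, z) projection of the trajectory (the familiar butterfly).
    sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0
    dt = 0.005
    x, y, z = 1.0, 1.0, 1.0

    points = []
    for _ in range(20000):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        points.append((x, z))

    # hand `points` to whatever renders your animation frames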
You’ll probably see this (or something similar) on front of future videos.
Antenna design and manufacture has historically been pretty, well, primitive. There are reasons for this: early on, solving the large systems of equations to adequately model complex antennas was difficult. Luckily, advances in computers and software make that much more tractable. Now, we can design antennas with dozens or even hundreds of elements, and in complex three dimensional shapes. But manufacturing such antennas has been difficult. Hence, my interest in this article:
According to Bernhard, these antennas are electrically small relative to a wavelength (typically a twelfth of a wavelength or less) and exhibit performance metrics that are an order of magnitude better than those realized by monopole antenna designs.
Haven’t had time to run down the original article yet, but it seems really intriguing. More later.
A bit more digging on yesterday’s topic (crystal microphones) yielded this book published by the U.S. Army, entitled CW and AM Transmitters and Receivers, which includes some additional useful information regarding the construction of crystal microphones.
Carmen and I just got back from a trip to London, and we had a blast. One of the geekiest things we did while there was to take a day trip by train out to Bletchley Park to see the site of the codebreaking efforts by the British during WWII. As any long time reader of this blog must know, I’m pretty interested in codes and cryptography, and this was a bit of a personal thrill for me.
While we were there, we managed to get demonstrations of a real Enigma machine (very cool) and the modern reconstruction they completed of the Turing Bombe. I shot some video of it using a Canon G11, which isn’t all that great (the sound in particular is pretty terrible), but I thought I’d archive it on YouTube in case anyone else found it of interest. If you get the chance to go to Bletchley, I heartily recommend it: we spent four hours there, and it really wasn’t enough. Besides the Enigma and the Bombe, they have a reconstruction of Colossus, the first programmable electronic digital computer, which was used to break the German teleprinter cipher the British called “Tunny”. They also have huts filled with many artifacts of the period, including one containing a bunch of radio equipment, ranging from crystal sets of the 1910s and 1920s all the way to the Piccolo secure modem transceivers that were used in British embassies well past the war. Nifty stuff.
I have some pictures of related nifty topics that I’ll get onto Picasa sometime soon.
Having completed my posting of a new program in celebration of pi day (going back to just spelling it out, since somewhere in the WordPress-to-Twitter chain the HTML entities get dropped), I was reading my Twitter feed and found Vi Hart’s amusing video asserting that “Pi is Wrong”. Click through and watch the whole thing.
Her argument is really derived from Michael Hartl’s Tau Manifesto:
Here’s an interesting project: a tube-based regenerative receiver that uses an LM386 as an audio amplifier. Wacky. Still, it appears to use only a 12 V supply, which may make it a fun and interesting project.
Okay, I’ve been thinking (somewhat abstractly, since I’ve had relatively little free time lately) about what I eventually want my beacon transmitter to be. One of the issues is that I’d like it to be relatively autonomous and low power: I’d like it to be able to run for weeks at a time without human intervention. I’d also like to use it for WSPR beacon transmissions, much as in the original experiments I carried out. The trick to making a long-term WSPR beacon, though, is time synchronization: WSPR transmissions begin on even-numbered minutes, and must be accurate to a couple of seconds or so (a sketch of that timing logic appears after the list below). The DS1307 I’ve been playing with only holds that accuracy for a day or two, so I need something better. A few ideas leapt to mind:
Temperature control the whole thing. I suspect this only postpones the problem for a period of a week or so, rather than curing it.
Synchronize using a GPS. I’d only need to wake up for a few minutes every 12 hours or so to keep it going. Still, seems pretty high tech.
Add a receiver for 10 MHz time signals. Nice, doable with technology I have on hand, but perhaps needlessly complicated and probably more expensive than…
Use an integrated WWVB receiver. These are about $10 and have very low power consumption. Less homebrew, but perhaps a reasonable choice.
Digikey has ’em in stock for ~$10, falling to about $8 if you order 10.
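As for the timing logic mentioned above: whatever keeps the clock honest, the beacon side is simple. A minimal Python sketch (start_transmission is a hypothetical stand-in for keying the real transmitter):

    import time

    def wait_for_even_minute():
        # WSPR transmissions begin at the top of even-numbered UTC minutes;
        # poll the (already disciplined) system clock until we get there.
        while True:
            now = time.gmtime()
            if now.tm_min % 2 == 0 and now.tm_sec == 0:
                return
            time.sleep(0.1)

    while True:
        wait_for_even_minute()
        # start_transmission()   # hypothetical: key the WSPR message here
        time.sleep(115)          # a WSPR transmission lasts about 110.6 seconds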