Tubes? Who uses tubes anymore?
For the last week, I’ve been embarking on a ham radio “trip down memory lane”. Well, it would be memory lane if I had any real memories of the tube-based equipment that was a staple of the ham shack until probably the 1970s or so. But if I have a personal philosophy for my little projects, it’s that one has to look back to gain perspective on our current technology. Or something like that. Perhaps that’s just a rationalization for spending time reading old books about vacuum tube design. Or perhaps this is all motivated by the idea of building, from scratch, a radio that glows. Or perhaps it’s to demonstrate that I understand the similarities between tubes and FETs.
Whatever the motivation, I’ve been looking around at projects that people have done. A popular one seems to be the “twinplex” radio, which uses a single dual-triode tube in a regenerative receiver. Staring at circuits and reading Basic Theory and Application of Electron Tubes, I’m beginning to understand how these circuits work. And it turns out YouTube has a lot of nice, inspirational videos of people’s projects, such as the following:
https://www.youtube.com/watch?v=PYeB7nGwGv0
Now that the winter is over, our local flea markets at Livermore and De Anza should be starting up again soon. Perhaps I’ll keep an eye out for the components.
Addendum: Here’s another link for inspiration.
Comments
Comment from Kragen Javier Sitaker
Time 2/28/2010 at 12:52 am
I’ve been thinking a lot about tubes recently. The 955 acorn tube came out in 1933 and could amplify a 500MHz signal then; so why were tube computers so slow? You’d think that would allow you to run a bit-serial full-adder at at least a hundred megabits per second, but actual machines of the 1950s ran at more like a hundred kilobits. I’m pretty ignorant about vacuum tubes and microwave circuit design, so it could be something really obvious. Any idea?
Comment from Kragen Javier Sitaker
Time 2/28/2010 at 12:53 am
Oh, and thank you for the awesome etext pointers!
Comment from Alan Yates
Time 2/28/2010 at 4:28 am
I think the main issue with making tube-based computers go fast was the physical size of the technology. A full-adder using bulky empty-state technology would have been many racks of equipment and keeping it all working synchronously at hundreds of MHz would have been an enormous challenge.
Even if you dealt with the skew and latency problems with precision geometry, vacuum tube technology is fundamentally fairly high-Z, which is challenging to integrate with the large parasitic reactances associated with physically large machines. Thermal issues were already a problem, and jamming it all closer together while driving the larger parasitics would have made them worse; the energies involved would have been even more enormous. Making stuff run really fast is much easier when it is tiny, as the propagation delays and parasitics are less significant.
Even when discrete transistors came along, the machines were still large and slow by modern standards. IBM’s SLT (1960s) was a step towards integrating the machine elements into smaller packages, but it was still bulky. MSI and LSI brought us machines like the Cray Y-MP (late 80s): still huge, but fast because it was massively parallel and built around transmission-line principles to keep skew and latency under control. VLSI packed full microprocessor systems onto a single die and gave us the 1980s consumer PC boom. These machines were cheap and reliable enough for consumer products purely because the electronics was small enough to largely ignore the speed of light at their fairly low clock rates, yet still fast enough to be useful machines.
30 years later we are into the limitations of the technology again. On-die skew and propagation delays dominate the design of the chips themselves, and interconnects are more and more designed as transmission lines. Modern RAM technology like DDRx (and all those evil Rambus patents) is all about getting as much performance as possible out of the imperfect connections, whose physical extent is significant, between the devices that implement the machine. We cheat the laws of physics a bit by making the machine’s internal connections more and more asynchronous and serial in nature while doing more per tick with complex parallelism. These are ideas that simply had not been thought of yet back in the vacuum tube days, and would not have been practical to implement then anyway. The relative cheapness of the photolithographic production technology of solid-state digital electronics lets us build increasingly complicated machines very small. Neither the complexity nor the tiny extent used to achieve the excellent performance of modern computer hardware was feasible with vacuum tubes.
Still we hold to Moore’s Law, largely by architectural improvements. The physics of the switching elements is the next limit. Once you get down to a few atoms per switch and a few electrons per signal, quantum effects ruin your ability to build a working machine. Still, we’ve been saying we’ve hit “The Limit” for so long now that it is tempting to believe there is no limit. In a sense this may be true: our ingenuity in squeezing more and more computational performance out of less and less mass-energy seems to have little bound.
Comment from Kragen Javier Sitaker
Time 2/28/2010 at 5:36 am
A full adder isn’t “many racks of equipment”. It’s two half-adders and an OR gate. A half-adder is an XOR gate and an AND gate. I think you can more or less configure a pentode as either an XOR gate or an AND gate. So I’m guessing a full-adder is somewhere around six vacuum tubes and six capacitors. There were entire *computers*, like the LGP-30, that had under 100 vacuum tubes, and even fairly fast computers like the Bendix G-15 that had under 500. So I’m not proposing they should have built *bigger* computers. I’m proposing they should have built *smaller* ones in which the tubes switched more often, and I don’t know why they didn’t. Was it a lack of theory? (As you point out, asynchronous logic is still relatively underdeveloped even today.) A lack of fast storage devices? (There’s no way you could get megabits per second out of the drums of the time.)
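To make that gate count concrete, here is the decomposition as a quick Python sketch (the tube-level circuit is another matter, but the logic really is this small):

    # Full adder = two half-adders plus an OR gate.
    # Each half-adder is one XOR and one AND.
    def half_adder(a, b):
        return a ^ b, a & b            # (sum, carry)

    def full_adder(a, b, cin):
        s1, c1 = half_adder(a, b)      # first half-adder
        s2, c2 = half_adder(s1, cin)   # second half-adder
        return s2, c1 | c2             # OR gate merges the two carries

    # Sanity check over all eight input combinations:
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                s, cout = full_adder(a, b, cin)
                assert a + b + cin == s + 2 * cout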
Microwave-frequency circuitry built out of vacuum tubes — with coaxial transmission lines of carefully matched lengths, etc. — wasn’t a new problem in the late 1950s. There had been radar systems of some complexity since the 1930s, which I believe is what the 933 was developed for.
Comment from Kragen Javier Sitaker
Time 2/28/2010 at 5:46 am
I meant 955, not 933, sorry.
Comment from Kragen Javier Sitaker
Time 2/28/2010 at 5:56 am
…also, in Turing’s note proposing the building of the ACE, he specifically calls out the delay lines used in radar systems as a possible memory, which were in fact what were used in the Pilot ACE. Apparently radar systems used them to screen out stationary clutter.
Comment from Alan Yates
Time 3/1/2010 at 5:45 am
Sure, you only need a few tubes to full-add a few bits, but when you get up to usable word sizes the carry propagation becomes an increasingly difficult thing to manage. Remember that the carry-out can’t be computed until the carry-in and the two operands are stable, and letting that ripple down multiple full-adders takes time. There are lots of ways to improve the situation, such as carry look-ahead and different ways of slicing up the logic to reduce propagation delays, but the circuit complexity rises very fast.
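To illustrate the ripple concretely, here is a rough Python sketch of a ripple-carry adder; in hardware, every iteration of this loop is another gate delay that the high bits must wait out:

    # Ripple-carry addition: one full adder per bit, chained through
    # the carries; settling time grows linearly with word width.
    def full_adder(a, b, cin):
        return a ^ b ^ cin, (a & b) | (cin & (a ^ b))

    def ripple_add(a_bits, b_bits):
        """Add two equal-width little-endian bit lists."""
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out, carry

    # Worst case: 1111 + 0001 forces the carry through every stage.
    assert ripple_add([1, 1, 1, 1], [1, 0, 0, 0]) == ([0, 0, 0, 0], 1)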
I guess you could “microcode” away that problem with tight hardware loops ripple-carry-adding multi-bit words, but then you are basically trading cycles for hardware complexity again (effectively dividing your clock rate by the word size). Registers would still be required (say, 2 tubes per bit), plus all the control logic to orchestrate the machine’s operation. Control logic is often a killer, even with microcoding simplified to essentially a look-up table and multiplexers. I guess you could build a rectifier table decoder; I’m not sure how fast you could make it work, though.
What you suggest isn’t so crazy, though. Serial CPUs exist; plenty of them are used as toy cores in FPGAs for housekeeping and minor computations. Their performance is nothing to write home about, but they are still a good engineering solution because they are fast enough and take up relatively few cells.
I guess you could try building a tube machine along these lines. 🙂
You might want to start with 7400-series logic, allowing you to debug the general concept cheaply, then port whatever gates you use to equivalent tube implementations. If you can get the 7400-series version running at more than 50 MHz, then maybe with enough time and effort the vacuum tube version could be made to work.
Comment from Kragen Javier Sitaker
Time 3/1/2010 at 11:37 pm
Drum computers (like the IBM 650 and 704, the LGP-30, and the G-15) and delay-line computers (like the ACE and UNIVAC) very commonly worked bit-serially, which eliminates the carry path length problem. There’s no obvious reason why you couldn’t store the bits of a word on adjacent tracks of a drum instead of bit-serially on a single track, but I think it was very atypical to do so.
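For what it’s worth, the bit-serial scheme is tiny to model: one full adder reused every tick, with the only state being a single carry bit. A rough Python sketch:

    # Bit-serial addition: one full adder time-shared across the word,
    # LSB first; the only stored state is a single carry bit.
    def serial_adder(a_stream, b_stream):
        carry = 0
        for a, b in zip(a_stream, b_stream):
            yield a ^ b ^ carry
            carry = (a & b) | (carry & (a ^ b))

    # 6 + 7 = 13, bits LSB-first, padded so the final carry emerges:
    assert list(serial_adder([0, 1, 1, 0], [1, 1, 1, 0])) == [1, 0, 1, 1]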
There were some tubes which could store multiple bits in one tube; I think Dekatrons, which could store almost 4 bits, were the most common of these. And of course there were the 1024-bit Williams tubes. But both Dekatrons and Williams tubes were really slow, around 10 000ns, the Dekatrons because they’re gas tubes and the Williams tubes, well, I don’t know why.
ROM lookup tables are labor-intensive to make by hand, but fairly inexpensive and very reliable; for N words of M bits, you need about NM/2 diodes (semiconductor diodes were used in radios before 1910, and good, cheap ones were available from about 1950) and 2M decoders of √N outputs each (ideally, M of them sourcing current on their outputs and M sinking it, but otherwise you can use an extra transistor or triode per output on M of them). The AGC used “rope memory”, which I think used a single N-way decoder instead of 2M √N-way decoders, and ferrite cores instead of diodes, but the principle was the same.
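As a quick check on that arithmetic (the half-populated diode matrix and the square decoder split are the assumptions here), a Python sketch:

    import math

    # Rough diode-ROM cost: N words of M bits needs ~NM/2 diodes
    # (about half the bit positions populated, for typical contents)
    # and 2M decoders of sqrt(N) outputs each (M sourcing, M sinking).
    def diode_rom_cost(n_words, m_bits):
        diodes = n_words * m_bits // 2
        decoders = 2 * m_bits
        outputs_each = math.isqrt(n_words)
        return diodes, decoders, outputs_each

    # e.g. 1024 words of 32 bits:
    assert diode_rom_cost(1024, 32) == (16384, 64, 32)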
What do you mean by a “rectifier table decoder”?
(You may be able to dispense with the decoders if you have an already-decoded input handy, like the output of a Dekatron. I’ve been trying to figure out how hard it would be to build an arbitrary finite-state machine of up to ten states out of a Dekatron and some handmade diode ROM. I think you’d need at least ten more amplifiers (e.g. power transistors or triodes) to pull the new cathode of the Dekatron below zero, and you might need an additional Dekatron to latch the old output during the transition. But Dekatrons are really slow, anyway.)
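The abstract version of that machine is just a next-state table: the Dekatron holds the already-decoded state, and the diode ROM maps (state, input) to the next state. A toy Python model, with a made-up transition table purely for illustration:

    # Hypothetical 10-state FSM: Dekatron = state register (0-9),
    # diode ROM = next-state table indexed by (state, input bit).
    # This particular table (count 1-bits mod 10) is invented for
    # illustration; any 10-state transition table would work the same way.
    NEXT_STATE = {(s, bit): (s + bit) % 10
                  for s in range(10) for bit in (0, 1)}

    state = 0
    for bit in [1, 1, 0, 1]:
        state = NEXT_STATE[(state, bit)]
    assert state == 3   # counted three 1-bits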
Your suggestion of prototyping in ICs is a good one, but I think 7400 might be the wrong series to use; even a 74S04 typically has a 3ns propagation delay, which means you’re not going to get much above 300MHz even with a single-gate path length, and a plain 7404 is quite a bit worse. Also, they’re a lot less finicky about low-current EMI than CMOS and, presumably, than vacuum tubes, since both IGFETs and “Audions” are basically capacitive-input devices, and vacuum tubes typically require quite high voltages, so they might not flush out certain issues. Unfortunately, even modern 74HC04s seem to be pretty slow, like 8ns: apparently 8× slower than the 955 triode from 1933. (But that 500MHz number probably means it can linearly amplify a 500MHz sine wave; can you get it to do something noticeably nonlinear a billion times a second? I have no idea. It might take a little longer to saturate it. Turing’s ACE notes give a number of 8ns, but I suspect that the ACE, like most vacuum tube machines, wasn’t built with acorn tubes.)
I don’t know if you’ve seen this, but Tom Jennings designed and, I think, started building a small, very slow tube computer in the last few years, called the Universal Machine. I think he might not have been doing much on it lately.
It seems like, for machines operating at microwave frequencies, electrical delay lines might be superior to latches and cores for registers. Apparently you can buy 500 feet of cable-TV cable for US$40 now, and I think the prices on alibaba.com can go down to a quarter of that. At 1Gbps (at which speed you’d have to splice in some amplifiers if you use ordinary TV cable) that would be about 600 bits, and at 100Mbps it would be about 60 bits. A few spools of that would give you some pretty serious register capacity.
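The capacity arithmetic, for the record (the ~0.85 velocity factor is my assumption for foam-dielectric CATV coax; the length and bit rates are as above):

    # Bits stored in a coaxial delay line = propagation delay x bit rate.
    C = 3.0e8                # speed of light, m/s
    VF = 0.85                # assumed velocity factor for CATV coax
    length_m = 500 * 0.3048  # 500 feet in metres

    delay_s = length_m / (VF * C)
    for rate_bps in (1e9, 1e8):
        print(f"{rate_bps:.0e} bps: {delay_s * rate_bps:.0f} bits")
    # -> about 600 bits at 1 Gbps, 60 bits at 100 Mbps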
Turing actually considered electric delay lines for the ACE, although his notes suggest he was considering doing FDM (presumably CW?) around 30GHz in a copper waveguide, not just dumping unmodulated pulses one at a time into a bunch of coaxial cables. His survey table shows them as better than acoustic delay lines in every way, often by an order of magnitude, except for being twice as expensive. Yet he devotes 11 pages of the proposal to explaining how to make acoustic delay lines work, and nine words to electric delay lines.
Are you familiar with WireWorld? It’s a toy, a cellular automaton for digital logic; being a CA, it incorporates transmission-line delay naturally. A few years back I built a bit-serial full-adder in it, but with the propagation delays of the gates and the transmission delays, it took about 21 generations for the carry to cycle back around and be ready for the next bit. But each gate could process a pair of bits every 4 generations smoothly. (As you can imagine, this took quite a bit of tweaking of the transmission line lengths.)
It turned out that you could feed five bit-interleaved pairs of numbers through it, bit-serially, and it would correctly produce their five sums bit-interleaved on its output.
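The reason the trick works is that each stream’s carry only has to be back at the adder five ticks later, exactly when that stream’s next bit arrives. Here is a toy Python model, with the carry path as an explicit five-slot delay line:

    from collections import deque

    # One serial adder time-shared across five bit-interleaved streams;
    # the carry "transmission line" is a five-slot delay, so each
    # stream's carry returns just as its next bit shows up.
    def interleaved_add(a_bits, b_bits, streams=5):
        carries = deque([0] * streams)
        out = []
        for a, b in zip(a_bits, b_bits):
            c = carries.popleft()
            out.append(a ^ b ^ c)
            carries.append((a & b) | (c & (a ^ b)))
        return out

    # Five copies of 1 + 1, bits interleaved LSB-first across streams:
    a = [1] * 5 + [0] * 5
    assert interleaved_add(a, a) == [0] * 5 + [1] * 5   # each stream: 2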
I doubt I’ll ever work with a logic family in real life where that trick works in exactly that way. Signals in real cables aren’t pure, single-directional, and self-reshaping; they’re fuzzy and get fuzzier as they travel, they slosh back and forth in the transmission line whenever they encounter the slightest change in impedance, they ring in weird places, they jump from one line to another, they glitch from timing skew, and so on. But it was still inspiring, seeing one time-shared adder do the job of five.
Comment from Alan Yates
Time 3/8/2010 at 4:05 pm
Two words: “build it”. 🙂
I’d love to see a machine like that working.
Yes, I meant diode ROM decoders; “rectifier” was an attempt at a generic term covering any particular implementation technology. I realise solid-state rectifiers have been available for a very long time, and I wouldn’t penalise any implementation that used them along with vacuum tubes.
I am indeed familiar with WireWorld. I’ve built some trivial stuff in it, and always marveled at what others have managed to build with it.
Core rope ROM is probably way too slow for the kind of computer you are talking about. Braiding core rope ROM would be a PITA compared to a diode matrix. Of course diodes have speed limits too.
I’d still start with whatever 7400 series logic family you can get in the quantities required at a reasonable price. First make it work, then make it fast.
The alternative technologies you’ve hinted at, using RF lines and non-linear elements as switches, would be interesting, but I suspect extremely frustrating to build and debug. At least something like that would be vaguely similar to optical computers and could test the general concepts.