Monthly Archives: November 2010

Mark’s Bookshelf: Digital Dice by Paul Nahin

Many people use computers to exchange email or pictures, to shop, or even to program for a living. I do all that kind of stuff, but one of the most pleasurable things I do with computers is to use them to answer questions or to gain insight into problems which are too difficult for pen-and-paper analysis.

Digital Dice is a book of probability problems which can be attacked via computer simulation. For the flavor of the sort of problems contained within, consider the Appeals Court Paradox presented in Chapter 16. Briefly: imagine five judges (A, B, C, D, E) who must arrive at a majority decision to overturn a conviction or let it stand. Each judge votes independently, each with some probability of reaching the correct decision. For instance, let’s say that A gives the correct decision 95% of the time, with the remaining judges voting correctly 95%, 90%, 90%, and 80% of the time. What is the likelihood that the panel returns the correct decision?

Now, imagine that judge E (who seems to get things wrong much more often than his colleagues) decides to just be lazy and vote along with A (after all, A is pretty smart; he gets the answer right 95% of the time). What is the probability that the panel returns the correct decision now?

Such problems are fairly tedious to work through analytically, but are quite easy to code up as simulations. By tossing digital dice and running millions of trials, we can quickly gain insight into a wide variety of problems.
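For instance, here is a quick Monte Carlo sketch of the appeals court problem in Python. This is my own throwaway code in the spirit of the book, not Nahin’s listing; the probabilities are the ones given above.

    import random

    P = [0.95, 0.95, 0.90, 0.90, 0.80]   # per-judge chance of a correct vote
    TRIALS = 1_000_000

    def majority_correct(e_copies_a):
        votes = [random.random() < p for p in P]
        if e_copies_a:
            votes[4] = votes[0]          # judge E just votes along with A
        return sum(votes) >= 3           # majority of the five votes correctly

    for copies in (False, True):
        wins = sum(majority_correct(copies) for _ in range(TRIALS))
        label = "E copies A" if copies else "independent"
        print(f"{label}: P(correct) = {wins / TRIALS:.4f}")

The paradox, of course, is that the panel does worse when E tags along with the smartest judge: by my quick check, the probability of a correct decision falls from roughly 0.993 to about 0.988.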

I find this book to be clearly written, and anyone with even a modest amount of programming and mathematical knowledge should be able to complete the projects detailed within. Nahin’s books have been (in my opinion unfairly) criticized by some on Amazon for having the odd typo. I think if that is your criticism, you are missing the forest for the trees: the book isn’t meant to be full of code that you type in, it’s meant to challenge you to write your own implementations and experiment. I’ve got several other books by Nahin, and I generally find his style and choice of subject matter interesting.

If this is the kind of thing that floats your boat, check it out. Recommended.

The Wobbulator

The Wobbulator is a pretty nifty little video modification gadget. Basically, the idea is that by adding a couple of extra magnetic yokes at odd angles to the conventional yokes on a black and white display tube and driving them with a frequency synthesizer, you can create all kinds of amazing patterns. The results are very cool. It seems to me that a software simulation (while somewhat less cool) could still be a fun and interesting project.
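As a zeroth-order sketch of what such a simulation might look like (every frequency, amplitude, and angle here is an arbitrary number I made up, not anything measured from an actual Wobbulator), you could simply perturb an ordinary raster scan with an extra sinusoidal “yoke” field:

    import numpy as np
    import matplotlib.pyplot as plt

    W, H, FPS = 400, 300, 60.0
    t = np.linspace(0.0, 1.0 / FPS, W * H)        # beam time over one frame
    x = np.tile(np.arange(W), H).astype(float)    # normal horizontal sweep
    y = np.repeat(np.arange(H), W).astype(float)  # normal vertical sweep

    # The extra yoke: a sinusoidal deflection at an odd angle, at f_wobble Hz.
    f_wobble, amp, angle = 7919.0, 25.0, np.radians(30)
    d = amp * np.sin(2 * np.pi * f_wobble * t)
    x += d * np.cos(angle)
    y += d * np.sin(angle)

    plt.figure(figsize=(6, 4.5))
    plt.gca().set_facecolor("black")
    plt.scatter(x, -y, s=0.05, c="white")
    plt.axis("off")
    plt.show()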

Bookmarked for later consumption:

Blair Neal – Visualist

Crazy Project Idea: Playing with RTL Logic

I’ve been on a bit of a retro computing kick for the last few weeks. I suppose it is mostly because I picked up an FPGA board to experiment with, and I have been reading old books on computer architectures, looking for fun old machines that were simple enough to implement in VHDL/Verilog. This brought me back to the Apollo Guidance Computer, which was built almost entirely from 3-input NOR gates implemented as RTL (not TTL) ICs.

So, I looked up what those look like, and they are actually pretty simple: an RTL inverter is just a transistor and two resistors, and a NOR gate isn’t much worse. So I thought about building some SSI-style “chips” using some breadboards, pins on a 0.2″ spacing, and some cheap transistors and resistors. The question is: what is the most complex circuit I could actually build with this technique?
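To get a feel for how fast the gate count piles up, here is a toy gate-level check in Python (nothing to do with the AGC’s actual design, just the standard NOR-is-universal constructions) showing that even a half adder costs eight NOR gates:

    def NOR(a, b):
        return int(not (a or b))

    def NOT(a):    return NOR(a, a)            # 1 gate
    def OR(a, b):  return NOT(NOR(a, b))       # 2 gates
    def AND(a, b): return NOR(NOT(a), NOT(b))  # 3 gates

    def XOR(a, b):                             # 5 gates
        n1 = NOR(a, b)
        return NOT(NOR(NOR(a, n1), NOR(b, n1)))

    def half_adder(a, b):                      # 8 gates total
        return XOR(a, b), AND(a, b)            # (sum, carry)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", half_adder(a, b))

At a transistor and a couple of resistors per gate input, even that little adder is a respectable pile of breadboard parts.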

Mark’s Bookshelf: The Practice of Programming

Today, the hash function has selected The Practice of Programming by Brian Kernighan and Rob Pike. On my shelf you’ll find a couple dozen reference books about various programming languages. This book is one of a smaller handful that is actually about programming itself: how to write clear, general, reliable programs that are portable, perform well, and are a pleasure to read. I’ve great respect for Kernighan and Pike: their older book The Unix Programming Environment was one of my principal references when I was still a fledgling acolyte in the ways of Unix. More than just being informational, that book was inspirational. It showed that the Unix environment gave programmers the power to build powerful new things out of simple mechanisms and techniques, and it conveyed the mindset needed to make maximal use of the operating system facilities.

The Practice of Programming fulfills a similar role, but instead of focusing on operating system facilities, it focuses on programming languages. It must be confessed that while it does have brief forays into C++ and Java, it’s a book that consists mostly of C code. For this, and for its concentration on “programming in the small” rather than larger, group-based development techniques, it has been labelled “dated” or “old-fashioned”. I think this is like criticizing a guy who can cut a beautiful dovetail joint by hand for not using glue and nails. To Kernighan and Pike, good programs are actually pleasurable to read; the vast majority of code doesn’t pass that test, and the larger the program and the greater the number of programmers, the less likely it is to pass this simple test of aesthetics. Nevertheless, I would submit that there are lessons here for every programmer. The good thing is that each point of style is illustrated with actual code designed to convey the lesson. Thus, you learn what good code looks like via the Socratic method: by asking questions about code, stimulating thought and critical thinking about code in all its aspects, from design to implementation to testing and debugging.

If you don’t know C, you might need another book to get off the ground, but The Practice of Programming is a book with lasting value on my shelf.

Addendum: Rob Pike has his own (and in some circles controversial) views about what beautiful programs look like. You can see some hint of them in his work on the Go language at Google. One of the explicit design goals for Go was to make programming more fun and more expressive. I think it’s a fascinating new language, and it points out how programming has been led astray in recent years. If you want a taste of what Go is like, check out his Google talk:


http://www.youtube.com/watch?v=rKnDgT73v8s

Mark’s Bookshelf: Mathematical Recreations by Maurice Kraitchik

Today’s book is Mathematical Recreations by Maurice Kraitchik. As might be evident to long time readers of my blog, I have a lasting interest in what might be called “recreational mathematics”. This is a particularly challenging thing to define, since so much of what we might consider recreational mathematics unveils deep and mysterious things within mathematics. At the Hackers conference I just attended, we paid tribute to the greatest popularizer of recreational mathematics: Martin Gardner. I opined that he basically invented recreational math, which met (somewhat surprisingly, and I’d still say incorrectly) with some opposition. One of the counterexamples offered was Maurice Kraitchik.

Many so-called recreational mathematics books are little more than shallowly described algebra problems. Kraitchik’s Mathematical Recreations rises significantly above that. First published in 1942, it includes a wide variety of topics, including gambling, games, magic squares, number theory, geometry, cryptarithmetic, and permutations. You’ll see some things which you’ve seen before, but also probably a few gems that haven’t seen wide coverage. For instance, I found this problem, from page 140, to be fairly nice.

A man bets 1/m of his fortune on each play of a game whose probability is 1/2. He tries again and again, each time staking 1/m of what he possesses. After 2n rounds, he has won n times and lost n times. What is his result?

I’ll leave it to the reader to work out the result, but it does point out clearly that proportional betting isn’t the way to fame and fortune, at least, not if the game is fair (the payout is equal to the odds).
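(A quick numerical check, for the impatient: each win multiplies his fortune by (1 + 1/m), each loss by (1 - 1/m), and the order doesn’t matter.)

    m, n, fortune = 10, 100, 1.0
    fortune *= (1 + 1 / m) ** n * (1 - 1 / m) ** n   # = (1 - 1/m**2) ** n
    print(fortune)   # 0.99 ** 100, about 0.366 of his starting stake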

The section on calendars is pretty nice too, including a good description of Gauss’ method for computing calendrical dates and a nice nomograph that implements a perpetual calendar. Very good.
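I won’t reproduce Gauss’ method here, but as a taste of this style of calculation, here is the closely related Zeller’s congruence in Python (my own sketch, not Kraitchik’s presentation):

    def day_of_week(y, m, d):
        """Zeller's congruence: 0=Saturday, 1=Sunday, ..., 6=Friday."""
        if m < 3:            # treat Jan/Feb as months 13/14 of the prior year
            m += 12
            y -= 1
        K, J = y % 100, y // 100
        return (d + 13 * (m + 1) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

    print(day_of_week(2010, 11, 1))   # 2, and Nov 1, 2010 was indeed a Monday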

Worth having on your shelf.

Contrail Science Overflow

The intertubes are all a-twitter (is Twitter a-twitter?) with the video of what appeared to be a missile launch off the coast of Los Angeles yesterday. It did look pretty weird, but the evidence is mounting that this was not any kind of missile launch, but in fact just the contrails of flight AWE808 from Hawaii to Phoenix. You can read a bunch of terrific evidence at the link below:

Contrail Science Overflow.

Mark’s Bookshelf: Build It Yourself! PVC Rocket Engine

For whatever reason, I have been finding it difficult to find time to blog. Perhaps it is a combination of increased travel, work, or just the shorter daylight hours, but I’m finding it hard to locate the muse. So, for the next little while, I thought I’d scan through my bookshelf, pick out some of the stranger books I’ve acquired, and present them with little mini-reviews, in the hope that some discussion and inspiration occurs.

The first of these is Dan Pollino’s Build It Yourself! PVC Rocket Engine.

Okay, first the disclaimer: I haven’t ever built any of these, and don’t really know that much about them. Even without the addition of potassium nitrate, molten sugar can burn you fairly badly if you aren’t careful. Witness the bandaids on legendary hacker Jeri Ellsworth below if you need a cautionary example (though be careful: it might serve as inspiration instead).


http://www.youtube.com/watch?v=XjIxYQAO8Pk

For all I know, you might end up on some watchlist for buying the book as well, although I’ve flown a bunch of times with only the usual annoyances, and no fresh ones. YMMV.

Dan has a website. Sadly, it seems to be mostly there to entice you into buying his book, with relatively little “free” information, but if you follow his links page you can probably find some more, although a disconcerting fraction of the links seem to be dead. (You might try James Yawn’s Sugar Rocket link page for more information.) The book itself, however, is quite good. You might be scared a tiny bit by the appearance of CAUTION! on very nearly every page, but it presents all the stages of making a powerful rocket engine in very clear detail, including forming the nozzle, the core, the igniters, and the fuel. If I have a criticism, it’s that the book doesn’t present any theory or chemistry; it’s literally just a HOWTO guide for building rocket engines. It also doesn’t even try to cover the legality of making or using these rockets. But it delivers on what it promises: a clear guide that shows you how to make and test a rocket engine which is probably significantly more powerful than the Estes rockets you might have flown as a kid.

I’m not sure I’ll ever get to building one of these, but it is fascinating.

Do Americans Eat 3,790 Calories Per Day?

A couple of times in the last few months, I’ve seen this very odd statistic that the average American consumes 3790 calories per day. This is usually used in conjunction with some argument about why Americans are overall so obese. For instance, here is one such thing from diet-blog.com.

Do Americans Eat 3,790 Calories Per Day?

I had thought that this number can’t possibly be correct: if it were, we wouldn’t have a 35% obesity rate, we’d effectively all be dead from exploding. For an adult male my age, you might anticipate caloric needs of (say) 2400 calories a day (I’m a tall guy). That means that if I consumed this ‘average’, I would have a caloric surplus of 1390 calories every day. There are about 3500 calories in a pound of fat. That means I’d gain a pound every two and a half days; nearly 145 pounds every year! That can’t possibly be correct.
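(The arithmetic, for the record:)

    surplus = 3790 - 2400              # 1390 kcal/day over my needs
    days_per_pound = 3500 / surplus    # ~2.52 days to gain a pound of fat
    print(365 / days_per_pound)        # ~145 pounds gained per year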

Until recently, I hadn’t researched or spotted the error, but now I can tell you what it is.

The “consumption” referred to here isn’t the amount eaten. It’s merely the amount of food which is available to consumers. From the USDA’s FAQ on the source of these numbers:

Q: Does the Food Availability (Per Capita) Data System measure actual consumption?

A: No. The data system, which consists of three data series, does not measure actual consumption or the quantities ingested. The data are not based on direct observations of consumption or on survey reports of consumption. They are calculated by adding total annual production, imports, and beginning stocks of a particular commodity and then subtracting exports, ending stocks, and nonfood uses. Per capita estimates are calculated using population estimates for that particular year. However, ERS’s food availability (per capita) data are useful for economic analysis because they serve as indirect measures of trends in food use. In other words, the Food Availability (Per Capita) Data System provides an indication of whether Americans, on average, are consuming more or less of various foods over time.

Roughly speaking, this 3790 calorie per day number is derived from the vast surplus of food which is produced and sent to consumers. It has nothing to do with the amount that is actually observed to be eaten.
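In other words, it’s an accounting identity, something like the following sketch (the commodity figures here are completely made up, just to show the shape of the calculation):

    # All figures hypothetical, in millions of pounds of some commodity.
    production, imports, beginning_stocks = 900.0, 120.0, 60.0
    exports, ending_stocks, nonfood_uses = 150.0, 55.0, 75.0
    population_millions = 308.0        # roughly the 2010 U.S. population

    available = (production + imports + beginning_stocks
                 - exports - ending_stocks - nonfood_uses)
    print(available / population_millions, "pounds available per capita")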

Sadly, I see this number echoed time and time again on diet blogs, usually in the form of cautionary tales about ‘someone I know who probably does eat that much’. It’s nonsense.

Congratulations to the 2010 World Series Champion San Francisco Giants

Okay, I’m really more of an Oakland fan, but ultimately I’m a baseball fan, and the Giants provided one of the most compelling post-season runs in recent memory. Narrowly winning the division on the last day of the season, avoiding a potential three-way tiebreaker. Agonizing one-run games in round one against Atlanta. A great victorious round as underdogs against the Phillies. And ultimately triumphing over the Texas Rangers (again as underdogs) in five games to capture the series.

It’s really difficult to win the World Series. Among professional sports, baseball has the greatest parity: the best teams might win 60% of their games or so. The idea that the best team will prevail just isn’t that likely in reality (ask the Yankees). But this year, from September until the culmination on November 1, the Giants were arguably the best team in baseball, and they certainly provided this fan with some amazing baseball moments.

Congratulations to Edgar Renteria for winning the World Series MVP as well. A fitting end to a great career.

The BPSK1000 Telemetry Modem for ARISSat-1

The legendary Phil Karn, KA9Q is apparently the brains behind the digital telemetry modem that will be used aboard the ARISSat-1, a satellite designed to be tossed off the ISS sometime next year. From his paper:

ARISSat-1 will carry a new telemetry modulation and coding scheme, BPSK1000, designed to handle the severe fading often encountered with low orbit satellites without attitude control. Its performance and the link budgets for the ARISSat-1 spacecraft are such that reliable reception should require only a simple whip or ground plane antenna, a conventional 2m SSB receiver, and a reasonably modern personal computer with audio A/D input.

BPSK1000 uses differential binary phase shift keying (DBPSK) at a channel symbol rate of 1 kHz in a SSB bandwidth. With constraint length 7, rate ½ forward error correction (FEC), the user data rate is about 500 bits/sec. HDLC framing provides application flexibility (including the ability to carry AX.25 in other applications) and a deep (16 second) convolutional interleaver provides strong protection against fading.

The BPSK1000 Telemetry Modem for ARISSat-1

I wasn’t able to find any links to the software, but I’m not panicking, since the launch is several months away. Worth looking into, though.

While googling, I found that KE9V had mentioned this a couple of days ago. One of his commenters asked, “Why not just use AFSK/FM like all the traditional packet satellites?” The reason is simple: it doesn’t work as well. The way these things are usually evaluated is in terms of Eb/N0, the ratio of energy per bit to noise spectral density, usually quoted for a given error rate (10^-5 seems typical). For AFSK/FM at 1200 baud, the required Eb/N0 works out to 24 dB. For Karn’s proposed technique, it works out to a mere 6.7 dB. In practical terms, this means the satellite can operate at much lower power levels (it’s going to be sending out only 100 mW) and the signal can still be reliably detected with omnidirectional antennas.
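To put that 17.3 dB difference in perspective (my arithmetic, not Karn’s):

    afsk_db, bpsk_db = 24.0, 6.7         # required Eb/N0 values from above
    advantage_db = afsk_db - bpsk_db     # 17.3 dB
    print(10 ** (advantage_db / 10))     # ~54x less energy per bit needed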

Very cool.

Somewhere… over the (simulated) rainbow revisited…

A couple of months ago, I did some simple simulations of light refracting through raindrops in the hope of understanding the details of precisely how rainbows form. The graphs I produced were kind of boring, but they did illustrate a few interesting features of rainbows: namely, the double rainbow, and the formation of Alexander’s band, the region of relatively dark sky between the two arcs.

But the pictures were kind of boring.

So, today I finally got the simulation working, did a trivial Monte Carlo simulation tossing fifty million rays or so, and then generated a picture by converting the spectral colors to sRGB. Et voilà!

Horribly inefficient, really, but a satisfying result.
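For the curious, the core of the calculation is tiny. Here is a stripped-down sketch (not the actual code behind the picture above): toss random impact parameters at a spherical drop and histogram the deviation angle for rays making one internal reflection. The pile-up near 138 degrees (about 42 degrees from the antisolar point) is the primary bow.

    import numpy as np

    N = 1_000_000
    n_water = 1.333                        # a single index; no dispersion here
    b = np.sqrt(np.random.rand(N))         # area-weighted impact parameters
    i = np.arcsin(b)                       # angle of incidence
    r = np.arcsin(b / n_water)             # angle of refraction (Snell's law)
    D = np.degrees(np.pi + 2 * i - 4 * r)  # deviation after one internal bounce
    hist, edges = np.histogram(D, bins=90, range=(90.0, 180.0))
    print("brightest bin starts at", edges[np.argmax(hist)], "degrees")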

Addendum: In comparing the results to actual photographs of double rainbows, it seems like my pictures are scaled somewhat strangely (the distance between the arcs seems large compared to the radius). I suspect this is mostly due to the linear projection that I used and the very wide field (the field of view of the camera is nearly 100 degrees, which compresses the center and expands the outside). I’ll have to make some narrower-field pictures tomorrow when my mind can handle the math; I’ve reached my limit tonight.