# brainwagon

"There is much pleasure in useless knowledge." — Bertrand Russell

26 May 2016

## My Fizz Buzz solutions…

Joel Grus blogged about a job interview where he was asked the ridiculous Fizz Buzz question that some companies apparently use to weed out complete frauds from their interview process. The task is to write a program that prints the numbers from 1 to 100, except that all multiples of three are replaced with "fizz", all multiples of five are replaced with "buzz", and all multiples of both three and five are replaced with "fizzbuzz".

Joel somewhat comically decided to use the TensorFlow framework and solve it with machine learning. Go read his post; if you are a programming geek, it's pretty funny.

For the non-programmers in my audience (are there some?) this is a trivial program to write. If a programmer can't bust out a solution to this in a minute or two, they likely have no business calling themselves a programmer. Here's my 30-second solution (I type moderately slowly):

```c
#include <stdio.h>

int
main()
{
    int i ;

    for (i=1; i<=100; i++) {
        if (i % 15 == 0) {
            printf("fizzbuzz\n") ;
        } else if (i % 3 == 0) {
            printf("fizz\n") ;
        } else if (i % 5 == 0) {
            printf("buzz\n") ;
        } else {
            printf("%d\n", i) ;
        }
    }
    return 0 ;
}
```

No doubt there are some practitioners in the audience who think this is inefficient, because it computes multiple modulo operations per loop iteration. I'd argue that it is short and clear, and until profiling revealed it to be a performance problem, I wouldn't change it. If people insisted that I do a single modulo per loop iteration, I'd probably replace it with something like:

```c
#include <stdio.h>

int
main()
{
    int i ;

    for (i=1; i<=100; i++) {
        switch (i % 15) {
        case 3:
        case 6:
        case 9:
        case 12:
            printf("fizz\n") ;
            break ;
        case 5:
        case 10:
            printf("buzz\n") ;
            break ;
        case 0:
            printf("fizzbuzz\n") ;
            break ;
        default:
            printf("%d\n", i) ;
            break ;
        }
    }
    return 0 ;
}
```

I'd submit that it is much less clear, but not too bad. I'd probably add a comment to explain the higher level idea.

If we are allowed two modulo calls per iteration, we could do it this way:

```c
#include <stdio.h>
#include <stdbool.h>

int
main()
{
    int i ;

    for (i=1; i<=100; i++) {
        bool mul3 = (i % 3 == 0) ;
        bool mul5 = (i % 5 == 0) ;

        if (mul3 && mul5) {
            printf("fizzbuzz\n") ;
        } else if (mul3) {
            printf("fizz\n") ;
        } else if (mul5) {
            printf("buzz\n") ;
        } else {
            printf("%d\n", i) ;
        }
    }
    return 0 ;
}
```

I used boolean variables, which I think reads a little better, but you could obviously use integers.

What if you don't want any modulus calls at all? Well, you could use a state machine...

```c
#include <stdio.h>

int
main()
{
    int i = 1 ;
    int state = 1 ;

    while (i <= 100) {
        switch (state) {
        case 3:
        case 6:
        case 9:
        case 12:
            printf("fizz\n") ;
            break ;
        case 5:
        case 10:
            printf("buzz\n") ;
            break ;
        case 15:
            printf("fizzbuzz\n") ;
            state = 0 ;
            break ;
        default:
            printf("%d\n", i) ;
        }
        state ++ ;
        i++ ;
    }
    return 0 ;
}
```

I hope I never have an interview where this is the kind of question I get asked.

7 Aug 2012

## Realization: I’m a dinosaur…

I'm one of these...

About once a year, I get the urge to push my programming skills and knowledge in a new direction. Some years, this results in some new (usually small) raytracers. Sometimes, it results in a program that plays a moderately good game of checkers. Other years, it results in code that predicts the orbits of satellites.

For quite some time, I've been pondering actually trying to develop some modest skills in GPU programming. After all, the images produced by state-of-the-art GPU programs are arguably as good as the ones produced by my software-based raytracers, and are rendered hundreds or even thousands of times faster than the programs that I bat out in C.

But here's the thing: I'm a dinosaur.

I learned to program over 30 years ago. I've been programming in C on Unix machines for about 28 years. I am a master of old-school tools. I edit with vi. I write Makefiles. I can program in PostScript, and use it to draw diagrams. I write HTML and CSS by hand. I know how to use awk and sed. I've written compilers with yacc/lex, and later flex/bison.

I mostly hate IDEs and debuggers. I've heard (from people I respect) that Visual Studio/Xcode/Eclipse is awesome, that it lets you edit and refactor code, that it has all sorts of cool wizards to write the code that you don't want to write, and that it helps you remember the arguments to all those API functions you can't remember.

My dinosaur brain hates this stuff.

By my way of thinking, a wizard that writes the parts of the code you don't want to write is writing code that I probably don't really want to read either. If I can't keep enough of an API in my head to write my application, then either my application is just a bunch of API calls inarticulately jammed together, or the API is so convoluted and absurd that you can't discern its rhyme or reason.

An hour of doing this kind of programming is painful. A day of doing it gives me a headache. If I had to do it for a living, my dinosaur brain would make me lift my eyes toward the heavens and pray for the asteroid that would bring me sweet, sweet release.

Programming is supposed to be fun, not make one consider shuffling off one's mortal coil.

Okay, that's the abstract version of my angst. Here's the down-to-earth measure of my pain. I was surfing over at legendary demoscene programmer and fellow Pixarian Inigo Quilez's blog looking for inspiration. He has a bunch of super cool stuff, and it again inspired me to consider doing some more advanced GPU programming. In particular, I found his live coding tool very, very cool: he built an editor that lets him type in shading language code and execute it immediately. Here's an example YouTube vid to give you a hint:

I sat there thinking about how I might write such a thing. I didn't feel a great desire to write a text editor (I think I last did that around 1983), so my idea was simple: write a simple OpenGL program that drew a single quad on the screen, using a vertex/fragment shader that I could edit with good old fashioned vi. Whenever the OpenGL program noticed that the saved version of these shaders had been updated, it would reload/rebind the shader and execute it. It wouldn't be as fancy as Inigo's, but I figured I could get it going quickly.

While I have said I don't know much about GPU programming, that strictly speaking isn't true. I did some OpenGL work recently, using both GLSL and even CUDA for a project, so it's safe to say this isn't exactly my first rodeo. But this time, I thought that perhaps I should do it on my Windows box. After all, Windows probably still has the best support for 3D graphics (or so I thought), and it might be of more use. And besides, it would give me a bit broader skill base. Not a bad thing.

So, I downloaded Visual Studio 2010. And just like the Diplodocus of old, I began to feel the pain, slowly at first, as if some small proto-mammals were gnawing at my tail, but slowly growing into a deep throbbing in my head.

On my Mac or my Linux box, it was pretty straightforward to get OpenGL and GLUT up and running. Being the open source guy that I am, I had MacPorts installed on my MacBook, and a few judicious apt-get installs on Linux got all the libraries I needed. On the Mac, the paths I needed to know were mostly /opt/local/{include/lib} and on the Linux box, perhaps /usr/local/{include/lib}. The same six line Makefile would compile the code that I had written on either platform if I just changed those things.

But on Windows, you have this "helpful" IDE.

Mind you, it doesn't actually know where any of these packages you might want to use live. So you go out on the web, trying to find the distributions you need. When you do find them, they aren't in a nice, self-installing format: they're just naked zip files, usually with 32 bit and 64 bit versions, and without even a good old .bat file to copy them to the right place for you. On Mac OS X/Linux, I didn't even need to know whether I was running 64 bit or 32 bit: the package managers figured that out for me. On Windows, the system with the helpful IDE, I have to know that I need to copy the libs to a particular place (probably something like \Program Files(x86)\Microsoft SDKs\Windows\v7.0A\Lib), the include files to somewhere else, and maybe the DLLs to a third place, and if you put a 32 bit DLL where it expected a 64 bit one (or vice versa), you are screwed.

But even after dumping the files in those places, you still have to configure your project to add these libraries, by tunneling down through the Linker properties, under tab after tab. Oh, and these tabs where you enter library names like freeglut32.lib? They don't even bring up the file browser; not that you would really want to go grovelling around in those directories anyway, but at least there would be a certain logic to it.

And then, of course, you can start your Project. Go look up a tutorial on doing a basic OpenGL program in Visual C++, and they'll tell you to use the Windows Wizard to create an empty project: in other words, all that vaunted technology, and they won't even write the first line of code for you for your trouble.

After all this, I got to the point where I could hit F5 to build and run, and what happened? It refused to run my simple (and proven on other operating systems) code, with the message:

`Application was unable to start correctly (0xc000007b)`

You have got to be kidding me. When did we transport ourselves back to 1962 or so when a numerical error code might have been a reasonable choice? If your error code can't tell you a) what is wrong and b) how to fix it, it's absolutely useless. You might as well just crash the machine.

This was the start of hour three for me. And it was the end of Visual Studio. I must compliment Microsoft on their uninstaller: it worked perfectly, the first actual success of the day.

Undoubtedly some of you out there are going to proclaim that I'm stupid (probably true) or inexperienced with it (certainly true) and that if I just kept at it, all would be well. These are remarkably similar to the claims that I've heard from other religions, and I have no reason to believe that the "Faith of Visual Studio" will turn out any better than any of the other religions.

Perhaps before you post, you should just consider that I warned you that I was a dinosaur when I began, and perhaps I was right, at least about that.

Luckily, along with this project, I've been thinking of another direction in which I could develop some new skills, and it turns out that this one appears to be more in line with my dinosaur sensibilities: the Go Programming Language.

Go has some really cool ideas and technology inside, but at its core, it simply seems better to my dinosaur brain. The legendary Rob Pike does a great job explaining its strengths, and almost everything he says about Go resonates with me and addresses the subject of the long rant above. Here's one of his great talks from OSCON 2010:

I know it will be hard to get my mind wrapped around it, but I can't help but think it will be a more pleasant experience than the three hours I just spent.

I know the GPU thing isn't going to go away either. I just kind of wish it would.

6 Apr 2012

## My first crude DCPU-16 simulator…

A few days ago, I mentioned that @notch, the creator of Minecraft, had a new idea for a space game that he was beginning to work on. One of the features of this game is that your spaceship will be controlled by a simulated computer that you can program. He released a preliminary specification for the 16 bit machine, called the DCPU-16. I like writing things like this, so I spent a couple of hours and hacked the simple core of one together. It doesn't yet properly count cycles, but it does implement the basic instruction set, and it executes the sample program that he provided in the notes. As yet, he hasn't specified how things like I/O will work (I suspect some of the type 0 reserved opcodes will be used), so it's not of much use, but it might serve as a basis for others to explore.

Addendum: The example program implements a simple loop with the target :loop, but I suspect the test at the bottom of the loop should be an IFE rather than an IFN instruction; otherwise, the loop simply exits on the first iteration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <math.h>

/*
 * An implementation of Notch's DCPU-16 specification, version 1.1
 * Written by Mark VandeWettering.
 */

/* this has to be 16 bits, because we aren't going to mask increments
 * and decrements.
 */
typedef unsigned short Word ;
#define MEMSIZE (65536)
Word M[MEMSIZE] ;
Word R[8] = {0, 0, 0, 0, 0, 0, 0, 0} ;
Word PC = 0 ;
Word SP = 0 ;
Word O = 0 ;
Word W = 0 ;            /* scratch target for writes that go nowhere */
unsigned long cycles = 0L ;

#define OP(w)                   ((w)&(0xF))
#define FIELDA(w)               (((w)>>4)&(0x3F))
#define FIELDB(w)               (((w)>>10)&(0x3F))

char *Rn[] = { "A", "B", "C", "X", "Y", "Z", "I", "J" } ;

Word
DecodeSRC(Word v)
{
    Word EA ;

    if (v < 0x8) {
        return R[v] ;
    } else if (v < 0x10) {
        return M[R[v&0x7]] ;
    } else if (v < 0x18) {
        EA = M[PC++] ;
        EA += R[v&0x7] ;
        return M[EA] ;
    } else if (v < 0x20) {
        switch (v) {
        case 0x18:
            return M[SP++] ;
        case 0x19:
            return M[SP] ;
        case 0x1a:
            return M[--SP] ;
        case 0x1b:
            return SP ;
        case 0x1c:
            return PC ;
        case 0x1d:
            return O ;
        case 0x1e:
            return M[M[PC++]] ;
        case 0x1f:
            return M[PC++] ;
        default:
            abort() ;
        }
    } else if (v < 0x40) {
        return v - 0x20 ;
    }
    abort() ;
}

Word *
DecodeDST(Word v)
{
    Word EA ;

    if (v < 0x8) {
        return R+v ;
    } else if (v < 0x10) {
        return M+R[v&0x7] ;
    } else if (v < 0x18) {
        EA = M[PC++] ;
        EA += R[v&0x7] ;
        return M + EA ;
    } else if (v < 0x20) {
        switch (v) {
        case 0x18:
            return M+(SP++) ;
        case 0x19:
            return M+SP ;
        case 0x1a:
            return M+(--SP) ;
        case 0x1b:
            return &SP ;
        case 0x1c:
            return &PC ;
        case 0x1d:
            return &O ;
        case 0x1e:
            return M+M[PC++] ;
        case 0x1f:
            return &W ;
        default:
            abort() ;
        }
    } else if (v < 0x40) {
        return &W ;
    }
    abort() ;
}

int
SkipArg(Word arg)
{
    if (arg >= 0x10 && arg < 0x18)
        return 1 ;
    if (arg == 0x1e)
        return 1 ;
    if (arg == 0x1f)
        return 1 ;
    return 0 ;
}

void
Skip()
{
    Word I = M[PC++] ;
    Word A = FIELDA(I) ;
    Word B = FIELDB(I) ;
    PC += SkipArg(A) ;
    PC += SkipArg(B) ;
}

#define DEBUGPRINTF(x)

void
execute()
{
    Word I ;
    Word B ;
    Word * A ;
    uint32_t a, b ;

    I = M[PC] ;
    PC ++ ;

    if (OP(I) != 0)
        A = DecodeDST(FIELDA(I)) ;
    B = DecodeSRC(FIELDB(I)) ;

    switch (OP(I)) {
    case 0x0:
        switch (FIELDA(I)) {
        case 0x01:
            /* JSR */
            DEBUGPRINTF("... JSR\n") ;
            /* Push the PC on the stack */
            M[--SP] = PC ;
            PC = B ;
            break ;
        default:
            abort() ;
        }
        break ;
    case 0x1:
        /* SET */
        DEBUGPRINTF("... SET\n") ;
        *A = B ;
        break ;
    case 0x2:
        /* ADD */
        DEBUGPRINTF("... ADD\n") ;
        a = *A ;
        b = B ;
        *A = (a + b) ;
        O = (a + b) >> 16 ;
        break ;
    case 0x3:
        /* SUB */
        DEBUGPRINTF("... SUB\n") ;
        a = *A ;
        b = B ;
        *A = (a - b) ;
        O = (a - b) >> 16 ;
        break ;
    case 0x4:
        /* MUL */
        DEBUGPRINTF("... MUL\n") ;
        a = *A ;
        b = B ;
        *A = (a * b) ;
        O = ((a * b)>>16) ;
        break ;
    case 0x5:
        /* DIV */
        DEBUGPRINTF("... DIV\n") ;
        a = *A ;
        b = B ;
        if (b == 0) {
            *A = O = 0 ;
        } else {
            *A = a / b ;
            O = ((a<<16)/b) ;
        }
        break ;
    case 0x6:
        /* MOD */
        DEBUGPRINTF("... MOD\n") ;
        a = *A ;
        b = B ;
        if (b == 0)
            *A = 0 ;
        else
            *A = a % b ;
        break ;
    case 0x7:
        /* SHL */
        DEBUGPRINTF("... SHL\n") ;
        a = *A ;
        b = B ;
        *A = a << b ;
        O = (a << b) >> 16 ;
        break ;
    case 0x8:
        /* SHR */
        DEBUGPRINTF("... SHR\n") ;
        a = *A ;
        b = B ;
        *A = a >> b ;
        O = (a << 16) >> b ;
        break ;
    case 0x9:
        /* AND */
        DEBUGPRINTF("... AND\n") ;
        *A &= B ;
        break ;
    case 0xa:
        /* BOR */
        DEBUGPRINTF("... BOR\n") ;
        *A |= B ;
        break ;
    case 0xb:
        /* XOR */
        DEBUGPRINTF("... XOR\n") ;
        *A ^= B ;
        break ;
    case 0xc:
        /* IFE */
        DEBUGPRINTF("... IFE\n") ;
        if (*A == B) Skip() ;
        break ;
    case 0xd:
        /* IFN */
        DEBUGPRINTF("... IFN\n") ;
        if (*A != B) Skip() ;
        break ;
    case 0xe:
        /* IFG */
        DEBUGPRINTF("... IFG\n") ;
        if (*A > B) Skip() ;
        break ;
    case 0xf:
        /* IFB */
        DEBUGPRINTF("... IFB\n") ;
        if ((*A & B) != 0) Skip() ;
        break ;
    }
    DEBUGPRINTF("\n") ;
}

Word prog0[] = {
/* 0000 */ 0x7c01, 0x0030, 0x7de1, 0x1000, 0x0020, 0x7803, 0x1000, 0xc00d,
/* 0008 */ 0x7dc1, 0x001a, 0xa861, 0x7c01, 0x2000, 0x2161, 0x2000, 0x8463,
/* 0010 */ 0x806d, 0x7dc1, 0x000d, 0x9031, 0x7c10, 0x0018, 0x7dc1, 0x001a,
/* 0018 */ 0x9037, 0x61c1, 0x7dc1, 0x001a, 0x0000, 0x0000, 0x0000, 0x0000,
} ;

void
dumpregs()
{
    int i ;

    printf("... PC=0x%04X SP=0x%04X ", PC, SP) ;
    for (i=0; i<8; i++)
        printf("%s=0x%04X ", Rn[i], R[i]) ;
    printf("O=0x%04X\n", O) ;
}

int
main()
{
    int i ;

    printf("... DCPU-16 simulator.\n") ;
    printf("... version 1.0, written by Mark VandeWettering\n") ;

    for (i=0; i<sizeof(prog0)/sizeof(prog0[0]); i++) {
        M[i] = prog0[i] ;
    }
    printf("done.\n\n") ;

    for (i=0; i<20; i++) {
        execute() ;
    }
    return 0 ;
}
```
23 Dec 2011

## Arduino Basic

Edsger Dijkstra, Dutch computer scientist and winner of the 1972 Turing Award wrote:

> It is practically impossible to teach good programming to students that have had a prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.

While I have respect for his great contributions to the field, in my opinion this claim goes a bit far afield. In particular, nearly everyone who learned about computers in my generation seemed to come to them through BASIC. BASIC was the lingua franca of the microcomputer revolution, and while I now see it as rather quaint and ridiculous, it provided a step into computing for many people who found that they could indeed regenerate their brain cells in spite of their early exposure.

But does BASIC have any place in the modern world? Well, I must admit, the answer might be "maybe". Little microcontrollers like the BASIC Stamp have been overshadowed a bit by the Arduino, but I still think that having an interactive interpreter that you can use to access a small computer makes some sense. Having a small language like Tiny BASIC, augmented by a few small additions to (say) allow you to generate sounds or control I/O pins makes a great deal of sense. Thus, without further justification, take a look at this project:

Arduino Basic - User's Wiki!

The code looks pretty nice and tidy, and should be easy to extend. It allows programs of up to 1.4K to be loaded into RAM, which sounds like a trivial amount, but it's certainly enough to blink some LEDs, read and write some serial data, and generally exercise some of the capabilities of the chip. Neat project!

2 Aug 2011

## Decoding a quadrature encoder just isn’t that difficult

I was shown some truly horrifying code that purported to decode a quadrature shaft encoder. It was mind-bogglingly stupefying that someone would go to that much work to write something so utterly horrible. Here's the way that I think of them.

A quadrature shaft encoder looks like a 2-bit Gray code counter. Instead of counting up in binary in the usual way, it goes up changing exactly one bit at a time. So, what are the four possibilities?

```
00
01
11
10
...repeats to 00
```

Let's imagine that you had a previous reading for the shaft encoder, and now you notice that some new value can be read. The new value is another two bits, which makes 16 possibilities in total. Four of the 16 combinations indicate a clockwise rotation, and four indicate a counterclockwise rotation. The rest are "impossible" states, or states where the input didn't change. You can just write out the table.

```
OLD NEW CW  CCW
00  00  0   0
00  01  1   0
00  11  0   0
00  10  0   1
01  00  0   1
01  01  0   0
01  11  1   0
01  10  0   0
11  00  0   0
11  01  0   1
11  11  0   0
11  10  1   0
10  00  1   0
10  01  0   0
10  11  0   1
10  10  0   0
```

You can convert this into code any number of ways: switch statements, table lookup, or you could minimize the logical operations using a Karnaugh map... whatever floats your boat. I'd probably just go with a switch statement, something that looks like this (assume o and n are integers holding the old and new 2-bit readings):

```c
int
cw(int o, int n)
{
    switch (o) {
    case 0:
        return n == 1 ;
    case 1:
        return n == 3 ;
    case 3:
        return n == 2 ;
    case 2:
        return n == 0 ;
    }
    return 0 ;
}
```

And similarly for ccw.

It ain't rocket science.

Addendum: It's pretty easy to create the delta too (-1 if ccw, +1 if cw, else 0)

```c
int
delta(int o, int n)
{
    switch (o) {
    case 0:
        return (n == 1) - (n == 2) ;
    case 1:
        return (n == 3) - (n == 0) ;
    case 3:
        return (n == 2) - (n == 1) ;
    case 2:
        return (n == 0) - (n == 3) ;
    }
    return 0 ;
}
```
11 Jul 2011

## Shouldn’t we use programming languages with fewer bad parts?

I was reading that Stanford has begun teaching their introductory computer science course CS101 in Javascript. Despite a lot of the propaganda surrounding the web and a pretty good book on the good parts of Javascript, I can't help but think that Javascript has some really tragic flaws. Dijkstra famously referred to PL/1 as "the fatal disease, belonging more in the problem set than in the solution set", and I kind of feel the same way about Javascript. Yes, it is in every browser now, so it is perhaps the most widely deployed programming language you can find, but it has quite a few absolutely horrific features. The `==` operator has some astoundingly obtuse behavior. Automatic semicolon insertion is simply idiotic. Crockford documents several more features he finds questionable, and I think most of them are questionable too, even though they mirror features in other common languages.

Crockford advises that we use tools which check our use of these questionable features, but wouldn't it simply be better if the language didn't contain these features in the first place? We know that some features are bad, so why are they retained in the language? Are we forever to be saddled with error prone and silly features?
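For the curious, here are a few of the classic `==` coercion surprises (plain JavaScript, runnable in any browser console or node):

```javascript
// The == operator applies type coercion with rules that are hard to predict.
console.log(0 == "");           // true   ("" coerces to the number 0)
console.log(0 == "0");          // true   ("0" coerces to the number 0)
console.log("" == "0");         // false  (so == is not even transitive)
console.log(null == undefined); // true
console.log(null == 0);         // false  (null only loosely equals undefined)
```

The === operator, which performs no coercion, behaves the way you'd expect, which is why Crockford's advice is to use it exclusively.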

26 Apr 2011

## Book Review: Land of LISP, by Conrad Barski

I learned to program as a teenager back in the 1980s, starting as most of a generation of future computer professionals did. I had an Atari 400 at home, and learned to program using the most popular language of the day: BASIC. There were lots of magazines like COMPUTE! and Creative Computing that included program listings for simple games that you could type in and experiment with. The interaction was very hands-on and immediate.

Sometimes I feel like we've lost this kind of "hobbyist programming". Because programming can be lucrative, we often concentrate on the saleability of our skills, rather than the fun of them. And while the computers are more powerful, they also are more complicated. Sometimes it feels like we've lost ground: that to actually be a hobbyist requires that you understand too much, and work too hard.

That's a weird way to introduce a book review, but it's that back story that compelled me to buy Conrad Barski's Land of Lisp. The book is subtitled Learn to Program in LISP, One Game at a Time!, and it's delightful. The book is chock-full of comics and cartoons, humorously illustrated and fun. But beneath the fun exterior, Barski is an avid LISP enthusiast with a mission: to convince you that using LISP will change the way you program and even the way you think about programming.

I've heard this kind of fanatical enthusiasm before. It's not hard to find evangelists for nearly every programming language, but in my experience, the evangelists for the rarer or more esoteric languages simply can't convince you that the language is worth the hassle of learning. For instance, I found that none of the "real world" examples in Real World Haskell were the kind of programs that I would write, or they were programs that I already knew how to write in other languages, so Haskell's power was simply not evident. We probably know how to write FORTRAN programs in nearly any language you like.

But I think that Land of Lisp succeeds where other books of this sort fail. It serves as an introductory tutorial to the Common LISP language by creating a series of retro-sounding games like the text adventure game "Wizard's Adventure" and a web based game called "Dice of Doom". But these games are actually not just warmed over rehashes of the kind of games that we experimented with 30 years ago (and grew bored of 29 years ago): they demonstrate interesting and powerful techniques that pull them significantly above those primitive games. You'll learn about macros and higher-order programming. You'll make a simple webserver. You'll learn about domain-specific languages, and how they can be used to generate and parse XML and SVG.

In short, you'll do some of the things that programmers of this decade want to do.

I find the book to be humorous and fun, but it doesn't pander or treat you like an idiot. While it isn't as strong academically as The Structure and Interpretation of Computer Programs (which is also excellent, read it!) it is a lot more fun, and at least touches on many of the same topics. I am not a huge fan of Common LISP (I prefer Scheme, which I find to be easier to understand), but Common LISP is a reasonable language, and does have good implementations for a wide variety of platforms. Much of what you learn about Common LISP can be transferred to other LISP variants like Scheme or Clojure.

But really what I like about this book is just the sense of fun that it brings back to programming. Programming after all should be fun. The examples are whimsical but not trivial, and can teach you interesting things about programming.

At the risk of scaring you off, I'll provide the following link to their music video, which gives you a hint of what you are getting yourself into if you buy this book:

I'd rate this book 4/5. Worth having on your shelf!

Land of LISP, by Conrad Barski

Addendum: Reader "Angry Bob" has suggested that the language itself should be spelled not as an acronym, but as a word, "Lisp". It certainly seems rather common to do so, but is it wrong to spell it all in caps? LISP actually is derived from LISt Processor (or alternatively Lots of Insufferable Superfluous Parentheses), just as FORTRAN is derived from FORmula TRANslator, so capitalizing it does seem reasonable to me. Wikipedia lists both. I suppose I should bow to convention and use the more conventional spelling "Lisp", if for no other reason than that Guy Steele's Common Lisp the Language (available at the link in HTML and PDF format, a useful reference) spells it that way. Or maybe I'll just continue my backward ways...

17 Oct 2010

## How a geek tells his wife he loves her: an exercise in Python programming

I'm away from my better half this weekend, visiting my Mom and brother. I scheduled this a few weeks ago, but shortly afterward Carmen was granted an entry into the Nike Women's Half Marathon on the same weekend. Rescheduling the visit with Mom would have been hard, so I decided to go anyway, and missed the opportunity to cheer her on.

But I'm a geek. On Saturday night, I hit upon a brilliant, romantic idea (or what passes for one when you are a geek). I knew I was gonna be busy during her run, so I decided to write a script that would text her every half hour with an encouraging message. That way, even if I got busy, she'd know I was thinking about her.

It probably would have helped had I started before 10:00pm on Saturday.

First of all, it's pretty straightforward to send texts to someone using email. If you send an email to phonenumber@txt.att.net, the contents will get routed to the person. And using Python's smtplib, sending email is pretty easy. Here's a function that worked for me:

```python
import smtplib

# The snippet as originally posted was truncated before the actual login
# and send calls; USER, PASSWORD, and MESSAGES below are placeholders
# showing the general shape.
def sendit(which):
    server = smtplib.SMTP("smtp.gmail.com", 587)
    server.ehlo()
    server.starttls()
    server.ehlo()
    server.login(USER, PASSWORD)
    server.sendmail(USER, "phonenumber@txt.att.net", MESSAGES[which])
    server.close()
```

I decided to use my gmail account, but you can use whatever SMTP mail server you like.

So, now that I had a function to text her, how to handle the scheduling? It turns out that Python has an interesting library called `sched` which can be used to implement a scheduler. Here are the interesting bits:

```python
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

def timefor(t):
    tp = time.strptime(t, "%a %b %d %H:%M:%S %Z %Y")
    return time.mktime(tp)

scheduler.enterabs(timefor("Sun Oct 17 04:00:00 PDT 2010"), 1, sendit, (0,))
scheduler.enterabs(timefor("Sun Oct 17 04:30:00 PDT 2010"), 1, sendit, (1,))
scheduler.enterabs(timefor("Sun Oct 17 05:00:00 PDT 2010"), 1, sendit, (2,))
scheduler.enterabs(timefor("Sun Oct 17 05:30:00 PDT 2010"), 1, sendit, (3,))
scheduler.enterabs(timefor("Sun Oct 17 06:00:00 PDT 2010"), 1, sendit, (4,))
scheduler.enterabs(timefor("Sun Oct 17 06:30:00 PDT 2010"), 1, sendit, (5,))
...
scheduler.run()
```

So, what does this do? It creates a scheduler object which operates in real time (the time.time and time.sleep arguments tell it how to read the clock and how to wait). It then uses the utility function `timefor` to figure out the absolute time when I wanted each event to occur; when that time arrives, it calls `sendit`, passing it the args in parens (which in my case is a message number in a table of nice things to send her, which I've omitted since they are too precious for words). When the `run()` method is called, the scheduler waits until the appropriate times and then makes the calls.
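The same machinery can be exercised with relative delays via enter() instead of enterabs(). Here's a tiny self-contained sketch, where sendit is a stand-in that just records which message fired:

```python
import sched
import time

scheduler = sched.scheduler(time.time, time.sleep)

sent = []
def sendit(which):
    # Stand-in for the real mailer: record which message fired.
    sent.append(which)

# enter() takes a delay relative to now, rather than an absolute time.
for n in range(3):
    scheduler.enter(0.01 * n, 1, sendit, (n,))

scheduler.run()    # blocks until all scheduled events have fired
print(sent)        # -> [0, 1, 2]
```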

I had it set to start sending messages at 4:00AM, since that is when she was going to wake up. Sadly, I screwed up the code slightly, and she didn't get her first message until 7:30 or so. But she was successfully encouraged every 30 minutes after that.

I'm very proud of her and her half-marathon performance. Neither of us are natural athletes, but she's a constant inspiration to me, and makes me want to be stronger and better.

It's too bad I'm such a geek.

6 Oct 2010

## On random numbers…

While hacking a small program today, I encountered something that I hadn't seen in a while, so I thought I'd blog it:

My random number generator failed me!

I was implementing a little test program to generate some random terrain. The idea was pretty simple: initialize a square array to all zero heights. Set your position to the middle of the array, then iterate by dropping a block on the current square and moving to a randomly chosen neighbor (if you wander off the edge, I wrapped around to the other side). Keep doing this until you've placed as many blocks as you like, and you are mostly done (well, you can do some post processing/erosion/filtering).

When I ran my program, it generated some nice looking islands, but as I kept iterating more and more, it kept making the existing peaks higher and higher, and never wandered away from where it began, leaving many gaps with no blocks at all. This isn't supposed to happen in random walks: in the limit, the random walk should visit each square on the infinite grid (!) infinitely often (at least for grids of dimension two or less).

A moment's clear thought suggested what the problem was. I was picking one of the four directions to go in the least complicated way imaginable:

```c
dir = lrand48() & 3 ;
```

In other words, I extracted just the lowest two bits of the lrand48() call and used them as an index into the four directions. But it dawned on me that the low-order bits of lrand48() aren't really all that random. It's not really hard to see why in retrospect: lrand48() and its relatives use a linear congruential generator, and such generators have notoriously bad randomness in their low bits. Had I used the higher bits, I probably would never have noticed, but instead I just switched to the Mersenne Twister code that I had lying around. And it works much better: the blocks nicely stack up over the entire 512x512 array into pleasing piles.

Here's one of my new test scenes:

Much better.

22 Sep 2010

## Direct use of the PostScript language

As I have mentioned before, I sometimes find it convenient to write raw PostScript. I've used it to generate business cards, to make templates for laying out parts for radios and telescopes, and to generate score cards and labels. Today, I had an idea for creating a large poster to hang in my office. It turns out that Costco will print 20"x30" posters for \$8.99, which is actually very cheap (almost as cheap as simple throw-away posters). So I found myself once again coding things up in raw PostScript, and generating 300dpi output from it using GhostScript. But unlike previous projects, I also wanted to include a couple of paragraphs of formatted text. I remembered reading about the Tinydict system promoted by the Capella Archive, whose author writes whole books in raw PostScript, which isn't actually as painful as it sounds. Digging around, I found that there are a couple of resources thoughtfully archived on anastigmatix.net:

27Jul/10

## 10,000 Monkeys Typing…with a Unix/sh challenge…

I was testing some code that I wrote for analyzing cryptograms, and decided that the easiest way to do so would be to get some random text drawn from the letters A-Z. A moment's thought yielded this method, without even programming anything:

```tr -d -c A-Z < /dev/urandom | dd ibs=10000 count=1
```

The tr generates the required data, and the dd truncates it to the desired number of characters. But for tidiness, I'd like the output broken up so that each line consists of 50 characters, with spaces inserted between every 5 characters (I won't begrudge you a dangling space at the end of each line). I couldn't figure out a simple way to make that happen in one command line using standard utilities. I can of course write a little Python utility, or even use perl, but can anyone think of a clever short way to do this?
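For reference, the reformatting step itself is only a few lines of Python. Here's a sketch (the function name is my own) that groups text into 50-character lines with a space between every 5 characters:

```python
def format_blocks(text, group=5, per_line=50):
    """Break text into lines of per_line characters, with a space
    inserted after every group characters within each line."""
    out = []
    for i in range(0, len(text), per_line):
        line = text[i:i + per_line]
        out.append(" ".join(line[j:j + group]
                            for j in range(0, len(line), group)))
    return "\n".join(out)

print(format_blocks("THEQUICKBROWNFOXJUMPSOVERTHELAZYDOGS",
                    group=5, per_line=10))
```

Feeding it the output of the tr/dd pipeline (via sys.stdin.read()) gives the tidy five-letter groups traditional for cryptograms.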

Addendum: Tom pointed out something interesting about the command I listed above: it doesn't work the way I thought it did. The ibs is the input block size, which dd dutifully allocates, and the count is the number of read system calls that dd issues. For reasons which escape me, dd does not try to make sure that it actually received a full input block: it will happily return short blocks if it gets them, and doesn't bother retrying to read more. Hence, it works rather erratically when reading from a pipe, where the writes from the upstream process may be flushed at odd intervals. (GNU dd's iflag=fullblock option exists to address exactly this.)

21Apr/10

## Q: Are there any good, open user interface options?

Yesterday I was in our Atrium, and Craig had his iPad with him. I got into a discussion with him and Loren about why I thought the device was very cool. (I also told them why I had been actively discouraged from becoming an iPhone developer earlier, but that's a story for a different time.) But I brought up a point which I've been kind of mulling over ever since: the iPad (and iPhone) may be the first not-completely-user-hostile-interface ever delivered in a consumer electronic device.

Consider your TV set and its remote as a counterexample. On virtually every television ever made, there are two ways to select what channel you wish to watch: by going up and down, or by entering a channel number. There is no more reason to select channels this way than there would be for you to access your online photos by number, or by paging through them one at a time. And yet, for decades, no TV manufacturer has sought to upgrade this very simple, basic interaction that you have with a television. Is there really any reason at all for you to use channel numbers for anything? They add ten buttons to your television remote. Wouldn't it be nice to reclaim that space, and use it to make the buttons you actually do use larger, and differentiated so you could actually use them in the dark? And how about those buttons to scroll up and down lists? Wouldn't you just like to point at what you want? The Nintendo Wii uses a nice IR sensor system to select items: imagine if that technology were merged with your remote, along with accelerometers and the like. If Nintendo can profitably make these as options in a \$250 game system, you'd think that that kind of technology could be used in other consumer electronics.

And don't even get me started about devices like Blu Ray players. Why do I need a different remote? And a completely different system of conventions and menu selections. Bleh.

Okay, I'm straying a bit. The iPad has already demoed some really, really nifty applications with interesting, intuitive interfaces. But while I have defended the iPhone/iPad as a consumer device, I am not really all that enthusiastic about doing my own programming and experimentation on such a device. So, I thought I might throw out this question: imagine that you were going to prototype improved interfaces for media devices. Are there any open source options that are worth considering, or are they all terrible? I'm considering a platform like the Acer Revo, hooked to a large screen TV using HDMI, and possibly some wireless bluetooth devices (or maybe just the Wii remote). Anyone have any experience/success with this kind of interactive UI programming?

3Aug/09

## Passing the Amateur Extra test by guessing…

The Amateur Extra test is 50 questions, multiple choice, with 4 answers per question. A passing grade is 35 or more. A few minutes of programming this morning, even before I had any coffee, yielded the exact probability of passing:

```      4677523340461106447
------------------------------
158456325028528675187087900672
```

or about 1 in 33.9 billion.

This wasn't that interesting a question, but to solve it I hacked up a quick but limited implementation of rational arithmetic in Python. I wondered whether there was a better way to implement this in Python so that operator overloading would "just work", but I didn't know how, and the problem was simple enough that I didn't pursue it. Here's my solution.

```#!/usr/bin/env python

def gcd(a, b):
    if a < b:
        a, b = b, a
    while b != 0:
        a, b = b, a % b
    return a

class Rational:
    def __init__(self, a, b):
        self.a = a
        self.b = b
    def __str__(self):
        return "[%d / %d]" % (self.a, self.b)
    def pow(self, p):
        return Rational(pow(self.a, p), pow(self.b, p))
    def mult(self, r):
        tmpa = self.a * r.a
        tmpb = self.b * r.b
        d = gcd(tmpa, tmpb)
        return Rational(tmpa//d, tmpb//d)
    def imult(self, i):
        tmpa = self.a * i
        tmpb = self.b
        d = gcd(tmpa, tmpb)
        return Rational(tmpa//d, tmpb//d)
    def add(self, r):
        tmpa = self.a * r.b + r.a * self.b
        tmpb = self.b * r.b
        d = gcd(tmpa, tmpb)
        return Rational(tmpa//d, tmpb//d)

p = Rational(1, 4)
q = Rational(3, 4)

def fact(n):
    c = 1
    while n > 1:
        c = c * n
        n = n - 1
    return c

def comb(a, b):
    return fact(a) // (fact(b) * fact(a - b))

total = Rational(0, 1)

for t in range(35, 51):
    x = p.pow(t).mult(q.pow(50-t)).imult(comb(50, t))
    total = total.add(x)

print("Exact probability is", total)
print("Only about 1 in", total.b // total.a, "will pass")
```
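As an aside, the Python standard library can now do this exact arithmetic directly: fractions.Fraction reduces automatically and overloads the arithmetic operators, and math.comb (Python 3.8+) replaces the hand-rolled factorials. A sketch, not the original code:

```python
from fractions import Fraction
from math import comb

# Probability of answering >= 35 of 50 four-choice questions correctly by
# pure guessing: sum the binomial terms C(50,t) * (1/4)^t * (3/4)^(50-t)
# exactly, using the standard-library rational type.
p_pass = sum(Fraction(comb(50, t) * 3 ** (50 - t), 4 ** 50)
             for t in range(35, 51))
print("Exact probability is", p_pass)
print("Only about 1 in", p_pass.denominator // p_pass.numerator, "will pass")
```

Fraction keeps the result fully reduced, so the printed numerator and denominator come out in lowest terms automatically.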
10Jun/09

## The Next 700 Programming Languages

I learned that computer scientist Peter Landin passed away recently. Landin's research helped refine the direction of my college studies, and was always a great pleasure to read. His derivation and explanation of the SECD machine served as the basis for a more mature and clear understanding of many aspects of programming languages and their evaluation. And, of course, there was his classic paper "The Next 700 Programming Languages", which provides an interesting perspective on programming languages: where they have been, where they are going, and what is significant about their design. Great stuff!

The Next 700 Programming Languages

Addendum: Here's my attempt to embed a link to the paper.

15Oct/07