Why are tiny systems so big?

August 31, 2014 | Emulation, My Projects | By: Mark VandeWettering

The last five or so years have been a remarkable period in computing. About five years ago, I began to fear that computing would become increasingly pre-packaged: that laptops and tablets would totally take over the market, and that it would become more and more difficult to find computers well suited for programming and experimentation.

But something remarkable happened that I didn’t see coming: the opposite occurred. The market for small development boards and computers exploded. At many different performance levels and very inexpensive price points, computers for experimentation flourished, and people began programming in ways they hadn’t before. We have seen the rise of platforms like the Arduino, the Raspberry Pi, and the BeagleBone Black, all at remarkably low prices. It is truly an awesome time for computer experimentation.

But aesthetically there is something that jars me a bit: these small, simple systems aren’t really that small or simple. Consider the Arduino Uno: it is a small 8-bit computer with only 32K of flash memory. But the development environment for the Arduino isn’t self-hosted: you need a separate cross-compiling host, and the software is dozens of megabytes in size. In the 1980s, we had systems of comparable overall power (based upon processors like the 6502 or Z80), but those machines typically self-hosted interpreters (most commonly for BASIC) that allowed development to proceed without an additional cross-compiling development system. While these systems lacked some of the power of modern development environments, they were also simpler and easier to master.

Systems like the Raspberry Pi are at least self-hosted. I really like that feature: you load up an SD card with a system like Raspbian or Ubuntu, and you have a complete Unix system. But I can’t help but wonder if this is a bit too daunting for the hobbyist without three decades of Unix experience.

I guess what I find interesting is the idea of a smaller, closer-to-the-bare-metal environment for embedded programming that can be self-hosted: one that runs on the target hardware, with only the thinnest layer of operating system underneath.

Okay, so that’s the idea. What options are there?

One of the most interesting things I’ve begun looking at is Fabrice Bellard’s TCC (the Tiny C Compiler). Having a C compiler built into your embedded machine may seem a bit odd, but Bellard’s compiler is relatively tiny and can generate code for either Intel or ARM. Experimenting with a few of my own programs shows it to be remarkably capable: it compiled both my toy raytracer and the dump1090 software defined radio program. The resulting code is obviously not super efficient: my raytracer runs at about half the speed of the same code compiled with gcc. But it does work, and the compiler is fast and small enough to self-host. Pretty interesting.
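
To give a flavor of what self-hosting with tcc feels like, here is a trivial sketch (the file name hello.c is just for illustration): tcc’s -run flag compiles and executes a C source file in one step, entirely on the target machine, which makes C feel almost like a scripting language.

```c
/* hello.c -- run directly with: tcc -run hello.c
   No separate compile/link/load cycle: tcc translates and
   executes the program in one shot. */
#include <stdio.h>

int main(void)
{
    printf("hello from a self-hosted C compiler\n");
    return 0;
}
```

tcc will even honor a #! first line pointing at “tcc -run”, so on a Unix-like system a C source file can be marked executable and invoked like a shell script.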

What hardware should we target? It seems like we can get a lot of leverage from ARM-based boards, and adopting popular, easily available platforms would make it easier for people to get started. In most respects, it’s hard not to pick the Raspberry Pi: it’s popular, it’s available, and a fair amount of documentation on “bare metal” programming for it already exists. It also seems that we can use emulators like QEMU to help bootstrap and debug.
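
To make “bare metal” concrete, here’s a minimal sketch of the sort of code that runs with no operating system at all, assuming an original Raspberry Pi Model B (BCM2835), where the GPIO block sits at physical address 0x20200000 and the board’s OK/ACT LED is wired (active low) to GPIO 16. Register offsets follow the BCM2835 peripherals manual.

```c
/* Minimal bare-metal sketch for the original Raspberry Pi Model B:
   light the board's OK/ACT LED on GPIO 16.  Assumes no MMU, so the
   GPIO block appears at its physical address of 0x20200000. */

#define GPIO_BASE 0x20200000u
#define GPFSEL1 (*(volatile unsigned int *)(GPIO_BASE + 0x04))
#define GPCLR0  (*(volatile unsigned int *)(GPIO_BASE + 0x28))

void kernel_main(void)
{
    /* Select "output" (001) in the three function-select bits for GPIO 16. */
    GPFSEL1 = (GPFSEL1 & ~(7u << 18)) | (1u << 18);

    /* The LED is active low: clearing the pin turns it on. */
    GPCLR0 = 1u << 16;

    for (;;)
        ;  /* spin forever; there is no OS to return to */
}
```

Even getting this far requires a little startup assembly and a linker script to place the code where the GPU’s bootloader expects it, which is exactly the sort of detail a thin self-hosted environment would need to paper over.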

Do we need an operating system? If so, how much of one? It’s kind of an open question. I’d like to see something whose size is maybe a few thousand lines of code. Minix? Xinu? A simple real-time OS/executive, maybe?
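
Just to make that scale concrete, here’s the flavor of thing I have in mind: a toy cooperative, run-to-completion round-robin executive in a couple dozen lines of C. All the names here (task_spawn, scheduler_run, MAX_TASKS) are invented purely for illustration.

```c
/* Toy cooperative scheduler: each task is a plain C function that
   does a little work and returns promptly.  The loop in
   scheduler_run() is, in effect, the entire operating system. */

#define MAX_TASKS 8

typedef void (*task_fn)(void);

static task_fn tasks[MAX_TASKS];
static int ntasks;

int task_spawn(task_fn fn)
{
    if (ntasks >= MAX_TASKS)
        return -1;            /* task table is full */
    tasks[ntasks++] = fn;
    return 0;
}

void scheduler_run(void)
{
    for (;;) {
        for (int i = 0; i < ntasks; i++)
            tasks[i]();       /* each task runs briefly, then yields by returning */
    }
}
```

A real executive would need timers, interrupts, and some notion of blocking, but the point is that the core of such a system can stay small enough to read in an afternoon.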