Happy Birthday Charles Darwin!

Charles Darwin
On February 12, 1809, Charles Darwin was born. I think a strong argument could be made that Darwin is the most influential scientist of all time. He postulated that the complex biosphere we observe is the result of understandable physical processes, processes we can study at work throughout the long history of life on our planet. His contributions formed the foundation of all of biology, and his keen insights gave us a framework for understanding the nature of life around us.

“There is grandeur in this view of life, with its several powers, having been originally breathed into a few forms or into one; and that, whilst this planet has gone cycling on according to the fixed law of gravity, from so simple a beginning endless forms most beautiful and most wonderful have been, and are being, evolved.”

Origin of Species, 1859

Oh, and some American president
was born on the same day, but he really is of minor importance compared to Darwin.

It’s important to have an opinion…

I’ve recently come to the conclusion that most people believe that it is more important to have an immediate and distinct opinion rather than an actual informed opinion. This was brought home to me in the aftermath of the Columbia disaster, when various newsgroups like
sci.space.shuttle were flooded with premature speculations and cries for the heads of NASA management, long before any sober consideration of the facts was possible.

Of course if you are looking for this kind of thing, there is no better place than Slashdot. Case in point: the recent article Slashdot | Pixar Eclipses Sun with Linux/Intel.
The resulting flood of messages runs the gamut, but many of the articles which were moderated up held little information, or even actual misinformation.

SuperDug comments:

I was under the impression…

That Sun had tried renderman (or whatever they call it) to run on 32 bit processors and it was a horrible disaster. Something about how it seemed more feasible and cost efficient to use Sun until the days in which the competiting 64 bit processors became cheaper.

I could have sworn that the software couldn’t run at all in 64 bit. I’m just wondering if they didn’t take a step down when they converted 64-bit optimized code to run on regular high cache 32-bit pentiums.

First of all, Sun doesn’t own or develop RenderMan; Pixar Animation Studios does. Until a few
years back, Sun platforms made up the bulk of RenderMan sales. The availability of cheap Intel
boxes and the maturing of Linux as an inexpensive operating system have since made Linux/Intel
the platform that a large number of RenderMan customers choose.

RenderMan is a very portable system whose history goes back at least 15 years. It has been ported to 64-bit machines, including the DEC Alpha and the SPARC.

TheRaven64 opines:

I’m actually a little surprised they use general purpose CPUs for this kind of task. I’d have thought that a load of custom DSPs might be faster, and probably cheaper – How about 1 DSP per pixel (About 10 million?). I’m sure that would really zip along, if they could sort out the memory access issues inherent in this kind of application. Ray tracing is perfect for parallel execution, since each pixel really is independent of each other pixel, and each frame is likewise independent.

This is a particularly silly opinion, because it sounds plausible unless you actually know something about the scale and scope of producing a movie. Pixar used to manufacture special-purpose computer graphics hardware. They lost money, and they stopped. In its place they created a new portable rendering system, and have since ridden 15 years of Moore’s law to make it roughly 1000 times faster (ten doublings at eighteen months apiece is a factor of 2^10 ≈ 1024), merely by having a mid-range software guy spend a few weeks porting to each new architecture. If the commenter can describe how Pixar could build custom hardware, with releases every 18 months that double in speed, more cheaply than that, then I’d like to hear from him.

As for the old parallel raytracing comment: people who make it have never tried to write rendering software, particularly software that can handle the large scenes which are typical of movie production. In principle such an application is trivially parallelizable, but the reality has lots of details which are hard to manage, as the sketch below hints.
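
To make the “trivially parallelizable in principle” point concrete, here is a toy sketch in Python (it has nothing to do with Pixar’s actual pipeline): scanlines can be rendered by independent processes, because no pixel depends on any other. What the toy conveniently hides is the hard part, namely that every worker needs access to the scene description, which for film work can run to gigabytes.

    # Toy illustration of "trivially parallel" rendering: each pixel is
    # independent, so a process pool can split the image by scanline.
    from multiprocessing import Pool

    WIDTH, HEIGHT = 64, 48

    def shade(x, y):
        # Stand-in for a real ray tracer: trace a ray through pixel (x, y)
        # against the scene and return a grey value.
        return (x * y) % 256

    def render_scanline(y):
        return [shade(x, y) for x in range(WIDTH)]

    if __name__ == "__main__":
        with Pool() as pool:
            image = pool.map(render_scanline, range(HEIGHT))
        print(len(image), "scanlines rendered")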

nntp//rss

I like to peruse sweetcode, a very nice website that contains pointers to interesting but perhaps not very well known software projects. During a recent browse, I noticed a link to a project called nntp//rss. Interestingly enough, it provided a gateway that allowed you to access RSS feeds via NNTP from your favorite newsreader. For a lark, I decided to give it a download.

On this website, I use Movable Type, which allows the
publication of RSS feeds and also has a plugin that lets you merge other RSS feeds into your homepage. I use it to merge the feeds I read most (Slashdot, freshmeat, CNN, and Reuters Science) on my homepage, which keeps me from having to use a separate portal site. I’ve been toying with
expanding my knowledge and use of RSS feeds, as they seem an interesting way to combine information from many websites.
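
Pulling headlines out of a feed doesn’t take much code, as this minimal Python sketch suggests. The feed URL is just a placeholder, and real feeds come in several dialects (RSS 0.9x, the RDF-based 1.0, 2.0), some of which use XML namespaces, so anything serious should use a more tolerant parser.

    # Minimal sketch: fetch an RSS 2.0 feed and print the item titles,
    # using only the Python standard library.
    import urllib.request
    import xml.etree.ElementTree as ET

    def fetch_titles(url):
        with urllib.request.urlopen(url) as response:
            tree = ET.parse(response)
        # RSS 2.0 nests <item> elements under <channel>.
        return [item.findtext("title") for item in tree.iter("item")]

    for title in fetch_titles("https://example.com/index.rss"):
        print(title)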

Originally, I didn’t think that nntp//rss would be all that useful. After all, I nearly always have a web browser open, and therefore I could always get my webfeeds by merely accessing this site. But I found out that I was wrong.

nntp//rss is nice because it capitalizes on a fact I hadn’t considered: I have my mailbox (via Netscape or Mozilla) open even more often than I have a web browser up. I can now
see if there are any new postings on Slashdot or freshmeat merely by glancing at the overview.
I find that to be rather cool!

I sent a brief thank you to the author, Jason Brome, and made a suggestion which seemed obvious: add the ability to post to the weblog via the normal NNTP interface. This could be done with an XML-RPC<->NNTP gateway using the Blogger API. It would have at least one
major advantage over the normal way of posting: I could use my normal NNTP client (slrn, which
uses my normal Unix editor) to enter new messages, and not have to struggle with typing and
editing inside of HTML text boxes.
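
For what it’s worth, the posting half of such a gateway is very little code. Here is a rough Python sketch of the single Blogger API call involved; the endpoint, blog id, and credentials are placeholders (Movable Type exposes this API through its mt-xmlrpc.cgi script).

    # Sketch of posting to a weblog via the Blogger XML-RPC API.
    import xmlrpc.client

    server = xmlrpc.client.ServerProxy("https://example.org/mt/mt-xmlrpc.cgi")
    post_id = server.blogger.newPost(
        "0123456789ABCDEF",  # app key (Movable Type ignores it)
        "1",                 # blog id
        "username",
        "password",
        "Body of the post, perhaps lifted from an incoming NNTP article.",
        True,                # publish immediately
    )
    print("created post", post_id)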

Anyway, you can see what this looks like by using https://brainwagon.org as an NNTP server. You’ll only get a couple of websites, but it will show what the idea is.

Nifty stuff!

Man vs. Machine

Kasparov fights to a draw in a six-game match against Deep Junior. I’m not a very good chess player myself, but I spent a fair amount of time as an undergraduate studying heuristic search, so I am always interested when matches such as this occur.

Whenever such matches occur, there is a flood of activity on newsgroups and weblogs. Invariably these fall into a few simple categories:

  • It’s not really AI; the chess program just uses brute force. I once heard AI research described as research into writing programs for problems we don’t yet have good solutions to: once we have a solution, it’s just software engineering. I find this comment kind of interesting though, because you don’t see people arguing that it is somehow “unfair” for cars to go faster than humans can run. Humans seem awfully testy when confronted with the idea that something could be smarter than they are, although we’ve grown accustomed to the idea of things being stronger or faster than we are.

  • Chess programs aren’t interesting. It’s all a solved problem.
    I recently became interested in computer chess again after about a decade of lassitude, and was shocked to find that a number of rather interesting improvements in chess implementation had occurred. These are very interesting programs that require a great deal of finesse and skill to create.


  • Chess is simply too easy. They’ll never beat humans at go!

    Go is usually used as the last stand for humans because

    • There are humans who are very good at it.
    • The large branching factor makes minimax and variations intractable.

    I think that this could be mistaken on a couple of levels. Perhaps even the top humans aren’t very good at go; it may be that they are merely better than other humans. It seems odd to me to suggest that a machine capable of flawlessly examining millions of board positions per second could do nothing to improve the quality of play at any level. (For a rough sense of the scale involved, see the sketch just below.)
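
To put numbers on the branching-factor point, here is a back-of-the-envelope Python sketch; the branching factors are the usual textbook estimates, not measurements.

    # Why brute-force search drowns in go: a game tree has roughly b**d
    # nodes for branching factor b and search depth d (in plies).
    for game, b in [("chess", 35), ("go", 250)]:
        for depth in (4, 8):
            print(f"{game}: about {b**depth:.2e} nodes at depth {depth}")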

Anyway, just some rambling thoughts. Some links: the Computer Go Ladder, GNU Go, and Computer Chess Programming references.

Behind the Curtain of Java

INTERNALMEMOS.COM has a nifty article written by a collection of Sun software engineers about the inappropriateness of Java on the Solaris platform. I think it illustrates how businesses fail because of an inability to prioritize reliable software over glitzy software. I rather like the Java language and would
almost certainly choose it for new projects if the implementations were up to par. I vow only to use languages which provide good implementations on a wide number of platforms, and by all accounts, Java isn’t quite there yet.

Strange Space Objects

streak2.jpg

My friend Phil sent me these pictures, which he took using a Canon D60 through a small wide-angle telescope, and asked what the streaks are. My best guess is a geosynchronous satellite. To check, I was thinking of first plotting the direction of the trail: a geosynchronous satellite should track almost exactly east-west against the stars. A little head-scratching math should indicate its orbital velocity (assuming a circular orbit).
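
Here is roughly what that head-scratching math looks like, as a Python sketch. The constants are standard; the streak estimate assumes the mount was tracking the stars during the 30 second exposures Phil mentions below.

    # Geosynchronous sanity check: a circular orbit with a period of one
    # sidereal day fixes the radius and speed (Kepler's third law), and a
    # geosynchronous satellite drifts against the stars at the sidereal
    # rate, about 15 arcseconds per second of time.
    import math

    GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
    T = 86164.1           # sidereal day, seconds

    r = (GM * T**2 / (4 * math.pi**2)) ** (1 / 3)
    v = 2 * math.pi * r / T

    streak_arcmin = 15.04 * 30 / 60   # sidereal rate * 30 s, in arcminutes

    print(f"orbital radius: {r / 1e3:8.0f} km")      # ~42,164 km
    print(f"orbital speed : {v / 1e3:8.2f} km/s")    # ~3.07 km/s
    print(f"30 s streak   : {streak_arcmin:.1f} arcmin against the stars")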

Anyway, from his letter to me…

I was shooting some wide-field photos of Orion’s belt
about 7:45 tonight and four or five of them show this
strange streak. It seems like it’s moving awfully slow
for a satellite but I can’t imagine what else it could
be.

Please see the two attached full rez crops from the
(much larger) original jpegs taken with a Canon D60,
through my Celestron Rich Field 80 piggy-backed on my C8.
These are totally unenhanced at the full pixel rez of
the full frames. As I said, the duration of the streaks
is 30 sec.

Satellite? Asteroid? Whaddaya think?

Shuttle Columbia Lost over Central Texas

flag_half_mast.gif

I woke up around 8:30 Pacific Time today and flipped on the television to watch my usual lineup of Looney Tunes cartoons to start my Saturday, but instead landed on a news report that began with the words “unable to survive an accident at that altitude” while displaying a picture of the Space Shuttle. I had a feeling of deja vu, the same feeling I had when I was awakened by the news of the Challenger explosion 17 years ago.

The limited facts that seem to be known for sure are that Columbia broke up at an altitude of about 200,000 feet while travelling at over 12,000 miles per hour over central Texas.

Aboard were Commander Rick D. Husband, Pilot William C. McCool, Payload Commander Michael P. Anderson, Mission Specialists David M. Brown, Kalpana Chawla, and Laurel Clark, and Payload Specialist Ilan Ramon, the first Israeli astronaut. My condolences, and the condolences of all the world, go out to the brave crew and their families.

Noah’s Ark Found!

I am fascinated by pseudoscience, creationism, and all sorts of other leger de brain that people engage in. Occasionally I run across websites that don’t really deserve Quote of the Day status, so I’ve created a new Quack of the Day to take its place.

I was inspired today by Anchor Stone International. There are lots of sites which claim to have evidence that Noah’s Ark has been found, but few with as many mind-boggling claims as this one.

From their own Ark of the Covenant FAQ, we find such gems as:


Now that the Ark has been found, how is it being protected? Is there any danger of it falling into the wrong hands?

The Ark has been in its present location for about 2600 years and has been perfectly safe for that time. There is no reason to believe that it is in any danger now.

From time to time we get reports such as the following: “the area (where the ark is located) is surrounded by a high fence and is being guarded by military troops.” This is absolutely not true. The Ark is located in a cave just outside the north wall of the old city of Jerusalem. It is protected in the same way God has always protected it … by His angels. There is no need for anything beyond this.

Yeah. Right. They go on to say that the Ark will be revealed to the world (as if putting up a website saying it has been found isn’t revealing it to the world?) when Ron Wyatt, the head of this endeavor, sees the right signs.

Unfortunately, Ron joined the choir invisible in 1999, so I guess we are stuck.

It’s quite a piece of work. But they do run tour groups!

Motherboard Monitoring

Modern motherboards kick ass. They have all sorts of sensors that can tell you the current temperatures, fan speeds, and voltages. I’ve had a simple monitoring gadget from the FreeBSD ports collection called healthd installed for a while, and finally got around to writing
a tiny PHP program that inserts the relevant info into this home page. Check out
Server Info on the right.

Eventually I’ll have to get it to keep track of the temps via graphs.
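
In case the glue is of interest, here is the gist of it sketched in Python rather than PHP: capture the monitor’s report and drop it into a file the home page can include. The healthdc client command and the file path are assumptions on my part; check how your healthd installation actually exposes its readings.

    # Dump sensor readings into an HTML fragment for the home page.
    # NOTE: "healthdc" is assumed here; substitute whatever client your
    # healthd installation provides.
    import subprocess

    report = subprocess.run(["healthdc"], capture_output=True, text=True).stdout
    with open("/var/www/includes/serverinfo.html", "w") as fragment:
        fragment.write("<pre>\n" + report + "</pre>\n")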

Robot Sumo

sumo.jpg
I stopped by Barnes & Noble on the way home the other day and was bemused by a couple of books, including this one on the construction of robots to play sumo. Since I am considering a small robotics project, I thought this book might be good, and after a brief reading, it appears to be better than most, striking a nice balance between the entirely theoretical and the entirely practical. It has a good section on the use of remote control gear to control robots, something which several other robotics books that I have sort of ignore, and, as I am not an RC enthusiast, something I need to learn a bit more about. Included are some nice projects using the Basic Stamp, good coverage of IR and ultrasonic proximity sensors, and plans for a mini-sumo robot. It also has nice pictures and a darn cute cover. I’m sure that if you are a genius robot engineer you’ll learn nothing, but as I haven’t built one myself, I give it two thumbs up.

Cryptography Potpourri

I’ve also maintained a bit of an amateur interest in cryptography. While I understand a bit about modern ciphers such as DES, IDEA, and RC4, I find it more fun to play with older cryptosystems. When Simon Singh published his book The Code Book, I decided to work through the Cipher Challenge at the back. While I didn’t win the $10,000 prize, I did manage to crack 7 of the 10 ciphers, including the Playfair, the ADFGVX cipher, and the German Enigma machine (which took the most work and was the most fun). I am still fascinated by old crypto machines. My friend Jeff actually owns an M209 field cipher machine, for which I dug up a simulator out of the old Version 6 Unix distribution.

Anyway, while scanning sci.crypt, I ran across this interesting link to a paper simulation of the 3-rotor German Enigma machine. If
I had had this while I was debugging my simulator, it probably would have shaved several weeks off my efforts. Much thanks to Michael Koss, who is a collector
of crypto machines, and to John Malley for putting some of the photos of his collection up on the net. I’m completely jealous.
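
If you would rather poke at the machinery in code than on paper, here is a minimal Python sketch of a 3-rotor Enigma. The rotor I/II/III and reflector B wirings are the published historical ones, but this omits the plugboard and ring settings, so it is a toy, not a faithful simulator.

    # Minimal 3-rotor Enigma: historical wirings, no plugboard or rings.
    A = ord("A")

    ROTORS = {  # wiring, turnover notch
        "I":   ("EKMFLGDQVZNTOWYHXUSPAIBRCJ", "Q"),
        "II":  ("AJDKSIRUXBLHWTMCQGZNPYFVOE", "E"),
        "III": ("BDFHJLCPRTXVZNYEIWGAKMUSQO", "V"),
    }
    REFLECTOR_B = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

    def encipher(text, order=("I", "II", "III"), start="AAA"):
        wiring = [ROTORS[name][0] for name in order]
        notch = [ord(ROTORS[name][1]) - A for name in order]
        pos = [ord(p) - A for p in start]   # left, middle, right
        out = []
        for ch in text.upper():
            if not ch.isalpha():
                continue
            # Step before enciphering, including the double-step anomaly.
            if pos[1] == notch[1]:          # middle at notch: left and
                pos[0] = (pos[0] + 1) % 26  # middle both advance
                pos[1] = (pos[1] + 1) % 26
            elif pos[2] == notch[2]:        # right at notch: middle advances
                pos[1] = (pos[1] + 1) % 26
            pos[2] = (pos[2] + 1) % 26      # right rotor always advances
            c = ord(ch) - A
            for i in (2, 1, 0):             # right to left through the rotors
                c = (ord(wiring[i][(c + pos[i]) % 26]) - A - pos[i]) % 26
            c = ord(REFLECTOR_B[c]) - A     # bounce off the reflector
            for i in (0, 1, 2):             # and back, left to right
                c = (wiring[i].index(chr((c + pos[i]) % 26 + A)) - pos[i]) % 26
            out.append(chr(c + A))
        return "".join(out)

    # Enigma is self-inverse: the same settings decrypt the ciphertext.
    ct = encipher("HELLOWORLD")
    print(ct, encipher(ct))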

Maintaining a Robotic Sense of Balance…

I’ve always been interested in robotics (particularly of the amateur variety), and
in the past few days I’ve discovered some excellent links. Slashdot ran an
article recently that highlighted the Legway, a Lego version
of the Segway built using the RCX controller from a Lego Mindstorms kit. It’s an
awesome achievement, and very, very spiffy. David Anderson has an incredible home-built robot called nBot, which is a
self-balancing two-wheeled robot. His page is great, with links to a lot of great pictures, video, and details about the implementation. He also has a nice machine shop where he manufactures these cool robot parts. Truly inspirational.
Larry Barello has a nice page describing his Gyrobot, which has a similar control mechanism. Very nice indeed. This page links to a number of MPEG movies of JOE, a self-balancing radio-controlled robot. They also have a nice paper describing the implementation,
and I’m told it’s a subset of their thesis work, which is unfortunately in Italian.
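
For anyone curious what “a similar control mechanism” amounts to, here is a heavily simplified Python sketch: treat the robot as an inverted pendulum and feed the tilt angle and tilt rate back into wheel torque. Every constant here is made up, and a real robot also has to estimate its tilt by fusing a gyro with accelerometers.

    # Toy inverted-pendulum balance loop: a PD controller pushes back
    # against the measured tilt. All plant constants and gains are invented.
    DT = 0.01            # control period, seconds
    KP, KD = 60.0, 8.0   # proportional and derivative gains

    angle, rate = 0.1, 0.0   # initial tilt (radians) and tilt rate
    for _ in range(500):     # simulate 5 seconds
        torque = -(KP * angle + KD * rate)   # PD law: oppose the tilt
        accel = 9.8 * angle + torque         # crude linearized pendulum
        rate += accel * DT
        angle += rate * DT

    print(f"tilt after 5 s: {angle:+.4f} rad")   # settles near zero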

Cool, cool stuff.

Oops, did the decision in Eldred v. Ashcroft open a new legal challenge to the DMCA?

Jack Balkin, a scholar on First Amendment issues, presents some really interesting criticism of the majority opinion in Eldred v. Ashcroft that may frame new challenges to the DMCA. Ginsburg asserted in the majority opinion that as long as the traditional contours of copyright (such as fair use) are unchanged, Congress is free to extend the term. I find the reasoning itself rather odd, since it seems to me that extending the term is changing the boundaries, but Balkin points out that the DMCA rather clearly operates to change those boundaries by restricting fair use.

Worth a read.