You can’t learn some things from the Internet…

September 29, 2014 | Rants and Raves | By: Mark VandeWettering

The Internet is awesome. For instance, this morning I found a link to this amazing Nixie tube clock, which uses neon bulbs (no transistors or ICs) as the logic elements (clocks are really just fancy counters). I think I might have heard that neon bulbs could be used as logic elements (Alan Yates? Did you tell me about this?) but it had slipped my mind.



Very, very cool. Seems to me this kind of logic could have been implemented in the Victorian era. If you need some bonus links, you can try this link, which includes circuit diagrams, or Ronald’s page on ring counters, or even a link to Dance’s Electronic Counting Circuits. Clearly, the internet can be very useful.

But it can be really, really annoying too.

This past week, Apple released a couple of new iPhones (the iPhone 6 and the iPhone 6 Plus). The early reports were that they were astoundingly good sellers: Apple sold 10 million units over the launch weekend.

There were two things that marred this performance. The first was the 8.0.1 update. It turns out that the first update Apple pushed out to consumers was terrible: it disabled access to the cell network on phones. The update was only available for a few hours before Apple pulled it, but it made people grumpy. The 8.0.2 update followed shortly and seems better, but there is a lot of general grumbling (some of it from my missus) about stability, particularly with Facebook. As irritating as this can be, there seems to be little doubt that future updates will get things sorted out, and all will be well.

The bigger issue was bendgate. Some people reported that they had put an iPhone 6 in their back pocket, and later discovered it was bent. As a consumer, I’m pretty shocked. One thing that I like about Apple is that they make fairly robust phones. My two-year-old iPhone 5 hasn’t got a scratch on it. I’ve had numerous iPhones, and never had any complaints about physical robustness. But hey, the videos don’t lie…

Or do they?

The problem is that it’s actually impossible for us to figure out how significant the problem is by reading stuff on the Internet. Take for instance this “informative” video:



That looks pretty bad, right? Lots of people apparently claimed it was rigged, so he followed up with this:



Shocking!

But here’s the thing. On release, you’ll find people doing all sorts of things to iPhones, like hurling them at the ground or putting them in blenders. The reason they do so is that they can make money from this kind of thing. If you can get 45 million hits (as Unbox Therapy did for their first video), you can easily buy all the phones you want and torture them in different ways. And because the “bend” issue was already in the wind, with large communities of individuals irrationally supporting Apple or its opposition, YouTubers are strongly incentivized to toss more fuel on the flames. After all, a video called “Apple iPhone seems sturdy, hard to bend” isn’t likely to get a lot of clicks.

Of course Apple has responded, saying that only 9 bent phones have been returned, and they had a bunch of the press tour their “we torture test our iPhone” labs. This, too, doesn’t actually help. Apple is notorious for carefully crafting relationships with the press, and it seems unlikely that they would have invited the truly cynical to view the process and ask questions. Indeed, the press in this case seemed to be lobbing softballs at Apple, and didn’t actually ask any interesting questions.

The best information released to date is probably from Consumer Reports, who seemed to do a pretty good scientific test. But I must admit: I’m not super happy with this testing either. Their three-point test applied force at the center of the iPhone’s back, when it seems pretty clear from most of the other images of “bent” iPhones that there is some weakness up near the buttons, toward the top. It seems to me that changing where force is applied might change the results you get.

In the end, it seems virtually impossible to figure out anything useful about the bendability of iPhones. I went and handled one at the Apple Store. It seemed reasonably sturdy. I’m not going to make money by mangling a phone, so I didn’t try. By the time my upgrade cycle comes around next year, there will likely be some additional data once the furor has died down (and iOS 8 will probably be stable). But it seems unfortunate to me that with all this ability to exchange information, I’ve only increased my anxiety over this purchase, not decreased it.

It would seem the Internet is only good at finding information that nobody cares about.


Schrödinger’s Cat

September 10, 2014 | Raspberry Pi | By: Mark VandeWettering

I’ve long suspected that my cat Scrappy had teleportation powers:

Okay, okay, I know he doesn’t really. But it was kind of funny.


Why are tiny systems so big?

August 31, 2014 | Emulation, My Projects | By: Mark VandeWettering

The last five or so years have been a remarkable period in computing. About five years ago, I began to fear that computing would become increasingly pre-packaged: that laptops and tablets would totally take over the market, and that finding computers well suited to programming and experimentation would become more and more difficult.

But something remarkable happened that I didn’t see coming: the opposite occurred. The market for small development boards and computers exploded. At many different performance levels, and at very inexpensive price points, computers for experimentation flourished, and people began programming in a way that they hadn’t before. We’ve seen the rise of platforms like the Arduino, the Raspberry Pi, and the Beaglebone Black. It is truly an awesome time for computer experimentation.

But aesthetically there is something that jars me a bit: that these small, simple systems aren’t really that small or simple. Consider the Arduino Uno: it is a small 8 bit computer with only 32K of flash memory. But the development environment for the Arduino isn’t self-hosted: you need a separate cross compiling host, and the software is dozens of megabytes in size. In the 1980s, we had systems of comparable overall power (based upon processors like the 6502 or Z80) but these machines typically self-hosted interpreters (most commonly for BASIC) that allowed development to proceed without an additional cross-compiling development system. While these systems lacked some of the power of modern development environments, they also were simpler and easier to master.

Systems like the Raspberry Pi are at least self-hosted. I really like that feature: you load up an SD card with a system like Raspbian or Ubuntu, and you have a complete Unix system. But I can’t help but wonder if this is a bit too daunting for the hobbyist without three decades of Unix experience.

What I find interesting is the idea of a smaller, closer-to-the-bare-metal environment for embedded programming that can be self-hosted: one that runs on the target hardware, with only the thinnest layer of operating system.

Okay, so that’s the idea. What options are there?

One of the most interesting things I’ve begun looking at is Fabrice Bellard’s TCC compiler. Having a C compiler built into your embedded machine may seem a bit odd, but Bellard’s compiler is relatively tiny and can generate code for either Intel or ARM. Experimenting with a few of my own programs shows it to be remarkably capable: it compiled both my toy raytracer and the dump1090 software defined radio program. The resulting code is obviously not super efficient: my raytracer runs at about half the speed of the gcc-compiled version. But it does work, and the compiler is fast and small enough to self-host. Pretty interesting.
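To give a flavor of how small the embedding surface is, here’s a minimal sketch using libtcc to compile a function into memory and call it. This assumes the 0.9.26/0.9.27-era API (in particular, tcc_relocate taking TCC_RELOCATE_AUTO); link with -ltcc:

/* Minimal libtcc sketch: compile a C function to memory and call it. */
#include <stdio.h>
#include <libtcc.h>

const char *prog = "int square(int x) { return x * x; }";

int main(void)
{
    TCCState *s = tcc_new();
    if (!s) return 1;

    /* compile straight to memory: no files, no separate linker step */
    tcc_set_output_type(s, TCC_OUTPUT_MEMORY);
    if (tcc_compile_string(s, prog) == -1) return 1;
    if (tcc_relocate(s, TCC_RELOCATE_AUTO) < 0) return 1;

    int (*square)(int) = (int (*)(int))tcc_get_symbol(s, "square");
    if (!square) return 1;

    printf("square(9) = %d\n", square(9));   /* prints 81 */
    tcc_delete(s);
    return 0;
}

That’s the whole pipeline: parse, codegen, relocate, run. It’s easy to imagine wrapping a tiny editor around that loop and getting something that feels like the old self-hosted BASIC machines, but with C.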

What kind of hardware should we target? It seems like we can get a lot of leverage from ARM-based boards, and adopting popular, easily available platforms would make it easier for people to get started. In most respects, it’s hard not to pick the Raspberry Pi: it’s popular, it’s available, and a fair amount of information about “bare metal” programming on it is available. It also seems that we can use emulators like QEMU to help bootstrap and debug.

Do we need an operating system? If so, how much of one? It’s kind of an open question. I’d like to see something whose size is maybe a few thousand lines of code. Minix? Xinu? A simple real time OS/executive maybe?


Milhouse still doesn’t know the first thing about First Position…

August 25, 2014 | Checkers | By: Mark VandeWettering

White to move and win, position analyzed by Payne in 1756 in the first English book on checkers…

Until Milhouse can play this position out, it really can’t be considered a real checkers program. Right now, even with an endgame database, it’s still clueless about how to proceed.

Addendum: Reinfeld’s How to Win at Checkers gives some details on winning positions like this. I recall reading it (my puzzle database includes puzzles from this book) but I’m unsure as to how to effectively merge the knowledge into my evaluation function. Still, worth looking at.


Are improvements in computer chess due mostly to hardware or software?

August 22, 2014 | Checkers, Computer Chess, Computer Games | By: Mark VandeWettering

My recent revival of interest in computer chess/checkers/game playing was in part spawned by the impression (not particularly supported by evidence at the time) that the dramatic increase in computer chess strength must have come from more than just basic hardware improvements. It seemed obvious to me that some fraction of the increase in playing strength was due to new cleverness and discoveries by chess programmers. But I had no real datapoints to figure out what that split might have been.

Until I found this 2010 forum post by Bob Hyatt. Bob has been an expert computer chess programmer for decades, first with Cray Blitz and later with Crafty, an awesome open source chess program. Its source code is amazingly interesting, and has tons of features which are pretty cool.

In the forum post, Bob compared Crafty 23.4 (at the time, the highest-rated version he had produced) with Crafty 10.18, the version available 15 years earlier, in 1995. Running both on the same hardware, he found that Crafty 23.4 rated 360 ELO points higher than the older version.

But how much would we expect to gain from the roughly 1000x increase in speed between 1995 and 2010? I remember reading that each doubling of CPU speed is worth about 100 ELO points. Bob did some experiments suggesting that for Crafty, the number might be more like 80 ELO points. A 1000x speedup is about ten doublings, so from hardware improvements alone you might expect to see an increase of around 800 ELO points.
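The arithmetic is easy to sanity-check with a throwaway C sketch (80 points per doubling per Bob’s experiments; 100 is the older rule of thumb):

/* ELO expected from a pure hardware speedup, assuming a fixed
   gain per doubling of CPU speed. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double speedup = 1000.0;           /* 1995 -> 2010 hardware */
    double per_doubling = 80.0;
    double doublings = log2(speedup);  /* ~9.97 */
    printf("~%.0f ELO from hardware alone\n", doublings * per_doubling);
    return 0;
}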

This would seem to imply that only about 1/3 of Crafty’s improvement was due to software, with the remaining 2/3 due to increases in hardware speed. Bob doesn’t believe the 360/800 numbers are accurate in absolute terms (they are likely too large; 1995 Crafty was probably not 1100 points weaker than 2010 Crafty), but the ratio of the two is likely to stand up.

Bob wrote that post in response to this rather long thread, which I find much harder to draw any real conclusions from. But it’s still good reading.

But it would seem that my intuition was likely wrong: at least for Crafty, about 2/3 of its gains are likely due to increases in hardware speed, with only 1/3 coming from software improvements. Interesting.


Pondering computer chess…

August 20, 2014 | Computer Chess | By: Mark VandeWettering

At the risk of name dropping: on my flight out to Vancouver for SIGGRAPH last week, I had the exceedingly good luck of scoring a seat next to Pat Hanrahan. 25 years ago, I was working at Princeton in the Applied Math department, and the single smartest thing I did was make Pat’s acquaintance. Besides providing countless insights into computer graphics over lunches and chats, he helped me score my current job, where I’ve been gainfully employed for the last 23 years. He claims that he occasionally even reads my blog, so if you are reading this, Pat: thanks a million!

During the two hour flight, our chat ranged over a wide variety of topics. One topic that is completely unrelated to my work is my interest in computer chess and checkers, mainly as applications of game tree search. Even as an undergraduate, I was fascinated by this topic, but when Deep Blue beat Kasparov 2-1 (with three draws) in their six game match in 1997, I kind of pushed it to the back of my mind. I mean, I thought it was over.

I’ve made this mistake before.

What’s amazing is that computer chess programs have gotten better recently. And not just a little better, a lot better. One particularly interesting chess program is Stockfish. Firstly, it is an open source project, which means that its innards are available for your inspection. Secondly, it is available on a wide variety of platforms, including as a free application on the iPhone. I interrupt this diatribe to show the game I played against Stockfish on my return flight. I don’t play very often, but managed to eke out a shaky draw against Stockfish with it taking 10 seconds per move. I only lost concentration once and stumbled into an obvious blunder (which I shamelessly took back and went at for another try). Here’s the game, using a spiffy WordPress plugin.

Anyway, the third thing that I thought was cool is that Stockfish is really good. It’s clear that it would crush any human player in a match: it’s rated about 400 points higher than Magnus Carlsen, which means that Stockfish 5 would be expected to score about 90% against Carlsen. I didn’t think that this increase in the state of the art could have come purely from CPU speed improvements, so I wanted to look into it a bit and see what might have helped Stockfish get so good.
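That 90% figure is just the standard Elo expected-score formula, E = 1 / (1 + 10^(-diff/400)), evaluated at a 400 point gap. A quick sketch:

/* Standard Elo expected score. A 400-point edge works out to
   about 0.91 per game; a 120-point edge to about 0.67. */
#include <stdio.h>
#include <math.h>

static double expected_score(double rating_diff)
{
    return 1.0 / (1.0 + pow(10.0, -rating_diff / 400.0));
}

int main(void)
{
    printf("+400: %.2f\n", expected_score(400.0));
    printf("+120: %.2f\n", expected_score(120.0));
    return 0;
}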

Interestingly, I think one of the greatest causes is exhaustive testing. The Stockfish project has implemented a distributed testing facility called Fishtest. The idea is pretty simple: volunteers contribute CPU time to exhaustively test commits to the source tree and measure their effect on gameplay. You can read more about it here. According to the Wikipedia article on Stockfish, this allowed Stockfish to gain 120 ELO points in just 12 months.

Anyway, my chats with Pat and pondering some of the ideas from Stockfish make me want to dust off my Milhouse checkers program, and see if I can’t borrow some ideas from Stockfish as well as other ideas from Pat (implementing checkers on an FPGA?). We’ll see what happens.


My Atari 2600 Pong Clock

August 6, 2014 | Atari 2600 | By: Mark VandeWettering

While looking for something completely different, I ran across the code and binary images for my old Atari 2600 “Pong Clock”. I realized that my previous post on the matter didn’t have pictures of my final version, so just for fun, here are a couple of Stella screengrabs (in NTSC “TV” mode, for enhanced realism).

I included a tiny intro screen, which I showed at a 2010 get-together. The Conway glider at the top is animated:


It plays a game of Pong against itself, with the score representing the current time. You can set the time using the left joystick. When the minutes tick over, the right player wins the rally. When the hours change, the left player does.
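Just to make the scheme concrete, here’s the scoring logic sketched in C. (The real thing is 6502 assembly on the 2600, so this is only an illustration of the idea, not the actual code.)

/* The displayed "score" is the time, so each rally is steered to end
   in favor of whichever player's digits need to change. */
enum winner { RIGHT_PLAYER, LEFT_PLAYER };

struct clock_state { int hours, minutes; };

/* Called once per minute: advance the time and report which player
   should win the next rally. */
enum winner tick(struct clock_state *c)
{
    c->minutes = (c->minutes + 1) % 60;
    if (c->minutes == 0) {
        c->hours = c->hours % 12 + 1;
        return LEFT_PLAYER;    /* hours changed: left player wins */
    }
    return RIGHT_PLAYER;       /* minutes changed: right player wins */
}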


It also works on black and white TVs. I never made the changes necessary for it to play in PAL mode, although they should be pretty straightforward. The Atari 2600 was practically built for implementing Pong.

Still, it’s got some finesse in it. I never could have done it without all the hints from the Atari Age forums and the Stella 2600 emulator. My source code references a nice little bit of code from an Atari Age tutorial series, which I shamelessly purloined. I left a comment saying:

;;; According to the documentation, A isn’t really the position (0-160),
;;; you have to add +7 to the position. But I find that the offset in
;;; stella is +5. I haven’t done the cycle counting to figure it out,
;;; but I’ve had good luck trusting stella, so that’s what I’m going
;;; with.

Perhaps I’ll revisit that sometime and figure out what was right.


Two more “primitive” cameras…

August 5, 2014 | Photography | By: Mark VandeWettering

My previous experiments with a foam core 4×5 camera have whetted my appetite for more camera experiments. In particular, I was looking for cameras that could be built quickly, and where amateurs could construct their own lenses out of surplus optics. I am particularly interested in cameras that use the old-fashioned meniscus landscape lens design, which takes just a single meniscus lens, and in symmetric lens designs like the Steinheil Periskop. Most DIY camera projects seem to fall back on using modern or antique lenses, but I did come across two cameras from the same maker that took a more basic approach.

This large format camera is basically a pinhole camera, but with a stop right at the lens, yielding a focal ratio of about f/90. Check out the Flickr set, which includes both pictures of the camera and pictures taken through it. The camera doesn’t include a focusing mechanism, but since it operates around f/90, it already has a great deal of depth of field. It straddles the line between a pinhole and a conventional camera. But still, it creates some cool images.

The same maker created another camera, and this one is a lot more awesome. The frame is wood, it has a focusing bellows, and it takes a 4×5 film holder. The Flickr set for this camera shows some really awesome portraits, and one can tell it’s a lot more versatile and fun to use. Awesome, inspiring stuff.


An antenna for receiving ADS-B…and velocity factor of RG-6

July 29, 2014 | Amateur Radio, Software Defined Radio | By: Mark VandeWettering

Alright, last night’s experimentation with the RTL-SDR dongle on my Raspberry Pi Model B+ was pretty successful. Incidentally, I forgot to mention that it worked fine with the dongle plugged directly into the Pi; I didn’t need a powered hub. That’s pretty cool. Previously, I had experimented with decoding ADS-B signals from airliners. I thought this might be a pretty good thing to do with the Pi. I ordered a little MCX-to-female-F pigtail off of Amazon for under $6 shipped, and then thought about building a better antenna. I would have also ordered a little case for the Raspberry Pi, but all the ones I could find for the B+ seem to be back ordered. Sigh.

Anyway…

I knew that Darren at Hak5 and whixr at tymkrs.com had built collinear antennas out of coax for this purpose, so I went to review what they had done. It’s a pretty straightforward antenna to make. Darren has a nice video and writeup:

Darren’s How to Build An ADS-B Antenna

I was curious though: his discussion of velocity factor ended with “our velocity factor is 0.85.” That might be true for his coax, but how do we know?

Well, we could trust the manufacturer. Or we could guess, based on the material we think the dielectric is. But I think I’ll use my MFJ antenna analyzer to figure it out. The basic idea is to take a piece of coax of length L. Sweep up from a low frequency and find the lowest frequency at which the coax is resonant (where it presents a pure resistance, which will also likely be where the SWR is lowest). Say that frequency is f. If you divide 300 by the frequency in megahertz, you get the wavelength in free space in meters. But in the coax, the wavelength is four times the length of the coax (it behaves as a quarter-wave resonator). So if you divide four times the coax length by the free-space wavelength, you get the velocity factor of the coax.
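Here’s the arithmetic as a tiny C program. (The 2 m length and 31.9 MHz resonance are made-up example numbers, chosen because they happen to give the 0.85 Darren quotes.)

/* Velocity factor from a quarter-wave resonance measurement.
   The length and frequency are hypothetical example values. */
#include <stdio.h>

int main(void)
{
    double len_m  = 2.0;              /* physical length of the coax stub */
    double f_mhz  = 31.9;             /* lowest resonant frequency        */
    double lambda = 300.0 / f_mhz;    /* free-space wavelength in meters  */
    double vf     = (4.0 * len_m) / lambda;
    printf("velocity factor = %.2f\n", vf);   /* ~0.85 */

    /* With that factor, a half-wave collinear element for 1090 MHz
       ADS-B would be about 0.85 * (300/1090) / 2 = 117 mm long. */
    printf("1090 MHz element = %.0f mm\n",
           vf * (300.0 / 1090.0) / 2.0 * 1000.0);
    return 0;
}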

When I get some coax, I’ll try this out. Getting this length right is probably pretty important. I might also try to run some simulations to find out how systematic changes in fabrication affect the performance.

I’ll probably do an 8 or 12 element collinear. I suspect that without an antenna analyzer that can go up that high, fabrication errors for more elements will lead to diminishing returns, and ultimately maybe even diminished performance.

Addendum: A nice video showing good construction technique…




RTL-SDR on Raspberry Pi…

July 28, 2014 | Raspberry Pi, Software Defined Radio | By: Mark VandeWettering

Just a quick note. I have been meaning to try out the combination of the Raspberry Pi with one of the popular $20 RTL-SDR dongles, to see if the combination would work. I was wondering how well it would work, how hard it would be, and how much of the available (small) CPU power it would use. The short answers: reasonably well, pretty easy, and maybe 20% for rtl_fm. That’s pretty encouraging. I’ll be experimenting with it some more, but here’s a short bit of me recording KQED, the Bay Area’s public radio FM station, using the pitiful tiny antenna that came with the dongle. It should be noted that my house is in a bit of a valley, FM reception in general is quite poor, and I recorded this from inside my house, which is stucco and therefore covered in a metal mesh that doesn’t help. Not too bad. I’ll work out a better antenna for it, and then try it more seriously.

Addendum: Here is a page with lots of good information on RTL-SDR/dump1090 on the Raspberry Pi.


More musings of computers past: Popular Electronics, the COSMAC ELF

July 27, 2014 | Retrocomputing | By: Mark VandeWettering

My musings about my earliest memories of computers brought me back to 1976 and the appearance of the COSMAC ELF in Popular Electronics. I was only twelve, and probably had only the vaguest understanding of what such a machine could do, or why I might want one, but I remember reading these articles and them capturing my imagination. It probably laid the groundwork for my continuing fascination with computers. Still, I never really developed any serious understanding of the architecture.

Now, when I stare at datasheets for the 1802 processor, it doesn’t seem particularly hard to understand, but it’s a pretty peculiar little chip. Wikipedia has a good introduction. The 1802 found significant use aboard satellites and space probes, including the Galileo probe. You can get more documentation here. While the processor is not well known now, it actually generated significant interest until the 6502 hit the market.

I’ve just begun to dig around for more information:

Four articles that appeared in Popular Electronics on the COSMAC ELF
A JavaScript simulator of the ELF

Addendum: There is a huge pile of RCA 1802 code on archive.org. Judging by the filenames, not all of it is 1802 related, but there is a ton of stuff.


My programming career began with this magazine article…

July 25, 2014 | My Diary, My Projects | By: Mark VandeWettering

From tiny acorns, giant oak trees grow. Likewise, seemingly trivial events and items can affect our lives.

As a kid, I had been interested in computers for a while. I think it must have postdated the appearance of the Altair 8800, which debuted in Popular Electronics in 1974 (I would have been ten or so then), but I do recall reading articles about the COSMAC ELF computer in 1976 and 1977. Quite frankly, I don’t have the faintest clue why they attracted me. Perhaps it was just the idea that you could display a picture of the Enterprise on your TV screen (in horrendously blocky black and white), and that it wasn’t absolutely impossible to imagine that I could earn enough money to build one. Some interest in this old computer still exists; you can build a version of that old ELF with lots of upgrades. Seems like fun. But I digress. Constantly.

My first computer would actually be significantly more powerful. In December of 1980, all of my savings from a year of yard work were pooled with some additional funds that Mom kicked in as a Christmas gift, and on December 24th, I got my first computer: an Atari 400 with 16K of memory and a BASIC cartridge.

I didn’t even have a storage device. It would take a few more months until I saved enough money to get one of the Atari 410 tape drives. I began to plunk along in BASIC, writing programs to do simple things like adding numbers and changing the color of the screen. I also got a copy of Star Raiders. And I began to wonder: why were the BASIC programs that I was writing so… pitiful compared to what was possible? I had begun to read articles in the computing literature of the day that hinted at things like “player-missile graphics”, and I knew a tiny bit about machine code.

This all changed with a game called “Shoot”, published in Compute! Here’s a link to the article. It was like having a pocket watch and knowing what the time was, but then one day levering the back of the watch open and revealing the mechanism inside. It was the source code to a game that was simple, yet clearly beyond what I was accomplishing with my forays into BASIC programming. The complete assembly code was there, available for inspection. I dutifully typed in the code, and played the game for ten minutes or so. But the real game was the code! Reading it over and over again, I learned a lot. I experimented more. I got the Atari Assembler cartridge, and then ultimately MAC/65, a much more powerful macro assembler. I experimented. Tweaked. Hacked. Learned. And it never really stopped. Thanks to Compute! and John Palevich.


Learning the ropes…

July 24, 2014 | Retrocomputing | By: Mark VandeWettering

Over the past few years, I’ve expressed an interest in the AGC, or Apollo Guidance Computer. If you haven’t had the time to look at it, the Wikipedia page is good enough to get a brief overview, but if you want to dig deep, you can find all sorts of information and simulators.

I found myself looking back into it this week for two related reasons: one, the anniversary of the Apollo 11 landing, which I still remember; and two, a new argument that I’ve read (but won’t dignify with a link) claiming the moon landings were faked because the AGC could not have worked. But I must admit, the author pointed at one bit of the AGC, its core rope memory, which he claimed couldn’t work. I think the safer claim would be that he didn’t understand how it worked, but when I thought about it, I realized that I didn’t really know how it worked either. And that bothered me, so I thought I’d dig into it a bit more.

Here’s a brief, high level introduction:



The basic idea is pretty simple, and relies on using ferrite toroids as transformers. Imagine that you have two wires passing through a ferrite core. If you send a pulse down one wire, it will generate a pulse on the other wire. This principle is used in all transformers, which step voltages up and/or down by varying the number of turns through the core. You can build a simple memory using this principle. This kind of memory is demonstrated admirably by SV3ORA, who created a 70-bit ROM that serves as a decoder for 7-segment LEDs. A pulse (or better, a pulse stream) on one of the ten input lines generates the appropriate set of output voltages to display the corresponding numeral on a 7-segment LED display. His webpage has some nice, easy to follow circuits, and a cute little video of it working.
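To make the ROM idea concrete: the contents of such a 70-bit ROM are just the standard 7-segment digit encodings, ten input (digit) lines by seven output (segment) cores. Here’s a plausible reconstruction of the table (SV3ORA’s actual wiring may differ). A 1 bit means the digit’s drive wire threads that segment’s core, so a pulse on the wire induces an output pulse for that segment; a 0 means the wire bypasses the core:

/* 10 digits x 7 segments = 70 bits of "rope" ROM.
   Bit order is gfedcba (bit 0 = segment a). */
static const unsigned char rope_rom[10] = {
    0x3F,  /* 0: segments abcdef  */
    0x06,  /* 1: segments bc      */
    0x5B,  /* 2: segments abdeg   */
    0x4F,  /* 3: segments abcdg   */
    0x66,  /* 4: segments bcfg    */
    0x6D,  /* 5: segments acdfg   */
    0x7D,  /* 6: segments acdefg  */
    0x07,  /* 7: segments abc     */
    0x7F,  /* 8: segments abcdefg */
    0x6F,  /* 9: segments abcdfg  */
};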

But if you look at the diagram for the Apollo Guidance Computer, it looks a little different. It has a series of “inhibit” lines that weave in and out of the cores, in addition to some sense lines.


The first description I found was around page 90 of this report, MIT’s Role in Project Apollo, Volume 3. But to be honest, I didn’t really understand it. Luckily, I found what appears to be a better description: P. Kuttner’s 1963 paper, “The Rope Memory: A Permanent Storage Device”. I still need to work through the details, but it makes a lot more sense to me, and I’m beginning to see how the address decoding works. The more I ponder it, the more I see it for the clever bit of engineering it is. It was remarkable in its day, allowing densities of 1500 bits per cubic inch, including all the address decoding. Very cool.

Addendum: Hacker friend Jeff Kellem was unable to post a comment to this blog (it got trapped by the spam filter, no doubt because of the high number of links, which normally indicates spam, but in this case indicates SCIENCE!), but he was kind enough to drop me an email with additional reading. I’ll reproduce it here:

You might find this July 1976 issue of BYTE magazine interesting:

Coincident Current Ferrite Core Memories
https://archive.org/stream/byte-magazine-1976-07/1976_07_BYTE_00-11_Core_Memories#page/n7/mode/2up

Also, maybe check out:

Magnetic Core Memory Systems
http://www.cs.ubc.ca/~hilpert/e/coremem/index.html

Ferrite Core Planes and Arrays: IBM’s Manufacturing Evolution
http://ibm-1401.info/IBMCoreArraysIEEEMagnetics1969.pdf

And start with Volume 1, Issue 2 (May 1973) of Amateur Computer Club Newsletter, there’s a several part series titled “Core for Stores” in there:

http://www.smrcc.org.uk/members/g4ugm/acc.htm
http://www.smrcc.org.uk/members/g4ugm/ACC/Vol1-Issue2.pdf

Look forward to reading more about your exploration into core memory.

fyi.
-jeff

All very cool resources. All of us old-timers probably remember Byte magazine (though to be honest, I didn’t recall that they had ever run an article on core memory), but I had never heard of the Amateur Computer Club newsletter. It’s deliciously old and homebrew. The description of core memories is great; it includes some of the drive circuitry that one would have built back in 1973. I’ll have to check it out further.

Addendum 2: If you want to go to a lot of trouble (and, per bit, a huge expense) to make a core memory that can be read by your Arduino, Wayne has a lot of advice and detail on his page.


I got it! I got it! I really got it!

July 23, 2014 | Baseball | By: Mark VandeWettering

I haven’t had much of a chance to get to ballgames this year. I normally go to about a dozen or so A’s games during a typical season, but this year I basically haven’t made it to any. Life has just filled up with other things to do. But last night, the mystical forces of the diamond converged in the form of a pair of free tickets and a free parking night at the O.co Coliseum. Athletics vs. Astros, woohoo!

It was a beautiful night for a ballgame. The temperature was in the mid-sixties or so, with very little wind. At first pitch, it didn’t seem like there would be a very large crowd. There were lots of empty seats. I guessed that fewer than 10,000 fans were in attendance, which was actually kind of okay with me. I like the relatively laid-back atmosphere of these mid-July games. But as the game wore on, more and more people began to sit down. Checking this morning, official attendance was 22,908. Not too bad.

A very nice game all in all. The A’s gave up 2 runs in the top of the third, but scored in the bottom half and again in the sixth to tie the game. It remained that way until the end of regulation, but L.J. Hoes would end up hitting a home run for the Astros in the top of the 12th, and the A’s went down 1-2-3 in the bottom half.

Ah, but I’ve buried the lead.

In all the years that I’ve been going to ballgames, I had never come away with a foul ball. I have been hit in the head by one, but my slow reflexes and the near-concussion meant that I didn’t come up with the ball on my one best shot at getting one. But last night, I finally did it, in the most surprising way.

Carmen and I were seated in row 29 of section 113, directly (but far) behind the visiting team’s dugout. The top of the third had just ended, so I was just sitting there checking my phone when… suddenly the people around me got excited. I looked up just in time to see a ball, which literally landed in my lap, bounced against my chest, and stopped. I’m guessing that one of the Astros lobbed the ball trying to get it to the very cool pair of Astros fans in row 20 or so, but misjudged. And so, this time without the threat of head injury, I got my first game ball:


Awesome! Achievement unlocked.


Happy Birthday, brainwagon!

July 21, 2014 | Blogging | By: Mark VandeWettering

On this date back in 2002, I started this blog. Since that time, I’ve published 4,019 posts, with a total of 725,146 words. I hope some of you have enjoyed it. I’ve been slacking off lately, but I still find stuff that I think is fun, and I hope you drop in from time to time to read and drop me a comment.
