G3ZJO just posted a nice little blog entry about the use of LEDs (which are nearly ubiquitous) as varicaps (which are often harder to find). Lots of people are using these in their QRSS beacons, no doubt due in part to the work of Hans Summers, who was the first person to bring them to my attention. Someday, I’ll begin to work on electronics again, and I’ll make use of this information.
Here is a nifty little page that I haven’t had time to absorb, but it gives some good strategy hints on playing Connect Four. Saved for later perusal.
I’ve been wanting to put a ham radio into my car for quite some time. The obvious thing would have been to get a nice 2m/70cm dual bander, but I didn’t really want to invest a huge chunk of change into it at this time, so I decided to go with a simple basic 2m rig. My idea is that even if I decide to upgrade later, having an effective 2m rig lying around isn’t a terrible thing: I could use it for a base station radio, APRS, or even an uplink transmitter for the FM satellites.
So, I settled on the Yaesu FT-1900R. It’s inexpensive ($129) and I’ve had pretty good luck with Yaesu equipment. It was also the subject of a review in the May 2010 issue of QST, which just arrived. (Incidentally, that review is terrible. It is mostly a list of stuff you can read directly from the product manual, which you could just as easily download from Yaesu. I expect reviews to include some insightful commentary, not merely a laundry list of features. Additionally, the author chose to power this radio through the cigarette lighter adapter in his car. Everything I’ve read, from radio manufacturer to car manufacturer guidelines, tells you not to do this. It doesn’t inspire much faith in the capabilities of the reviewer.)
I’ll probably bench test this radio when I get back home from Portland. Perhaps I can provide a more helpful review.
I’ve been interested in techniques where amateurs can digitize images and models for quite a while. This website percolated to the top during today’s relaxing web browsing: it’s pretty spiffy, and is interesting on a couple of fronts, not the least of which is that the author designed the gearbox for tracking a laser using a CAD program which drove a CNC mill so that the parts could be cast. Very slick.
Connect Four is a pretty neat little game, which was solved back in 1988 by two different individuals: James Allen and Victor Allis. It also forms the basis of a pretty nifty benchmark called Fhourstones. My modern desktop evaluates over 10 million positions per second, and can solve the entire game in about 3 minutes. Pretty cool.
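Solvers like these are basically negamax searches with a transposition table. As a toy illustration (my own sketch in Python, not Fhourstones’ actual bitboard code), here’s a memoized negamax that solves a shrunken variant of the game, 4 columns by 4 rows with connect-three, instantly:

```python
# Toy game solver in the spirit of Fhourstones: negamax plus a
# transposition table (here just functools.lru_cache). The board is
# shrunk to 4x4 / connect-3 so it solves in a blink; the real 7x6
# game needs bitboards and careful move ordering on top of this.
from functools import lru_cache

COLS, ROWS, CONNECT = 4, 4, 3

def is_win(board, player):
    """Did `player` complete a line of length CONNECT?"""
    grid = [[board[c][r] if r < len(board[c]) else None
             for r in range(ROWS)] for c in range(COLS)]
    for c in range(COLS):
        for r in range(ROWS):
            for dc, dr in ((1, 0), (0, 1), (1, 1), (1, -1)):
                cells = [grid[c + i * dc][r + i * dr]
                         for i in range(CONNECT)
                         if 0 <= c + i * dc < COLS and 0 <= r + i * dr < ROWS]
                if len(cells) == CONNECT and all(x == player for x in cells):
                    return True
    return False

@lru_cache(maxsize=None)
def negamax(board, player):
    """Return +1 / 0 / -1: win, draw, or loss for the side to move."""
    best = None
    for c in range(COLS):
        if len(board[c]) < ROWS:                     # column not full
            after = tuple(col + (player,) if i == c else col
                          for i, col in enumerate(board))
            if is_win(after, player):
                return 1                             # immediate win
            score = -negamax(after, 1 - player)
            best = score if best is None else max(best, score)
            if best == 1:                            # can't do better
                return 1
    return 0 if best is None else best               # full board = draw

empty = tuple(() for _ in range(COLS))
print(negamax(empty, 0))   # perfect-play result for the first player
```

The `lru_cache` stands in for a real transposition table; position counts explode without it, which is exactly why Fhourstones makes a good memory-and-CPU benchmark.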
A couple of years ago, I mused about an “easy” checkers problem which my checkers program Milhouse found to be pretty difficult. Here’s the position again, with White to move and win:
(I didn’t mention the source of the puzzle before: I got it out of one of Rob Pike’s books, not sure which one. It’s puzzle 104 in my collection, but listed as Rob Pike Puzzle #8). As I mentioned then, it takes a fairly deep search to find the solution (28 plies), and while the MTD(f) algorithm proposed by Plaat finds the solution very quickly, my implementation of alpha-beta struggled significantly. I thought I’d try this position again to see what headway has been made in two years. Using the normal settings, here’s the output of iterative deepening:
 +1 : [-1038] 0.02s : 15-10
 +3 : [-1044] 0.02s : 18-14 30-26 14-10
... researching after failing low (-1070)
 +5 : [-1070] 0.02s : 15-10 32-27 19-23 28-32 23-19
 +7 : [-1086] 0.02s : 18-14 32-27 15-10 27-23 19x26 30x23 10-15
 +9 : [-1106] 0.04s : 18-14 30-26 15-10 32-27 19-15 25-30 14-9 28-32 9-14
+11 : [-1118] 0.13s : 15-10 32-27 19-15 12-16 18-14 8-11 15x8 3x12 14-9
    : 28-32 9-5
+13 : [-1133] 0.53s : 18-14 32-27 15-10 27-23 19x26 30x23 14-9 25-30 9-5
    : 31-27 10-6 28-32 6-1
+15 : [-1143] 2.18s : 18-14 32-27 15-10 27-23 19x26 30x23 14-9 25-30 9-5
    : 30-26 10-14 31-27 14-10 28-32 10-15
+17 : [-1143] 8.53s : 18-14 32-27 15-10 27-23 19x26 30x23 14-9 25-30 9-5
    : 30-26 10-14 28-32
... researching after failing low (-1168)
+19 : [-1168] 46.67s : 15-10 32-27 18-14 28-32 14-9 31-26 9-5 8-11 5-9
    : 17-22 19-24 3-7 10x3 27-23 9-5 20x27
+21 : [-1176] 143.75s : 15-10 30-26 18-22 32-27 22x13 8-11 10-14 27-23 13-9
    : 23x16 9-5 31-27 5-1
In other words, after two and a half minutes of searching, the computer still thinks white is screwed, with a score of -1176.
I wanted to go back and re-enable the MTD(f) algorithm in milhouse, but somehow through the 100 or so modifications that I’ve done since then, I’ve removed that code entirely. But I thought that I might make some small modifications to the program that would significantly enhance its ability to solve this puzzle. Milhouse normally uses a windowed search to find the best move using iterative deepening, searching to increasing depths. Each time, it sets alpha and beta (the search window) to be centered around the current value, with a window width of about one quarter of a man. If the search returns a value inside that window, we know its precise value. If it returns less, we “fail low”, and if we really wanted to know what the value was, we’d have to re-search with different bounds (I normally use -INF to the new failed bound). Similarly, if it fails high, we might need to re-search.
But in our case, we don’t really care if we fail low. If we do, we know that no winning solution can be found at that depth. There is no reason to find the exact value; we may as well just deepen the search and try again. So, I made some minor modifications to the iterative deepening procedure: I set the alpha cutoff to a high value (say 1000), and if we fail low, we just continue at the next depth without re-searching. This is very similar to what MTD(f) does, except that MTD(f) uses null-window (zero-width) searches. Here’s the trace:
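In rough Python, the two deepening loops look like this. The `search(depth, alpha, beta)` interface, the window width, and the stand-in scores are my own illustrative inventions, not milhouse’s actual code:

```python
# Two flavors of iterative deepening. deepen_aspiration() is the normal
# windowed search; deepen_win_only() pins alpha high and treats every
# fail-low as "no win at this depth, go deeper" -- no re-search.
INF = 30000

def deepen_aspiration(search, max_depth, window=50):
    """Center a small window on the previous score; re-search wide on
    a fail low or fail high to recover the exact value."""
    score = 0
    for depth in range(1, max_depth + 1, 2):
        alpha, beta = score - window // 2, score + window // 2
        score = search(depth, alpha, beta)
        if score <= alpha:                  # fail low: re-search
            score = search(depth, -INF, alpha)
        elif score >= beta:                 # fail high: re-search
            score = search(depth, beta, INF)
    return score

def deepen_win_only(search, max_depth, threshold=1000):
    """Only ask whether a winning line exists: fail-lows are expected
    and ignored, so each depth costs a single cheap search."""
    for depth in range(1, max_depth + 1, 2):
        score = search(depth, threshold, INF)
        if score > threshold:               # found the win
            return depth, score
    return None

def fake_search(depth, alpha, beta):
    """Stand-in for the real engine: pretend the winning score (9973)
    only becomes visible at depth 29, clamped fail-hard to the window."""
    true_score = 9973 if depth >= 29 else -1176
    return max(alpha, min(beta, true_score))
```

The win-only loop does half the work per depth (no re-searches), and more importantly the high alpha bound lets alpha-beta prune everything that isn’t a forced win.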
... researching after failing low (-1038)
... researching after failing low (-935)
... researching after failing low (-926)
... researching after failing low (-818)
... researching after failing low (-722)
... researching after failing low (-715)
... researching after failing low (-616)
... researching after failing low (-602)
... researching after failing low (0)
... researching after failing low (0)
... researching after failing low (0)
... researching after failing low (0)
... researching after failing low (0)
... researching after failing low (222)
+29 : 1.76s : 19-24 20x27 18-22 17x26 15-10 8-11 10-6 3-7 6-9
    : 12-16 9-6 16-19 6-2 19-24 2-6 11-16 6-9 16-19
    : 9-13 19-23 13-9 7-10 9-13 10-15 13-17 15-18 17-22
final score = 9973
1.757 seconds elapsed.
In other words, in less than two seconds, it finds the winning line. Very cool.
While mucking around this morning, I bumped across the Sixty Symbols website, something I hadn’t seen before. It describes itself thusly:
Ever been confused by all the letters and squiggles used by scientists?
Hopefully this site will unravel some of those mysteries.
Sixty Symbols is a collection of videos about physics and astronomy presented by experts from The University of Nottingham.
They aren’t lessons or lectures – and this site has never tried to be an online reference book.
The films are just fun chats with men and women who love their subject and know a lot about it!
It’s worth noting many symbols have multiple uses across scientific disciplines and we sometimes tackle them from an unexpected viewpoint.
Click on “gamma” and you’ll find a professor of physics talking about cricket balls… Click on “rho” and we’re stuffing paperclips into coffee cups.
And sometimes when there’s no symbol to tell a story (like Schrödinger’s cat), well we just make one up!
However whatever symbol you click on, we hope you’ll see something interesting and maybe learn something new.
As an example, check out their explanation of the drinking duck:
Addendum: The original patent didn’t include the most important part of the drinking bird: his ubiquitous top hat!
Yesterday I was in our Atrium, and Craig had his iPad with him. I got into a discussion with him and Loren about why I thought the device was very cool. (I also told them why I had been actively discouraged from becoming an iPhone developer earlier, but that’s a story for a different time.) But I brought up a point which I’ve been kind of mulling over ever since: the iPad (and iPhone) may be the first not-completely-user-hostile-interface ever delivered in a consumer electronic device.
Consider your TV set and its remote as a counterexample. On virtually every television ever made, there are two ways to select what channel you wish to watch: by going up and down, or by entering a channel number. There is no more reason to select channels this way than there would be for you to access your online photos by number, or by paging through them one at a time. And yet, for decades, no TV manufacturer has sought to upgrade this very simple, basic interaction that you have with a television. Is there really any reason at all for you to use channel numbers for anything? They add ten buttons to your television remote. Wouldn’t it be nice to drop them, and use the space to make the buttons you actually do use larger and differentiated, so you could find them in the dark? And how about those buttons to scroll up and down lists? Wouldn’t you just like to point at what you want? The Nintendo Wii uses a nice IR sensor system to select items: imagine if that technology were merged with your remote, along with accelerometers and the like. If Nintendo can profitably make these as options in a $250 game system, you’d think that kind of technology could be used in other consumer electronics.
And don’t even get me started about devices like Blu Ray players. Why do I need a different remote? And a completely different system of conventions and menu selections. Bleh.
Okay, I’m straying a bit. The iPad has already demoed some really, really nifty applications with interesting, intuitive interfaces. But while I have defended the iPhone/iPad as a consumer device, I am not really all that enthusiastic about doing my own programming and experimentation with such a device. So, I thought I might throw out this question: imagine that you were going to prototype improved interfaces for media devices. Are there any open source options that are worth considering, or are they all terrible? I’m considering a platform like the Acer Revo, hooked to a large screen TV using HDMI, and possibly some wireless bluetooth devices (or maybe just the Wii remote). Anyone have any experience/success with this kind of interactive UI programming?
I sent a copy of this as a letter to Joel Johnson @ Gizmodo. I have no pre-existing relationship with Joel, but was dismayed by their treatment of the next-generation iPhone release, and thought to express some of the reasons behind my general displeasure.
I thought I might drop you this little missive to express a tiny bit of the disappointment that I have with you and your fellows in your treatment of this new iPhone issue. I think that you are missing something very important, and I thought I would take the time out of my day in the (almost certainly vain) hope that you might consider the actions of Gizmodo in a different light.
Here it goes.
You guys run a gadget website. I’m frankly part of your target demographic. I own lots of gadgets, I like lots of gadgets, and I spend a fair amount of time reading about possible gadget purchases. Your reviews and discussions are frequently helpful in making my buying decisions. But let’s get real for a moment: you guys make a living by providing a venue for those who make gadgets to get attention from those who buy gadgets. This requires that you walk a rather fine line. You must provide accurate, reasonable information for consumers, or nobody will bother coming to gizmodo.com. And you must provide reasonably positive reviews of products, because no company would bother advertising on a site which gave their products consistently negative reviews.
But here’s the funny thing: Apple doesn’t really advertise on the web. They prefer to use print media and television for the most part. So they don’t directly pay you for advertising. So guess what? They don’t really need you. And that means they can dictate whatever access they grant you, on whatever terms they like, and you will suck it up, because all the companies that do advertise with you want to ride the popularity of Apple products for page views.
In other words, you need them way more than they need you. You cover Apple products in spite of all the irritation that it might entail because it makes you money.
Another way that you can choose to make money is to traffic in gossip and rumor. You might even argue that it’s for the benefit of your readers. Heck, we all like to engage in this kind of thing. “What will the next iPhone be like?” Heck, I’d like to know. My 2 year contract is about up, it’d be great to see what was coming down the line. So, you guys write editorials speculating, and you go out into the industry and try to snoop to find out what’s going on. And, of course you try to encourage relationships with people “in the know” who might tip you off.
There isn’t anything wrong with that. The people who provide tips are grownups, and presumably can make decisions about the risk that they are willing to take in revealing their companies’ secrets, and can take whatever measures they think appropriate or necessary to ensure their anonymity.
But in this matter, you’ve taken that option away from Mr. Powell. He didn’t choose to reveal a company secret to you, or to sell you an Apple prototype. You acted ruthlessly, paying an (as yet anonymous) third party for access to a prototype which you knew was not their property, without regard to whom it might hurt, and shamelessly and profitably exploited the information for your own benefit. I think this is way out of line.
You can sit and pretend that outing Mr. Powell was for his benefit, and perhaps you are right. But what truly would have been to his benefit would have been to convince your anonymous third party to turn Apple’s property back over to them, and to not shamelessly exploit this information for your own benefit.
Shame on you.
I’m posting a copy of this letter to my own blog at http://brainwagon.org.
G4ILO had much better success than I did with similar equipment. His MP3s are way more convincing than mine.
Or should that be maybe? Arecibo?
In the world of visual astronomy, it is well known that your eye’s peak sensitivity to light doesn’t occur right when you are staring at an object directly. To detect the faintest galaxies, you must stare slightly away from where the object is, and you’ll see it pop into view. This technique is called “averted vision”.
This leads to another phenomenon, which is detection through the use of “averted imagination”. You know a faint fuzzy is there, so you imagine that you see it.
While KP4AO was operating CW from Arecibo, I took my naked FT-817 outside with the cheapie 11 element yagi I made, and aimed it where I determined the moon should be. And I heard… well… something. I’d normally drag my laptop out with me to record, but I had my little Sony digital recorder, so I used that instead. Here’s the audio, transferred back to my laptop (the file is an uncompressed .WAV file, but the Sony does do some compression of its own, so chances are this is slightly less intelligible than what I heard at the headphones):
I heard a few characters scattered in there, but maybe it’s my imagination. Tell me what you think!
Sigh. Not hearing anything from the Moon.
Tuning into the live ustream.tv video/chat line, it appears that lots of people with much larger/better setups than mine are also having difficulty hearing anything, so I am not the only one. Given that under the best of circumstances my unproven antenna would have barely enough gain, I think this attempt has been somewhat unsurprisingly unsuccessful.
Sarah was nice enough to at least snap a picture of me standing out front with my antenna, looking goofy.
Addendum: It appears that they didn’t have their power amplifier up today, so they were operating at a lower power level than was originally hoped. With luck, they’ll have that fixed and signals will be 13 dB stronger tomorrow by comparison. I suspect that still might not be enough. But I’ll try again tomorrow, just in case.
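For reference, here’s the dB-to-power-ratio arithmetic (just the generic conversion, nothing specific to Arecibo’s actual setup):

```python
# Decibels are a logarithmic power ratio: dB = 10 * log10(P2 / P1).
def db_to_ratio(db):
    """Convert decibels to a linear power ratio."""
    return 10 ** (db / 10)

# 13 dB works out to almost exactly a 20x increase in power.
print(f"+13 dB = {db_to_ratio(13):.1f}x power")
```

A 20-fold power increase is substantial, but whether it’s enough depends on how far below the noise floor the signal sits now.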
So, my barely finished antenna got just one basic operational test last night. I monitored a pass of SEEDS II, which was squawking in Digitalker mode on 437.485 MHz, somewhat higher than the frequency this thing is nominally tuned for (it’s centered for operation around 432). I haven’t really tried SEEDS II in a while, so I can’t compare the relative signal strength. At times the signals were quite strong, but I seemed to have difficulty tracking and had to keep rotating the antenna (the polarization seemed to be shifting a lot). I can’t remember its output power, but it must be quite low (well under 1 watt; 100 mW wouldn’t be far off). I hadn’t heard it in FM/Digitalker mode before, so I wasn’t ready to record it, but I suspect the recording wouldn’t have been great anyway because of the fading.
In about 30 minutes, a fifteen degree pass of AO-51 out over the Pacific will occur, and then in about two hours, Arecibo operations should begin. I’m carting my FT-817, a pocket recorder, some headphones and my antenna (I’ll probably toss in the Arrow too) to work, hoping I can hear something during a coffee break. Stay tuned.
Will I hear anything from the moon tomorrow? Your guess is as good as mine.
Apparently the Arecibo dish will be streaming video from their location during the Moonbounce event over the next few days. Check out the following link, or the embedded video stream:
Free TV : Ustream