Re: the $1 notebook

January 16, 2025 | My Projects | By: Mark VandeWettering

It’s kind of amazing when things that you have been thinking about for a while come together and make you think that the world is trying to tell you something (or perhaps something that you have been trying to tell yourself). A couple of days ago I posted a quick link to a short video about a tiny craft project that I did this week: making myself a small notebook. I have long been interested in bookbinding, and the gift of a rather crufty 1905 copy of The Whitehouse Cookbook seemed like a reason to start down that path in earnest. At some point I’d like to rebind it, replacing the cover and repairing some torn sheets, and given that it’s not especially valuable, I thought it would be a good first attempt. Cutting and sewing some signatures seemed like a good way to start, but amidst this, I watched this video:

And it got me thinking…

First of all, a good friend of mine has been carrying a small pocket notebook for as long as I’ve known him. Inside he scribbles all sorts of little bits of intellectual flotsam and jetsam, and it is not uncommon for us to be having lunch and for him to pull out his notebook and add some notes.

And I keep thinking that I should be doing the same.

In the modern world, it’s exceptionally easy to have your attention distracted by your phone or your computer. I spend hours each day in front of one or the other, and while I often start with a particular goal in mind, it’s dreadfully easy to have my attention pulled away by something these remarkable devices choose to show me, and then I never get back to what I was intending to do.

Consider yesterday as an extended example: I went to my computer to spend a finite amount of time (I scheduled an hour) taking the very first steps toward understanding the ActivityPub tech that underlies Mastodon. One of the first things that you learn is that the “webfinger” protocol provides a means for someone to discover some basic information about you, which is needed to create an Actor in the protocol; the Actor is the basis of identity and allows you to exchange information on the Fediverse. So, I did some simple web searches to find out how I could implement that.

You can uncover the basic website that tells you what webfinger is. Basically, it’s a means of having a standard URL that can give people information that you’d like to share. The cool thing is that this identity provides some independence from whatever identity a particular social media network assigns you, and you can create aliases which link to different identities. This is especially powerful when you use it with sites that are powered by ActivityPub. (Or so the idea goes; I’m still learning about this.)

Had I been clever, I would have uncovered this webpage first. Or perhaps I did. I already have a website hosted via Github Pages that would seem like an ideal way to create a presence and experiment. It is, after all, served on a domain that I already have, even a static file would likely be adequate, and in any case, I’d learn a lot by doing it.
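Since a webfinger response is just JSON, a static host can serve one by committing a file at `.well-known/webfinger`. Here is a minimal sketch of what such a document might look like; the account name, domain, and URLs are invented for illustration, and a real server would fill in its own values:

```python
import json

# A static WebFinger document for a hypothetical account "mark@example.org".
# All names and URLs here are made up; substitute your own domain.
webfinger = {
    "subject": "acct:mark@example.org",
    "aliases": ["https://example.org/@mark"],
    "links": [
        {
            # The "self" link is what ActivityPub servers follow to find
            # the Actor document.
            "rel": "self",
            "type": "application/activity+json",
            "href": "https://example.org/actor.json",
        }
    ],
}

# Writing this out verbatim as .well-known/webfinger on a static host
# is enough for a single-user site.
print(json.dumps(webfinger, indent=2))
```

A dynamic server would also honor the `?resource=acct:…` query parameter; a static file simply returns the same document for every request, which is adequate when the site has only one identity.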

But instead I started by thinking about possible plugins for WordPress that I could host here, on this blog.

This was a mistake. I ended up searching multiple plugins, and in the course of this, I made a startling discovery: that someone had managed to hack into my WordPress blog and had modified the wp-config.php to serve up links to their cryptocurrency wallet product.

And, well, that was the rest of the afternoon.

I mean, it’s good that I noticed it. It’s kind of annoying to think that my blog’s dozen or so daily visitors might be co-opted into shilling for a cryptocurrency wallet, but…

You see, this post has already been led astray again. We were talking about notebooks.

A notebook has certain advantages over sitting at your computer and surfing:

  1. It’s intentional. Nothing in your notebook happens by accident. If something is in your notebook, it is because you decided that it is important and that you want to revisit it again some time in the future.
  2. Nothing is in it that you didn’t mean to put into it. You won’t suddenly get distracted by a political discussion you didn’t want to have, or a new dance craze on TikTok, or even the latest Raspberry Pi news when you really want to be thinking about ActivityPub or hugo or bookbinding.
  3. It serves as memory. I’m getting older, and I must admit that keeping track of the millions of things that I need to deal with requires that I make lists. Lists of things to do. Lists of things to look up. Lists of projects that I briefly considered, but couldn’t begin right away. Sometimes these fleeting ideas are gone with the very next disruption in my thoughts, but if they are written down, I can go back and see them again.
  4. Nobody else’s messages interrupt me on my notebook. There is no algorithm that substitutes what it thinks I should read for what I think I should read.
  5. You don’t have to share your notebook. I probably overshare a bit on social media and even here on my blog, but my notebook is entirely for me. Most of it is probably only meaningful to me. That’s okay. In fact, it’s better than okay: it’s desirable.
  6. Our attention is valuable, and we give a lot of it away to social media. The attention we spend in our notebooks is for our consumption and benefit. Perhaps some of it will be turned into things of value for others, but we all could benefit from marinating on our own thoughts and ideas a bit before expressing them to the world.

This is not an exhaustive list. If you find keeping a journal or notebook valuable, I’d love to hear about your experience.

My own personally bound 32-page book will be my companion for the foreseeable future.

Oh, I just remembered: “check to see whether commenting on brainwagon.org is fixed.” Next on my list.

Notes re: WordPress vs. Hugo

January 15, 2025 | My Projects | By: Mark VandeWettering

Back on May 2, 2024, I was aboard a plane heading toward a real vacation: ten days spent on a cruise and visiting friends and family in Florida. While on the plane, I jotted these notes in Markdown, detailing some of the reasons why I was considering converting my blog (now past twenty years old) into a site generated by a static site generator. Shortly after this trip, I was laid off from my job at Pixar Animation Studios, and so I haven’t been back to revisit my thinking. But in rereading this for the first time since I wrote it, it’s clear I had done some real thinking about it, and it was helpful in bootstrapping additional pondering. Without further comment, here are the notes as I crafted them back then…

Notes taken while on a plane…

As we speak, I’m sitting in a very cushy “MINT” level seat on a JetBlue flight, hurtling across the United States on my way from SFO to TPA by way of JFK. My wife Carmen bought me this special seat because I’m at the start of a ten-day vacation in celebration of my 60th birthday. This trip includes a visit with my son and his family, a cruise out of Port Canaveral, and some time at DisneyWorld. Truly, it should be a great trip.

Since I have this (very cushy) seat, with plenty of room, I’m actually able to comfortably use my laptop. Toward that end, I’ve decided to try to learn a bit more about static site generators. I’m aware of three or four utilities that serve the purpose.

  1. mkdocs — written in Python (which is a plus)
  2. Jekyll — which is written in Ruby, and is probably the most popular open source solution of its type
  3. hugo — which is written in Go, and which I decided to give a whirl

Luckily, this flight has (pretty good) wifi, and I was able to use apt to download and install the necessary bits of software and get it running.

During my bout with COVID-19, I had some time to sit and think about my dormant blog, brainwagon.org. A couple of things came to mind.

I really don’t care much for WordPress

I haven’t been doing necessary maintenance on my blog, and as a result it has gone offline a couple of times during the last six months, once requiring me to contact my ISP after a PHP upgrade caused it to bork because of a misconfigured plugin. Even remembering how to log into my server and dork around with this stuff seemed more complicated than I would have hoped.

The reality is that the total amount of text that I’ve written over the years is just around 20Mbytes, almost all of it written by me. While I suspect that during its heyday I may have had a modest number of regular readers, it never really formed a community. The overwhelming majority of comment traffic that I got on the blog was frankly comment spam. In fact, were it not for the combination of comment spam and the performance tax of running a complex database system fueled by one of the worst programming languages ever created, I suspect that I could run my blog entirely on a little Raspberry Pi Zero and a 4GB microSD card, hosted very nearly anywhere on the planet.

I’ve known this for a while. So… the question is why don’t I just do that?

That’s what this experimentation with hugo and other static site generators is all about.

Exporting the WordPress blog

Nevertheless, I didn’t want to lose my entire history of writing. While I didn’t think that keeping the ability of others to comment on my blog was necessary, I did want to continue to have a representation of some of the early work and exploration that I’ve done over my decades online. This meant being able to export the mass of WordPress XML into a more portable Markdown syntax, so I could integrate it into another static site generator.

The tool wordpress-export-to-markdown by lonekorean seemed to be just the thing. It took the XML dump of my WordPress blog and carefully reconstructed some quite reasonable looking markdown, as well as downloading all the linked images that I had. The result was about 600Mbytes that distilled my online musings over the past few years.
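As a toy illustration of what such an exporter does (this is not lonekorean’s tool, and the sample XML below is invented; real WXR dumps carry many more namespaces and fields), the core transformation is just walking the RSS `<item>` elements and emitting front matter plus body:

```python
import xml.etree.ElementTree as ET

# A tiny, made-up stand-in for a WordPress WXR export. Real exports
# include authors, categories, attachments, and several namespaces.
SAMPLE_WXR = """<?xml version="1.0"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <item>
      <title>Hello world</title>
      <pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate>
      <content:encoded>My first post.</content:encoded>
    </item>
  </channel>
</rss>"""

NS = {"content": "http://purl.org/rss/1.0/modules/content/"}

def items_to_markdown(wxr_text):
    """Return one Markdown document (front matter + body) per <item>."""
    root = ET.fromstring(wxr_text)
    posts = []
    for item in root.iter("item"):
        title = item.findtext("title")
        date = item.findtext("pubDate")
        body = item.findtext("content:encoded", namespaces=NS)
        posts.append(f"---\ntitle: {title}\ndate: {date}\n---\n\n{body}\n")
    return posts

for post in items_to_markdown(SAMPLE_WXR):
    print(post)
```

The real tool additionally rewrites image URLs and downloads the referenced files, which is where most of its value (and most of the 600Mbytes) comes from.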

One thing I did notice was that many of my image links were in fact broken. I had moved my blog a couple of times over the years, and it seems likely that at some point in the past I failed to properly move the images subdirectory and had lost a lot of early images. It remains to be seen whether the Wayback Machine may have cached some of this content and might be able to recover some of this, but that is the task for another day.

The brainwagon wiki

I briefly dabbled with using DokuWiki as part of brainwagon. All the complaints about the brittleness and overkill of PHP that I lodged against WordPress could equally be lodged against DokuWiki. I wanted a relatively simple place where I could arrange and organize various bits of information and links to other places on the web, but DokuWiki seemed to be another order of magnitude more complicated than I wanted. But it did have a number of useful features:

I generally liked the notion of using Markdown. I’m seriously old school. Around 40 years ago, I got my first
Unix account on a machine at the University of Oregon, and learned how to use the screen editor vi. Since then, I’ve seen editors come and go, but frankly I’ve never found that their enhanced capabilities were significant enough to make me wish to adopt them. The fifteen or so commands that I understand how to use in vi make editing text both fast and efficient, as long as the subject matter you are editing is more or less basic ASCII text. This reinforces the use of Markdown as the source language. You can concentrate on what you are writing, and leave “how it will look” decisions for later.

When I actually looked at the sum total of links that I had stashed in DokuWiki, it turned out the amount was actually pretty limited. Converting them over to markdown sufficient to integrate with Hugo, even by hand, should present no more than 15-20 minutes of work.

I don’t like WordPress Part II

There is another reason that I don’t much care for WordPress. It’s kind of shitty for people like myself who use the nominally “free” version.

In the time I’ve been on the web and nominally blogging, WordPress has gone from a basic open source blogging platform to big business. And since it is big business, it makes a lot of decisions about how to maximize value for the owners of the system, and spends relatively little time looking out for the little guy, or even asking whether the decisions it is making actually make the platform better for anyone. Decisions are made on the basis of whether a given change is likely to push someone from the free tier to the premium tier. Unfortunately, one of the ways that you can do that is to actually break the free tier. Breaking things is a vastly easier way to get people to send you money than delivering consistent, reliable software at the free tier and worthwhile, thoughtful additions to the premium tier.

I can hear some of you out there saying “but aren’t the free tier users just freeloaders?”.

No. No. No.

WordPress began as an open source project. This means that people contributed their work to the project with the hope of making a better blogging platform for others to use. But somewhere along the way, somebody began to notice that people would actually pay money for a blogging platform. And look, here is a platform which has begun to be popular, so a company formed around it. That didn’t seem bad. In fact, for years I thought that the involvement of companies in sponsoring and shepherding open source was a profoundly good thing.

But it’s clear I was mistaken. Recently Jeff Geerling wrote a thoughtful argument on this subject:

Corporate Open Source is Dead — Jeff Geerling

This strategy has a name: the rugpull.

It is known by another name which might be more familiar: the bait and switch.

It basically consists of the following:

  • Find an open source project that has an active community and which people find of value.
  • Build a product using that open source code base.
  • Accept new, important modifications under a “Contributor License Agreement” (CLA). After all, who wouldn’t want to contribute to a growing and useful open source project?
  • But CLAs aren’t open source. They exist only to subvert open source licenses by having you sign your rights away.
  • That’s the rug. Then, at some point when the corporate master wants to make more money, they pull the rug out from under you. The source code that contributors have been adding to the project to create value for users of the system is suddenly squirreled away from free access, and only people in some paid tier can access it.

Somebody in this scheme is subverting the ideals of the original project, but it’s not the people who are using the software for free. That is, after all, the original point of making open source in the first place. It is the corporations who are the freeloaders in this scenario: harvesting the work of individuals and communities to reap privatized profits.

Okay, that was an aside, back to my simple blog…

Back to static site generators.

Well, perhaps.

But I can’t help but think that the layers upon layers of stuff that have piled up over the years have resulted in a kind of diffusion of content. The total size of the software that is used to maintain my modest 600Mb of content is vastly larger than the content itself. To make even modest changes to WordPress or even WordPress themes requires significant knowledge in a bunch of domains, such as

  • knowledge of php
  • knowledge of SQL
  • knowledge of JavaScript

I know a little about almost all of these things, but none of them are my day job, and none of them have anything to do with the content that I’m interested in producing for my blog.

This is why static site generators appeal to me:

  • In general, they don’t rely on making your content “Turing Complete”. Your data is data, and data is usually simpler to deal with than programs.
  • The total size of the toolset used to generate sites is quite small. The compiled package for hugo is just 21Mbytes, and consists of a single executable, without any auxiliary data files.
  • If you wish to do version control, you can use conventional tools which are familiar to software developers, like git. While it seems a bit odd to complain that WordPress relies on a bunch of knowledge of tech, and then list using git for source control as a positive for hugo,

    I would submit that knowledge of source code control is a vastly more portable skill, and the simple fact is that there is nothing all that special about git. If you have knowledge of a particular source code control system, you can go ahead and use that. Over the years, I’ve used sccs, RCS, CVS, hg, perforce and git, and probably others that I’ve entirely forgotten. You could likely use any of those, or others if you wish. This is rather different than the use of php, SQL and JavaScript in WordPress.
  • Yes, hugo is written in Go, a language which is perhaps even less well known than php. But for the most part you don’t need to know anything about go to use hugo.
  • The data format that forms the basis of hugo is Markdown. Markdown is just ASCII files, which are easy for humans to read, write and modify, with any tools that the writer finds convenient. It is also information dense: the author generally spends more characters writing about what they wish to write about, and fewer about what it looks like. Decisions about what something looks like are generally made in a separate place. This enhances portability and longevity of data. Once my blog was converted into markdown, it became relatively simple to move and modify: I had a first pass of this blog configured, using a system I had never used before (hugo), in less than ten minutes while on an airplane and working on just four hours of sleep.
  • The use of a database for this kind of application seems like it’s probably overkill. Using files and filesystems to organize data seems like a much better notion, and requires less overhead.
  • If I wanted to move it to another system like mkdocs or Jekyll, it would likely take a similarly short period of time.
  • Even if every static site generator in existence went the way of the dodo tomorrow, it’s likely that a small team could write, from scratch in just a few hours of work, a generator that could read in my data and generate a new website. Such a system wouldn’t be as feature rich or efficient as these generators, but it would be possible, and much more straightforward than (say) writing something to ingest WordPress XML dumps.
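To make that last claim concrete, here is a deliberately tiny sketch of such a from-scratch generator (the file layout is hypothetical, and real generators add templates, themes, and front matter): it reads every `.md` file in a source directory, treats a leading “# ” line as the title, wraps the remaining lines in `<p>` tags, and writes matching `.html` files.

```python
import pathlib
import tempfile

def build_site(src_dir, out_dir):
    """A static site generator reduced to its essence."""
    src, out = pathlib.Path(src_dir), pathlib.Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for md in src.glob("*.md"):
        lines = md.read_text().splitlines()
        title = lines[0].lstrip("# ") if lines else md.stem
        paragraphs = "\n".join(
            f"<p>{line}</p>" for line in lines[1:] if line.strip()
        )
        html = (f"<html><head><title>{title}</title></head>\n"
                f"<body><h1>{title}</h1>\n{paragraphs}</body></html>")
        (out / md.with_suffix(".html").name).write_text(html)

# Example: build a one-post site in a temporary directory.
with tempfile.TemporaryDirectory() as tmp:
    src = pathlib.Path(tmp, "content")
    src.mkdir()
    (src / "hello.md").write_text("# Hello\n\nFirst post.")
    build_site(src, pathlib.Path(tmp, "public"))
    print(pathlib.Path(tmp, "public", "hello.html").read_text())
```

Everything a full generator adds (menus, RSS, taxonomies, themes) layers on top of exactly this loop, which is why rebuilding one from scratch is plausible in an afternoon.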

Baby steps toward bookbinding…

January 14, 2025 | My Projects | By: Mark VandeWettering

I have always been a fan of books, and the craft of bookbinding has been particularly interesting. I have been trying to gather the necessary (modest) tools to get started, and today I cleared a small part of my garage workbench and made a small pocket notebook.

Nothing too exciting, this just documents something I tried for the first time.

Link to an SSTV decoder for the Pi Pico

January 6, 2025 | My Projects | By: Mark VandeWettering

In years past, I had developed some (largely academic) interest in slow scan television, and had implemented encoders for a variety of standard modes, using very straightforward C which I made available via my github page. But that was just the encoding side, not the decoding side, which I spent some time thinking about, but which I didn’t take the time to implement.

Today I ran across a decoder which was implemented on the Pi Pico, which (in the form of a Pi Pico 2 W) costs only about $7. Combined with an inexpensive 320×240 TFT display and a couple of resistors, it can decode most of the images that my code could generate.

Bookmarked and saved for future reference.

SSTV decoder written for the Pi Pico

Cracking the Cryptic, January 2025 Patreon Challenge

January 5, 2025 | My Projects | By: Mark VandeWettering

I’m a bit of a puzzle fan. My wife and I do a daily gauntlet of puzzles, including Wordle, Quordle, Octordle, Stepdle, Worldle, and the New York Times Crossword Puzzles, both the mini and the regular one. It amounts to about an hour a day, with maybe a bit more on Saturday and Sunday, but it’s nice and relaxing, and as I get older, the fact that my times for these still appear to be coming down lends somewhat of an antidote to the notion that you can’t teach an old dog new tricks.

I’m also rather fond of Sudoku, and am a fairly regular watcher of the YouTube channel “Cracking the Cryptic”, which I probably got more into during the COVID pandemic. I have since become a Patreon supporter of the channel, which means that I can take part in the monthly Patreon Challenge, a series of custom designed puzzles just for Patreon members. This month’s challenge has a Hobbit theme, done by the talented setter Blobz, and consists of 19 puzzles that you must solve in order. It doesn’t hurt that my friend Jeff is also a Patreon member, so occasionally we get together via video conference and collaborate in solving puzzles, the hardest of which still usually takes us a couple of hours, even with our combined effort. But it’s good fun.

The puzzles in this sequence are ranked from one to four spider webs, with four spider webs being the most difficult. Puzzle #10, titled “Laketown”, is the only one ranked at difficulty four, so we decided to get together this morning and collaborate on its solution. It was enormously pleasurable, and we finished in slightly less than two hours.

But before I embarked upon it, I wondered if it would have been faster to just implement a brute force solver in Python. I had the basis of a very simple program that solved sudoku, which I wrote as part of Project Euler, a programming challenge/competition website that has hundreds of puzzles, much like the Advent of Code challenge I completed in December. Puzzle 96 was to write a Sudoku solver, and I still had the code. I solved it with a pretty rudimentary brute force solver with backtracking, consisting of 100 lines of very straightforward Python. It was not exactly fast, taking several minutes to grind through the 50 test cases that the puzzle presented, but I think it took me less than an hour to write (now several years ago), and it was pretty obvious how it works.
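For reference, this kind of brute-force backtracking solver looks something like the following (a fresh sketch of the standard approach, not my original Project Euler code): find an empty cell, try each digit that doesn’t conflict with its row, column, or 3×3 box, and backtrack on failure.

```python
def legal(grid, r, c, d):
    """Can digit d go at (r, c) under plain Sudoku rules?"""
    if d in grid[r]:                                   # row conflict
        return False
    if any(grid[i][c] == d for i in range(9)):         # column conflict
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)                # 3x3 box conflict
    return all(grid[br + i][bc + j] != d
               for i in range(3) for j in range(3))

def solve(grid):
    """Fill zeros in-place by backtracking; True if a solution exists."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for d in range(1, 10):
                    if legal(grid, r, c, d):
                        grid[r][c] = d
                        if solve(grid):
                            return True
                        grid[r][c] = 0                 # undo and try next
                return False                           # dead end
    return True                                        # no empty cells left
```

The appeal is exactly what the post describes: it is obvious how it works, and it took far less thought than it would take to make it fast.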

While I was waiting for our scheduled start time this morning, I wondered how difficult it would be to add the additional features it would need to solve this Laketown puzzle. In addition to the normal Sudoku rules, it had several “German Whisper” clues, where adjacent cells along green lines had to differ by at least 5, and several inequality clues, where certain cells had to be greater than the adjacent cells in specified directions. I didn’t bother making a general “compiler” for those, but just wrote a single “legal” function, which could be used to check whether a particular digit in a particular cell would be legal according to these constraints. This logic was not written in an efficient manner, and expanded the program to 225 lines. I set it running on the puzzle in my office just as Jeff pinged me to start our human effort.
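A hand-coded extra “legal” check of the sort described might look like this sketch. The constraint coordinates below are made up for illustration; the actual puzzle’s whisper lines and inequality clues are not reproduced here.

```python
# Made-up constraint positions, purely for illustration.
WHISPER_PAIRS = [((0, 0), (0, 1)), ((0, 1), (0, 2))]   # a "green line"
GREATER_THAN = [((1, 0), (1, 1))]                      # cell (1,0) > cell (1,1)

def extra_legal(grid, r, c, d):
    """Would placing digit d at (r, c) satisfy the variant constraints?"""
    # German Whisper: neighbors along a line must differ by at least 5.
    for (r1, c1), (r2, c2) in WHISPER_PAIRS:
        for (ra, ca), (rb, cb) in (((r1, c1), (r2, c2)),
                                   ((r2, c2), (r1, c1))):
            if (ra, ca) == (r, c) and grid[rb][cb] != 0 \
                    and abs(d - grid[rb][cb]) < 5:
                return False
    # Inequality: the first cell of each pair must exceed the second.
    for (r1, c1), (r2, c2) in GREATER_THAN:
        if (r1, c1) == (r, c) and grid[r2][c2] != 0 and d <= grid[r2][c2]:
            return False
        if (r2, c2) == (r, c) and grid[r1][c1] != 0 and grid[r1][c1] <= d:
            return False
    return True
```

Calling this alongside the plain-Sudoku legality test inside the backtracking loop is all the “compiler-free” approach amounts to; each new clue type is one more hard-coded block.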

The question I had: was the time I spent solving it by hand going to be less than the (estimated) ninety minutes it took me to implement the solver in Python? What would the runtime be?

After Jeff and I finished the puzzle in ~110 minutes, I went back to my desk. It had indeed solved the puzzle, but it took 54 minutes. It appears that it searched a significant fraction of the entire search space:

markv@victus:~/laketown$ time python puzzle.py
Lake Town
XXXXXXXXX [... answer deleted to preserve the puzzle for others ... ]
XXXXXXXXX
XXXXXXXXX
XXXXXXXXX
XXXXXXXXX
XXXXXXXXX
XXXXXXXXX
XXXXXXXXX
XXXXXXXXX

real 54m29.229s
user 54m29.157s
sys 0m0.010s

So, it appears that the time spent implementing the program was slightly longer than the time it took for me and my friend to collaborate on finding the human solution.

I suspect that were I to dedicate an afternoon to improving the efficiency of the search algorithm, I could speed this program up by at least two orders of magnitude, and maybe more, probably using Knuth’s Dancing Links ideas. I could probably make it less error prone and quicker to implement additional constraints.

But in any case, a fun Sunday. Hope you all are having a good New Year.

Goodbye to 2024…

January 1, 2025 | My Projects | By: Mark VandeWettering

Hey readers, this post will be a bit unusual, as instead of being about some kind of cool tech or gadget, it’s going to be about me, and where I find myself entering 2025.

2024 was a bit of a mixed bag. There were in fact a lot of good things about 2024, particularly how it began. I turned sixty, and treated myself to an awesome birthday cruise with my wife Carmen, my best friend Jeff and my sister Kristin. This was after an amazing adventure to Mazatlan to see the solar eclipse. I totally recommend being under the moon’s shadow at least once in your life. It was an emotional experience unlike anything I’ve ever experienced. You can review some of the pictures and video that I shot here.

But it wasn’t all great.

Perhaps the most dramatic change is that Pixar Animation Studios, my employer for over three decades, decided that my services (and the services of around one hundred eighty coworkers) were no longer required, and for the first time since I was 17 years old, I found myself unemployed. While I had begun to consider retirement, I had hoped to make my departure on my own terms. Instead, I found myself locked out of my account that afternoon, with people cancelling meetings with me, feeling as if I had returned home to find my wife had put all my stuff on the front porch, had the locks changed, and filed for divorce.

Carmen hasn’t filed for divorce, incidentally. We are doing well.

I’ve spent the last six months drawing up resumes and applying for jobs which have largely proven to be non-existent. It’s a bad time to be out of work in the film industry, especially if you don’t want to move overseas. The same forces which made Disney reduce their headcount are pretty much being expressed universally, and that makes finding a new job difficult.

But that isn’t all bad either. While I have not precluded the notion that I’m going to find a new job (hope springs eternal that something cool will cross my desk, and they will recognize my talent and experience) I’ve also begun to come to grips with the fact that financially I’m in pretty good shape, and perhaps retirement is in fact my future, so I’ve begun to prepare for that outcome. As I mentioned before, work has been part of my daily routine for over four decades, and trying to understand the change has been difficult. Prior to being let go, I had been working on trying to lose weight, mostly using the Noom app. I had lost forty+ pounds, which sadly I have found again in the time since July, along with a few extra. Part of my New Year resolution is to get back on track with that, and lose those same pounds that I’ve lost at least four times. That’s a bit frustrating.

On Christmas Day I had a bit of a bummer. I tripped in my living room and conked my head on the corner of a bookcase, which sent me to the ER for over twenty stitches. They should come out on January 2, and appear to be healing well. I have added “it’s pretty bad, we can see your skull” to the list of things that I don’t need to hear again on Christmas. Still, it was probably more frightening for Carmen than it was for me, and a CT scan revealed that no significant damage had been done to the grey matter, but they did note that it had begun to change in density, which is apparently a fairly common occurrence for people as they age. Great. My getting older can now be documented with advanced imaging.

We have also been struggling with a leak in the roof of the house, which is going to change into having our roof replaced, as it is near the end of its life. Sigh.

Oh, and I’m not especially pleased with the political situation either.

But, I’m making some headway:

  1. Back on noom to help control my diet.
  2. Working on a retirement plan/budget.
  3. While I miss the challenge of work, I no longer feel like I have to work to make them understand my value.
  4. Looking to downsize my home a bit, to make future changes less dramatic
  5. I’ve worked on removing myself from the anger machine that is most social media.
  6. I am still married to the woman I love more than anything.

I’ve got some other stuff in mind as well, but it’s all on a relaxed schedule.

I have no idea what 2025 will bring, but I am going to work on maintaining a positive attitude and have hope that even amidst change and turmoil, I’ll find some good things as well.

Hope you all do well in 2025. If you want to reach out to me, feel free either via my email or my accounts on Mastodon or Blue Sky.

I’m off to have lunch with my good friend Tom. Best wishes to you all.

Trying to understand the drama around WordPress…

December 23, 2024 | My Projects | By: Mark VandeWettering

I’ve used the open source version of WordPress for some twenty years. In general, I’ve been pretty happy, although not without some misgivings, mostly technological, but increasingly ideological as well. There has been a trend over the last few years where the conflict between open source software and commercial entities has become seemingly problematic. For the modest purpose of my blog, I’m merely considering finding a less problematic bit of software to use, but I found this video on YouTube that attempts to explain the controversy. In 2025, I think finding a less popular/less problematic platform will be something that I try to do. What do you all think?

Another chapter in the “I’m dimwitted” theme from Advent of Code, 2024…

December 19, 2024 | My Projects | By: Mark VandeWettering

Warning, spoilers ahead for those who are still interested in doing the problems themselves.

Part 1 of Day 19 was pretty simple, really. You could go ahead and read the specification yourself, but basically you have a relatively large number of text patterns which consist of jumbles of a relatively small number of characters (examples include “gwugr” and “rgw”) and you need to find which of a set of longer sequences (like “guuggwbugbrrwgwgrwuburuggwwguwbgrrbbguugrbgwugu”) can be constructed out of repetitions of some combination of these patterns. In the context of the problem, the long sequences are “towels” (read the problem description) and consist of any number of “stripe patterns” (hereafter referred to as patterns). Some towels will not be constructable out of the given stripe patterns. Part 1 was to simply determine which could, and which could not.

It’s the kind of problem that is not particularly well suited for humans, but which has been studied extensively for decades. The theoretical framework is called “regular expressions” or “Deterministic Finite Automata”. When I took a compiler class back in the mid 1980s, we wrote our own lexical analyzer generator that used techniques which were (from memory) well described in the classic text Compilers: Principles, Techniques and Tools by Aho, Sethi and Ullman, aka “the Dragon Book”. I still have my copy.

Anyway, regular expressions. Python (my chosen dagger which I use to approach all the Advent of Code problems) of course has a well developed regular expression library, which I thought might be a clever way to solve Part 1. In less than 12 minutes, I had this code:

#!/usr/bin/env python

import re

# data was in the format of a big chunk of small patterns,
# followed by list of towels we need to construct.

data = open("input.txt").read()

# find the patterns, and build a regular expression

patterns, towels = data.split("\n\n")
patterns = patterns.split(", ")

# "any number of any of the patterns, consuming the entire string."
regex = '^(' + '|'.join(patterns) + ')+$'

print(regex)

h = re.compile(regex)

c = 0
for design in towels.rstrip().split("\n"):
    if h.match(design):
        c += 1

print(c, "designs are possible")

It’s simple, and it works. Because it leverages the “re” standard library, it might be expected to run quickly, and it does: on my cheap HP desktop from Costco, 18ms is enough to process all the input data and give the correct answer. I made one minor mistake, forgetting to add the caret at the beginning of the regex and the dollar sign at the end to ensure the entire string is consumed, but twelve minutes in I had completed Part 1. Not the fastest (I was the 1600th to complete this phase of the puzzle, which isn’t bad by my standards), but I thought it credible.
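That anchoring slip is easy to demonstrate: `re.match` only requires the pattern to match at the start of the string, not to consume all of it. A quick sketch using the sample patterns from the problem:

```python
import re

# The stripe patterns from the worked example in the problem.
patterns = ["r", "wr", "b", "g", "bwu", "rb", "gb", "br"]

unanchored = re.compile("(" + "|".join(patterns) + ")+")
anchored = re.compile("^(" + "|".join(patterns) + ")+$")

# "bbrgwb" cannot be tiled from the patterns (the trailing "wb" fails),
# but an unanchored match happily consumes just the prefix "bbrg".
print(bool(unanchored.match("bbrgwb")))  # True -- wrong answer
print(bool(anchored.match("bbrgwb")))    # False -- correct
```

(Strictly speaking `re.match` already anchors at the start, so it is the trailing `$` that matters here; `re.fullmatch` would do the same job without explicit anchors.)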

And here’s where I think I went wrong. In Part 2, we are asked to determine the number of possible ways that each towel could be constructed, and add them up. My code from Part 1 was pretty useless for figuring that out. Because I am a smart guy with tons of experience who has read the Dragon Book and Russ Cox’s excellent summary of the theory of fast pattern matching, and perhaps because of the late hour (back in graduate school I made a rule not to write code after 10 pm, because my code would always seem stupid the next morning over coffee), I embarked upon a complex, pointless, meandering exploration, trying to recall how I could use the same sort of techniques I used oh-so-many years ago to solve this efficiently. I’ll outline the basic notion, to tell you just how deep in the weeds I was. I didn’t get this to work, but I do think it is “sound” and could have been made to complete the second part.

Basically, the notion is to use a deterministic finite automaton to do the heavy lifting. (I’m doing this from memory, so please be patient if I don’t use the most precise terminology.) We are going to generate a state machine that does all the work. To make it somewhat simpler, let’s look at the basic example from the original problem description:

r, wr, b, g, bwu, rb, gb, br

brwrr
bggr
gbbr
rrbgbr
ubwu
bwurrg
brgr
bbrgwb

When we begin, we are in “state 0”, the start state. We transition to other states by reading one character at a time from the input. If we end up in an accepting state with no more input left to process, then we match; otherwise we don’t, and return failure.

We can build our transition table by looking at the “first” and “follow” sets (I didn’t have the Dragon book in front of me, but here is the idea):

At the start of input we are in State 0. If we look at the first character of any of the patterns, we see that they can be ‘r’, ‘w’, ‘g’, ‘b’. If we see a ‘u’, then we have no match, and would go to a “fail state”, and return zero.

But let’s say we are processing the input “brgr”. We are in state S0, looking at the character “b”. Let’s create a state S1, meaning “transitioned from the start state, having seen a b”. What are the possible patterns we could still be in? We could have matched the “b” pattern, and be back in a state where all patterns are possible again. Or we could be inside the “bwu” pattern, having read its b. Or we could be inside the “br” pattern. So S1 means: we just completed the “b” pattern, or we are partway through “bwu”, or partway through “br”.

Now, what transitions happen after S1? Well, we create a new state for each of the possibilities. If we completed the “b” pattern, we are back in a state which looks identical to S0, so we may as well reuse it. If we read a “w”, we know we can only be inside the “bwu” pattern. Let’s ignore that for now. If we read an “r”, we could have matched the “r” pattern, or we could have completed the “br” pattern. So we create another state for that…
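For what it’s worth, the state-set idea can be sketched without ever materializing the DFA, by simulating the nondeterministic machine directly: track the set of (pattern, position) pairs that are still alive after each character. This is my reconstruction of the approach for illustration, not the code I actually wrote that night:

```python
def matches(towel, patterns):
    # Each live state is (pattern, index): we have matched pattern[:index].
    # Index 0 means "a fresh pattern could start here".
    live = {(p, 0) for p in patterns}
    for ch in towel:
        nxt = set()
        for p, i in live:
            if p[i] == ch:
                if i + 1 == len(p):
                    # Completed a pattern: every pattern may start again.
                    nxt |= {(q, 0) for q in patterns}
                else:
                    nxt.add((p, i + 1))
        live = nxt
        if not live:
            return False
    # A fresh-start state after consuming everything means some
    # pattern completed exactly at the end of the towel.
    return any(i == 0 for _, i in live)
```

This runs in time proportional to the input length times the number of live states, which is exactly the bookkeeping that the DFA construction tries to precompute.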

This is the rabbit hole I fell down. I am pretty sure it could be made to work, but it was complicated, and I wasted way too much time (see “sunk cost fallacy” for why that might happen) trying to figure out how to make it work, including how to track the number of different ways each pattern was matched.

It was dumb.

In hour three, I abandoned that train of thought and wrote the following simple code to test my understanding of the problem, and see how it behaved on the simple test data.

def count(t):
    # naive recursion: number of ways towel t can be tiled from patterns
    cnt = 0
    if t == '':
        return 1
    for p in patterns:
        if t.startswith(p):
            cnt += count(t[len(p):])
    return cnt

It burns through the simple test data in about 20ms. Of course I tried it on the real data and…

Well, it runs for a long time. In fact, it’s pretty much exponential in the length of the towels, and the large number of patterns doesn’t help either.

But suddenly I heard harps, and angels, and light shone down on me from above, and the seraphim all proclaimed how stupid I was. Ten seconds and a two-line change later, I had the answer.

#!/usr/bin/env python

# okay, we can't leverage the re library this time..

data = open("input.txt").read()

patterns, towels = data.split("\n\n")

patterns = patterns.split(", ")

towels = towels.rstrip().split("\n")

from functools import cache

@cache
def count(t):
    # memoized: number of ways towel t can be tiled from patterns
    if t == '':
        return 1
    cnt = 0
    for p in patterns:
        if t.startswith(p):
            cnt += count(t[len(p):])
    return cnt

total = 0
for t in towels:
    cnt = count(t)
    print(cnt, t)
    total += cnt

print(total)

I think if I had half the knowledge I’ve accumulated over the years, I would have solved this immediately. But instead I solved it 2h46m in, netting me a ranking of just 6847. An opportunity squandered.

Had I stepped back and actually looked at the simplicity of the problem, and remembered the use of caching/memoization (which I considered earlier, but without clarity of thought) I would have seen it for the simple problem it was.
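`functools.cache` isn’t doing anything magical here, either; the same speedup comes from an explicit dictionary mapping each towel suffix to its count, which makes the memoization visible. A sketch, with the patterns passed in rather than taken from a global:

```python
def count_ways(t, patterns, memo=None):
    # memo maps a towel suffix -> number of ways to tile it.
    if memo is None:
        memo = {}
    if t == "":
        return 1
    if t in memo:
        return memo[t]
    total = 0
    for p in patterns:
        if t.startswith(p):
            total += count_ways(t[len(p):], patterns, memo)
    memo[t] = total
    return total
```

Each distinct suffix is computed once, so the exponential blowup collapses to roughly (towel length) × (number of patterns) work per towel.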

Four decades of programming in, and I’m still learning. And re-learning. And un-learning.

Happy holidays all.

I’m dimwitted, or Day 13 of the Advent of Code challenge…

December 13, 2024 | My Projects | By: Mark VandeWettering

As part of my daily puzzling in December, I’ve been engaged in the Advent of Code 2024 challenge. This is the kind of thing that sane people only do when prepping for job interviews (which I suppose I could be doing), but I do it more for fun, in some hope that I’ll buoy up my ego a bit by proving that “I still got it.”

For the first few days, I didn’t even realize that there was a competitive element to this challenge, but it turns out that they keep a leaderboard and award points to the first 100 people who solve a particular two-part problem. Given that a few tens of thousands of people seem to be engaged in this activity, I suspect my chance of scoring even a single point is vanishingly small, but for the last couple of days I decided to give it a whirl, starting promptly at 9:00 PM Pacific when the next day’s puzzle is released.

You can read the puzzle description for day 13 on their website.

Here is where I went awry, right from the very start. I read this:

The cheapest way to win the prize is by pushing the A button 80 times and the B button 40 times. This would line up the claw along the X axis (because 80*94 + 40*22 = 8400) and along the Y axis (because 80*34 + 40*67 = 5400). Doing this would cost 80*3 tokens for the A presses and 40*1 for the B presses, a total of 280 tokens.

This sent me down a complete rabbit hole, which took me the better part of an embarrassing four hours (and a night’s sleep) to rectify. I had convinced myself that there were potentially multiple solutions to this, and therefore I needed to treat it as a Diophantine equation. And, because I’m that kind of guy, I resorted to playing around with it on that basis using Python’s sympy.solvers.diophantine module.

I eventually solved it using that library, but because it is kind of a complicated module, it took me a lot of time, and had a lot of false starts and rabbit holes.

It’s simpler, way simpler than that. And I’m positively idiotic for considering otherwise.

Each potential move of the crane game arm moves the arm to the right and up. If we call the amounts that Button A moves the arm a_x and a_y, and for Button B b_x and b_y, then we can think of each button push as a vector. Since vector addition is commutative, it doesn’t matter what order you press the buttons in. Let’s say we do all the A presses first, then all the B presses. If we end up at the target t_x, t_y after all that, then we win.

All the A steps occur along a line through the origin with slope a_y / a_x. We can reverse the direction of all the B steps and start them at the target location, giving a second line through the target with slope b_y / b_x.

Clearly, unless they are coincident, they can only cross at a single point. There is no “optimization” because there can only be a single solution. If they intersect at a point on the integer lattice, then the game has a solution. Doh.

It’s still convenient to use sympy to avoid doing the algebra by hand, cancelling stuff out on paper and transcribing it into code, but it’s not rocket science even if you had to do it by hand.
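By hand, it reduces to Cramer’s rule on a two-by-two linear system, plus a check that the solution is a nonnegative integer. A sketch (the function name is mine; the determinant-zero case of coincident or parallel lines doesn’t occur in the puzzle input and is simply skipped):

```python
def cost(ax, ay, bx, by, tx, ty):
    # Solve a*ax + b*bx = tx and a*ay + b*by = ty by Cramer's rule,
    # returning the token cost 3*a + b, or 0 if there is no solution.
    det = ax * by - ay * bx
    if det == 0:
        return 0  # coincident/parallel lines: not handled in this sketch
    a, ra = divmod(tx * by - ty * bx, det)
    b, rb = divmod(ax * ty - ay * tx, det)
    if ra or rb or a < 0 or b < 0:
        return 0  # no nonnegative integer solution on the lattice
    return 3 * a + b
```

On the first sample machine this gives a = 80, b = 40, for the stated 280 tokens.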

Spoiler: here’s my revised code to do it.

#!/usr/bin/env python

import re

data = """Button A: X+94, Y+34
Button B: X+22, Y+67
Prize: X=8400, Y=5400

Button A: X+26, Y+66
Button B: X+67, Y+21
Prize: X=12748, Y=12176

Button A: X+17, Y+86
Button B: X+84, Y+37
Prize: X=7870, Y=6450

Button A: X+69, Y+23
Button B: X+27, Y+71
Prize: X=18641, Y=10279""".split("\n\n")

# the sample data above is kept for reference; the real input replaces it here
data = open("input.txt").read().rstrip().split("\n\n")

def parse_data(d):
    d = d.split("\n")
    m = re.match(r"Button A: X\+(\d+), Y\+(\d+)", d[0])
    ax, ay = m.group(1), m.group(2)
    m = re.match(r"Button B: X\+(\d+), Y\+(\d+)", d[1])
    bx, by = m.group(1), m.group(2)
    m = re.match(r"Prize: X\=(\d+), Y\=(\d+)", d[2])
    tx, ty = m.group(1), m.group(2)
    return int(ax), int(ay), int(bx), int(by), int(tx), int(ty)

coins = 0

from sympy import solve, symbols, Eq

def mysolve(d):
    global coins
    ax, ay, bx, by, tx, ty = parse_data(d)
    tx += 10000000000000
    ty += 10000000000000

    # let sympy do all the heavy lifting

    a, b = symbols("a, b", integer=True)
    eq1 = Eq(a * ax + b * bx, tx)
    eq2 = Eq(a * ay + b * by, ty)
    sol = solve((eq1, eq2))

    if sol != []:
        coins += 3 * sol[a] + sol[b]

for d in data:
    mysolve(d)

print(coins)

If nothing else, it did make me dust off my knowledge of the simple bits of sympy, but I feel like an idiot. Note to self: don’t read too much into the wording of puzzles like this; they may be designed to mislead as much as to illuminate.

Happy Holidays to all.

An hour of Meshtastic traffic on the Bay Area Mesh…

December 9, 2024 | My Projects | By: Mark VandeWettering

This will be a bit of a rambling technical ride on a particularly nerdy topic, so buckle up (or bail out now while you still can.)

I’ve been interested in Meshtastic for quite some time. It promises to be a decentralized mesh network, independent of any kind of internet/cell service, over which users exchange short text messages using inexpensive radios, usually accessed via WiFi or Bluetooth from an application running on their cell phone (but which does not use the cell network).

The underlying technology is based upon a radio technology called LoRa, which uses spread-spectrum techniques to permit low-power, long-range radio communications. Nodes are connected in a mesh, so that intermediate nodes can forward messages across multiple hops. It uses license-free radio spectrum in the 900 MHz band in the United States.

The hardware that can connect you to this network is pretty cheap, ranging from the low end of just under $10 to about $30 or a bit more for fully integrated systems like the Lilygo T-Deck which includes a keyboard, and means you don’t even need to access it via a cell phone app.

I’ve got a variety of different options, including the Xiao S3 above, Heltec V3 modules and a T1000-E from Seeedstudio.

I live in the SF Bay Area, where a local regional mesh is organized via bayme.sh. It’s hard to judge how many nodes are active (more on this later), but live maps available on the Internet show that it probably numbers around 250 nodes at any given moment. This makes it seem like a pretty active group, and I’ve attended a few live presentations at the Maker Faire and at some of the hacker groups in the area.

But here enters my problem: if you examine the map on the right, you’ll see that there are a lot of nodes down in the flats surrounding Emeryville and Oakland, and even a couple in Concord. But I live in the area labeled “Pinole Valley Watershed”. One thing that I haven’t mentioned is that the frequencies used by the Meshtastic network are pretty much strictly line of sight. If there is a clear path (no trees, buildings, or mountains), then LoRa signals can easily travel for kilometers at very low power. But with any obstructions, it becomes nearly impossible to establish a link. And of course that presents a problem for me.

I live in one of the many little basins that dot the Pinole Valley Watershed, rather near the northern end of the San Pablo Dam reservoir. Thus establishing line of sight to any of the existing sites is pretty much impossible.

I had a similar problem when I became interested in another wireless network technology, the AREDN network which uses more conventional WiFi hardware on licensed ham radio bands (perhaps I’ll write up more about my experiences with that at some point). But AREDN allows you to “tunnel” traffic across a normal Internet link. This is viewed as suboptimal: clearly one of the purposes of AREDN (and Meshtastic) is to create networks which are independent of any commercial infrastructure. But the AREDN community generally views such things rather pragmatically: it’s hard to get people to invest time, money and effort into creating nodes which extend the range and availability of the network without getting them linked in, and for people like myself, that’s hard to do over RF links. Internet linking allows people in remote “islands” to participate in the network, and can give some incentives to building out the network while growing the expertise and enthusiasm of individuals.

Meshtastic (or perhaps more properly, the Bay Area group) takes a dimmer view of internet linking, even though it is possible. The Meshtastic firmware isn’t a full TCP/IP stack (like AREDN) but it allows messages to be transmitted to an MQTT server, and nodes can even accept messages from such servers and broadcast them over RF. But in general, the latter is entirely discouraged, for a variety of reasons:

  1. It is viewed as “less pure”, and generally just thought of rather dimly.
  2. The Internet clearly has a much higher capacity than the underlying RF links, and so it is possible for an Internet feed to flood messages onto the RF network, making it impossible to exchange messages on the RF local nets.
  3. This can be exacerbated by having lots of poorly configured nodes. Often people configure nodes as ROUTER (gateway) nodes which would be much better configured as CLIENT (edge) nodes.
  4. There is simply a lack of really practical information to help people. While documentation exists, it’s not written in a way which really helps people understand the functioning of the network.

Whew, this is getting long.

On to my current tinkering.

While I am still working on getting some RF nodes running, I thought I might look at tracking statistics for the traffic on the bayme.sh MQTT server. This is the node which feeds the various mapping efforts and the like, and in theory should give me a good view of the traffic occurring across the region. To do this, I wanted to write some Python code to subscribe to that server and decode/aggregate information about the traffic it hears. It was a bit more complicated than I had hoped, but Meshtastic does have a Python module which was helpful, and extracting basic information from that, I tinkered the following together:

#!/usr/bin/env python

import sys
import paho.mqtt.client as mqtt
import json
import base64
import textwrap
from meshtastic.protobuf import mqtt_pb2, mesh_pb2, portnums_pb2, telemetry_pb2
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.backends import default_backend


app_dict = {
        portnums_pb2.ADMIN_APP : 'ADMIN_APP',
        portnums_pb2.AUDIO_APP : 'AUDIO_APP',
        portnums_pb2.DETECTION_SENSOR_APP : 'DETECTION_SENSOR_APP',
        portnums_pb2.IP_TUNNEL_APP : 'IP_TUNNEL_APP',
        portnums_pb2.MAP_REPORT_APP : 'MAP_REPORT_APP',
        portnums_pb2.NEIGHBORINFO_APP : 'NEIGHBORINFO_APP',
        portnums_pb2.NODEINFO_APP : 'NODEINFO_APP',
        portnums_pb2.PAXCOUNTER_APP : 'PAXCOUNTER_APP',
        portnums_pb2.POSITION_APP : 'POSITION_APP',
        portnums_pb2.POWERSTRESS_APP : 'POWERSTRESS_APP',
        portnums_pb2.PRIVATE_APP : 'PRIVATE_APP',
        portnums_pb2.RANGE_TEST_APP : 'RANGE_TEST_APP',
        portnums_pb2.REMOTE_HARDWARE_APP : 'REMOTE_HARDWARE_APP',
        portnums_pb2.REPLY_APP : 'REPLY_APP',
        portnums_pb2.ROUTING_APP : 'ROUTING_APP',
        portnums_pb2.SERIAL_APP : 'SERIAL_APP',
        portnums_pb2.SIMULATOR_APP : 'SIMULATOR_APP',
        portnums_pb2.STORE_FORWARD_APP : 'STORE_FORWARD_APP',
        portnums_pb2.TELEMETRY_APP : 'TELEMETRY_APP',
        portnums_pb2.TEXT_MESSAGE_APP : 'TEXT_MESSAGE_APP',
        portnums_pb2.TEXT_MESSAGE_COMPRESSED_APP : 'TEXT_MESSAGE_COMPRESSED_APP',
        portnums_pb2.TRACEROUTE_APP : 'TRACEROUTE_APP',
        portnums_pb2.UNKNOWN_APP : 'UNKNOWN_APP',
        portnums_pb2.WAYPOINT_APP : 'WAYPOINT_APP',
        portnums_pb2.ZPS_APP : 'ZPS_APP'
    }

# The public bayme.sh MQTT broker and its published credentials
MQTT_BROKER = "mqtt.bayme.sh"
MQTT_PORT = 1883
MQTT_USERNAME = "meshdev"
MQTT_PASSWORD = "large4cats"
MQTT_TOPIC = "msh/US/bayarea/#"  # Subscribe to all Meshtastic topics

default_key = "1PG7OiApB1nwvP+rz05pAQ==" # AKA AQ==


def on_connect(client, userdata, flags, rc, properties=None):
    if rc == 0:
        print("CONNECTED")
        client.subscribe(MQTT_TOPIC)

def decode_encrypted(mp):
    try:
        kb = base64.b64decode(default_key.encode("ascii"))
        nonce_packet_id = getattr(mp, "id").to_bytes(8, "little")
        nonce_from_node = getattr(mp, "from").to_bytes(8, "little")
        nonce = nonce_packet_id + nonce_from_node

        cipher = Cipher(algorithms.AES(kb), modes.CTR(nonce), backend=default_backend())
        decryptor = cipher.decryptor()
        db = decryptor.update(getattr(mp, "encrypted")) + decryptor.finalize()
        data = mesh_pb2.Data()
        data.ParseFromString(db)
        mp.decoded.CopyFrom(data)
    except Exception as e:
        print(f"DECRYPT FAILURE: {e}")


port_dict = dict()

def on_message(client, userdata, msg, properties=None):
    try:
        se = mqtt_pb2.ServiceEnvelope()
        se.ParseFromString(msg.payload)
        mp = se.packet
        if mp.encrypted:
            decode_encrypted(mp)

        if not mp.HasField("decoded"):
            return

        print(mp)

        port_dict[mp.decoded.portnum] = port_dict.get(mp.decoded.portnum, 0) + 1

        if mp.decoded.portnum == portnums_pb2.TEXT_MESSAGE_APP:
            print("TEXT_MESSAGE_APP")
            try:
                tp = mp.decoded.payload.decode("utf-8")
                for l in str(tp).split("\n"):
                    print(f"    {l}")
            except Exception as e:
                print(f"PROBLEM DECODING TEXT_MESSAGE_APP {e}")
        elif mp.decoded.portnum == portnums_pb2.NODEINFO_APP:
            print("NODEINFO_APP")
            info = mesh_pb2.User()
            try:
                info.ParseFromString(mp.decoded.payload)
                for l in str(info).split("\n"):
                    print(f"    {l}")
            except Exception as e:
                print(f"PROBLEM DECODING NODEINFO_APP {e}")
        elif mp.decoded.portnum == portnums_pb2.POSITION_APP:
            print("POSITION_APP")
            pos = mesh_pb2.Position()
            try:
                pos.ParseFromString(mp.decoded.payload)
                for l in str(pos).split("\n"):
                    print(f"    {l}")
            except Exception as e:
                print(f"PROBLEM DECODING POSITION_APP {e}")
        elif mp.decoded.portnum == portnums_pb2.ROUTING_APP:
            print("ROUTING_APP")
            route = mesh_pb2.Routing()
            try:
                route.ParseFromString(mp.decoded.payload)
                for l in str(route).split("\n"):
                    print(f"    {l}")
            except Exception as e:
                print(f"PROBLEM DECODING ROUTING_APP {e}")
        elif mp.decoded.portnum == portnums_pb2.TELEMETRY_APP:
            print("TELEMETRY_APP")
            telemetry = telemetry_pb2.Telemetry()
            try:
                telemetry.ParseFromString(mp.decoded.payload)
                for l in str(telemetry).split("\n"):
                    print(f"    {l}")
            except Exception as e:
                print(f"PROBLEM DECODING TELEMETRY_APP {e}")
        # we could do more
        # but for now...
    except json.JSONDecodeError:
        print(f"Error decoding JSON: {msg.payload}")
    except Exception as e:
        print(f"An error occurred: {e}")

    port_dict_view = [ (v, k) for k, v in port_dict.items() ]
    port_dict_view.sort(reverse=True)
    for v, k in port_dict_view:
        print(f"{app_dict[k]} ({k}): {v}")
    print()




client = mqtt.Client(protocol=mqtt.MQTTv5, callback_api_version=mqtt.CallbackAPIVersion.VERSION2)
client.username_pw_set(MQTT_USERNAME, MQTT_PASSWORD)
client.on_connect = on_connect
client.on_message = on_message

client.connect(MQTT_BROKER, MQTT_PORT, 60)

client.loop_forever()

It’s not perfect, or elegant, but it embodies some of the very basics you’d need to receive and decode messages from the MQTT server. I let it run for an hour late on Sunday night, and it recorded 791 messages from the server. They broke down into six different “applications”, with the following distribution:

NODEINFO_APP (4): 318
TELEMETRY_APP (67): 281
POSITION_APP (3): 148
NEIGHBORINFO_APP (71): 28
MAP_REPORT_APP (73): 13
STORE_FORWARD_APP (65): 3

NODEINFO_APP packets are used to transmit information about individual nodes. Of the 318 sent, 96 distinct nodes were logged. Information for a (randomly chosen) node might look like this:


NODEINFO_APP
    id: "!336a194c"
    long_name: "Capt Amesha"
    short_name: "CA"
    macaddr: "d\3503j\031L"
    hw_model: HELTEC_V3
    role: ROUTER

Scanning the log, it appears that 42 such nodes are configured with a role of ROUTER, which seems… interesting.

Further scraping that information yields the following breakdown on types of hardware used:

     94     hw_model: RAK4631
     54     hw_model: HELTEC_V3
     42     hw_model: T_ECHO
     34     hw_model: STATION_G2
     33     hw_model: TBEAM
     12     hw_model: HELTEC_WSL_V3
     10     hw_model: HELTEC_V2_1
      8     hw_model: T_DECK
      8     hw_model: LILYGO_TBEAM_S3_CORE
      6     hw_model: HELTEC_WIRELESS_PAPER
      5     hw_model: TLORA_T3_S3
      4     hw_model: PORTDUINO
      3     hw_model: TRACKER_T1000_E
      2     hw_model: SEEED_XIAO_S3
      2     hw_model: DIY_V1
      1     hw_model: TLORA_V2_1_1P6

Not sure what to make of this data yet, but at least I can gather it and use it to think more about the state of the network and how it’s configured, including the total number of messages sent and the locations of nodes, with the possibility of finding misconfigured nodes.

I will probably continue to tinker with this, most immediately by creating an SQLite database to log all this information and allow more general queries over time.
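That logging step is small enough to sketch now; something like the following schema would do, though the table and column names are just my guesses at what will prove useful:

```python
import sqlite3

# One row per decoded packet; timestamps stored as ISO-8601 text.
conn = sqlite3.connect("mesh.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS packets (
        received  TEXT,     -- ISO timestamp when we heard it
        node_id   TEXT,     -- sender, e.g. "!336a194c"
        portnum   INTEGER,  -- numeric application id from the protobuf
        app       TEXT,     -- readable name, e.g. "NODEINFO_APP"
        payload   TEXT      -- decoded payload, stringified
    )
""")

def log_packet(received, node_id, portnum, app, payload):
    conn.execute(
        "INSERT INTO packets VALUES (?, ?, ?, ?, ?)",
        (received, node_id, portnum, app, payload),
    )
    conn.commit()
```

With that in place, the per-application tallies above become a one-liner: `SELECT app, COUNT(*) FROM packets GROUP BY app`.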

And I will probably work on getting a solar powered node at the top of the hill on my property, in the hope of creating a resource for my neighborhood.

All for now, hope you all are having a good December. Feel free to comment if you have any hints/suggestions.

Another 3D stamp: QR codes

November 20, 2024 | My Projects | By: Mark VandeWettering

After tinkering with making a 3D stamp yesterday, I thought that maybe I would tinker together a stamp for the QR code that sends you to my resume-ish site mvandewettering.com. I had used the qrcode library in Python to generate codes before, but it wasn’t clear to me how to use it to generate a vector format that would merge well with my 3D printing workflow. In the end, I decided to just generate the image using the qrcode library, and then construct a rectangular prism for each horizontal run of dark pixels. I thought it would be simpler to just write out the individual polygons in ASCII STL format directly. STL can only handle triangles, so I output two triangles per face, resulting in 12 triangles for each rectangular prism. The main code looks pretty straightforward.

with open("qrcode.stl", "w") as f:
    print("solid qrcode", file=f)
    for y in range(img._img.height):
        s = None
        for x in range(img._img.width):
            p = img._img.getpixel((x, y))
            if p == 0:
                if s is None:
                    s = x
            elif p == 255:
                if not s is None:
                    output_rect(s, x, y, y+1, f)
                    s = None
        if not s is None:
            output_rect(s, x, y, y+1, f)
    output_rect(0., img._img.width, 0., img._img.height, f, minz=-1, maxz=0)
    print("endsolid qrcode", file=f)

The output_rect code is tedious, but not especially complicated.

def output_rect(minx, maxx, miny, maxy, f, minz=0, maxz=1):
    print("facet normal 0 0 1", file=f)
    print("outer loop", file=f)
    print(f"vertex {minx} {miny} {maxz}", file=f)
    print(f"vertex {maxx} {miny} {maxz}", file=f)
    print(f"vertex {maxx} {maxy} {maxz}", file=f)
    print("endloop", file=f)
    print("endfacet", file=f)

    print("facet normal 0 0 1", file=f)
    print("outer loop", file=f)
    print(f"vertex {minx} {miny} {maxz}", file=f)
    print(f"vertex {maxx} {maxy} {maxz}", file=f)
    print(f"vertex {minx} {maxy} {maxz}", file=f)
    print("endloop", file=f)
    print("endfacet", file=f)

    # ... five more similar copies of the code to handle the other five faces

It took me a few trials to get the triangle ordering consistent and the normals specified properly. It wasn’t clear from any documentation I had what “proper” means, but I found that specifying vertices in counter-clockwise order as viewed from the outside seemed to work well. Early versions confused Cura substantially, even when it loaded them. But eventually I got the following image, colored yellow, which is apparently what Cura uses to indicate the model is “good”. I made sure to mirror the image left to right so that I could use it as a stamp.

I sized it to about 1.5 inches and again printed it in PLA. I gave it a light sand with 220-grit sandpaper. I haven’t printed a proper handle for this one, so I plunked it down on an ink pad and just pressed it into place by hand.

Yeah, it isn’t quite good enough to actually work. I think a little additional sanding might help, as well as just getting a better inkpad. I am also wondering whether giving a small cylindrical deformation would make it easier to ink, as the pressure would be concentrated. I’ll tinker with this some more when I’ve had coffee. I suspect that benefitting from some other people’s experience in doing this would be good, so some youtube viewing is probably in my future. I also want to try using TPU instead of PLA to see how that would work as well.

I also could try to use a qrcode mode which is larger/has higher amounts of error correction. But I kind of want to keep this reasonably small to make it convenient and easy to use.

Hope you all are having a good day!

3D printing an ink stamp, or “Welcome to Stampy Town, Population Five!”

November 19, 2024 | 3D printing | By: Mark VandeWettering

Apologies to Hermes Conrad.

Further apologies to those who won’t get this Futurama quote.

During COVID, I spent some time in my shop doing more woodworking. At the time I was trying to figure out how I could sign the work that I did, mostly for fun rather than ego (my woodworking skills remain modest at best.) I had read a number of articles online where people designed logos using 3D printing, and had them printed in metal, which I thought was pretty cool. At the time I used OpenSCAD to design a simple logo of my initials, and sent it off to China for 3D printing. Several weeks later I got it back. It was one inch in diameter, and had a very simple version of my initials.

I had originally cast it with a post on the back, with the idea that I would use a die to cut a screw thread on it, and then I would make a holder for it. I found that the sintered metal that I chose didn’t hold the thread very well, so to use it I actually just hold it in some pliers and heat it with a torch. It takes a bit of practice to get the heat just right, but it has worked pretty well. I did find that sanding the front surface helped a bit, but the sharpness is good and I was overall pleased with it. I have thought about submitting a revised version with the screw thread modeled directly into the metal, but haven’t gotten back to doing that, and it is a pretty low priority.

But in the meantime, I seem to have lost the original OpenSCAD file that I used to generate the model. Yesterday I thought I should try to recreate it. Rather than using OpenSCAD again, I thought it would be better to use OnShape, which has become my go-to for designing objects for 3D printing. I began by inking the original metal stamp with an ink pad, stamping it onto some paper, and scanning it. I then loaded that as a reference into OnShape and used it to take some measurements and reconstruct the outlines. It took me about twenty minutes to come up with the model.

The new model is pretty close to the original, but includes a number of chamfers and fillets that were not part of the original. I went ahead and 3D printed the disk in the PLA I had loaded, and was pretty happy with the quality, even though I used 0.2mm layer height (the “fast” presets in Cura).

I had originally thought that I should print this in TPU, which would have been a more flexible filament, and therefore which I thought would more closely match the hard rubber that is commonly used for stamps. A little bit of reading suggested that PLA might be a better choice, as it is easier to get detail without stringing.

I needed a handle to hold this disk. Since I had OnShape fired up, I went ahead and made a quick little handle that would hold the disk centered. This was the first time I had used sketched curve profiles and the like, and rather than making it a full surface of revolution, I chose to flatten both sides. This has two purposes: it makes the handle easier to print without supports, and it lets you align and orient the stamp to make sure it’s at the angle you desire.

I printed this in the same PLA. I then took some cheap super glue and put the disk in place, trying to orient it the best I could. In version 2, I will probably print some registration bumps to align it even better, but this was a first test.

Initial attempts were pretty spotty, but I got some #220 sandpaper and gave the face a bit of work to make it more level. I do wonder if hitting it with some filler primer and then sanding it down might be a good approach, but after a few minutes of work it improved to the point where I consider the effort a success.

It's worth experimenting with, and (if you have a 3D printer) costs very nearly nothing. I've been thinking about using 3D printing for other kinds of prints as well, but this is a good start.

Hope you all are having a good day!

Felines win in the battle between astrophotography and cats…

November 13, 2024 | My Projects | By: Mark VandeWettering

I posted this picture of my little friend Patchouli, who decided to settle into the case for my Seestar smart telescope, to the Facebook Seestar group.

She got way more hearts and comments than any of the astrophotographs that I posted to the same group over the last few months. I guess I should not be surprised: she’s positively adorable with her little pink nose and pink paws. Perhaps I should abandon my nerdy and scientific endeavors if my goal is to “drive engagement” or “build a community.”

Okay, that’s not going to happen, but it does perhaps lend some perspective to the world. And frankly, it gives me a little bit of hope.

Strategies for coping with problems…

November 9, 2024 | My Projects | By: Mark VandeWettering

I’ve found that there are three basic strategies that have helped me in the past. They are probably not comprehensive, or even the best, but they are pretty simple to remember, and cover more situations than you might think.

I categorize them as Plan, Act, and Ignore.

Perhaps the most productive and generally the best strategy is to plan. You see that in the future there is some issue you will be facing, and you develop a plan so that when the anticipated event happens, you already know what you will do and what the likely outcome will be. This is good because it helps stifle the anxiety of uncertainty. If the hurricane strikes, you know you have food, a backup generator, and an evacuation plan. You put away savings for a rainy day. You perform maintenance on your house and your car. Then, when these things happen, you don't need to develop a plan at the last minute, and you've already worked to minimize stress and danger to yourself.

The problem with planning as a strategy is that it is predicated on you actually understanding the problem and its likely probability. The world is very complicated, and it is difficult to balance all the possibilities and develop plans to cover every contingency. You prepared for a hurricane, but a lot of the damage from the recent Hurricane Milton was caused by the tornadoes that struck ahead of the actual hurricane. If you plan for one thing, you may be ignoring some other risk that turns out to be surprising. Planning is most effective for the predictable risks in life; spending a lot of time planning for low-probability or unforeseen events can be pointless and exhausting.

So, the second strategy: you act. In this case, something unforeseen or even unforeseeable has happened, and you need to do something. You may not have a plan, or at least not a complete plan. You need to rely on your resources (intellectual, financial, and emotional) to find a course that minimizes damage to yourself and those who are important to you in light of the new situation and information. Reusing the previous example: perhaps the hurricane's course shifts in the last 24 hours, and your planned place of evacuation is no longer safe. If you are lucky, you may have foreseen this possibility and know several alternatives, or can quickly search for them. When you act, you quickly draw on the best information you have and chart the best course you can see, as quickly as you can.

I originally thought of this strategy as reaction. But in trying to clarify my own thinking, I found that the term implied the kind of thoughtlessness suggested by the phrase "knee-jerk reaction". One of my personal mantras is "act, don't react". Reaction is the lizard brain's attempt to cope with problems, with little analysis or conscious thought. Reaction is the "fight or flight" response that we have, which admittedly is often an effective survival strategy. I don't mean to denigrate it; in fact, it can often save your life. But if you have a moment, it is usually good to ask "am I just reacting to new information without actually understanding or considering it, and do I have time (even limited time) to consider a different course of action?" If so, then action may be the strategy that makes sense.

Lastly, you could simply choose to ignore the problem. This sounds bad. Ignoring problems means they don't get solved, and unsolved problems can pile up and cause you greater trouble in the future. This doesn't seem like a strategy at all.

But the thing is that the human mind (and certainly my own mind) has a near-infinite capacity for worry. Worry and stress can have significant negative effects on your body and well-being. All this planning and action takes significant energy and resources, and can keep you from relaxing or enjoying what's going on. Ignoring problems can be a valuable skill, particularly when the problem is not amenable to either planning or action.

For instance, years ago my mother was in failing health. I knew that she was going to die within months. I had long-term plans for how I was going to cope with the financial practicalities of her care. I also took regular actions: calling her daily and flying up to visit her at regular intervals. But I had to cope with the fact that she'd have good days and bad days. There were times when I was called to act when she took a particularly bad turn.

But no amount of planning or action was going to prevent or even delay the course of her illness. I was concerned about her every day. If I had allowed myself, I could have been concerned every minute of every day. So I adopted a different strategy: I chose to ignore the problem.

That sounds bad, so let me explain my process. My internal monologue basically went: "Mom is ill, and will probably die at some point. Is there any plan I could engage in that will stop or delay this negative outcome? (No.) Is there any action that I could take now that will help? (No.) Then I am going to choose to ignore this problem. I am going to create a mental closet that holds this problem. I am going to take the worry and angst that I feel, and the ineffectiveness that I feel in not being able to fix things, and I am going to put that problem in the closet. Most importantly, I am going to reopen this mental closet at some specific time in the future (tomorrow, Monday, a week from now), look at the problem again, and decide whether some additional plan or action could be helpful. Perhaps I will choose to put the problem back in the closet again. Perhaps some new information or change in the situation will make a new plan or action beneficial, and will provide me with a new course of action."

I don't believe that you should ignore problems indefinitely. My conscience and basic decency would not have allowed me to abandon my mom and simply stop thinking about her. But the ability to carve out some space in my life when I was not pointlessly and unproductively worrying about her was essential to my mental health, if not my survival. One needs space to experience all emotions, not just the fight or flight responses that we've evolved.

At first, this "mental closet" seemed really difficult. But practice made it easier over time, along with the assurance that I wasn't simply ignoring problems: I was postponing worry to gather space and strength, and would address them more productively if my situation changed. In fact, the ignore strategy is just a variation of the plan strategy.

A great number of my friends and family are experiencing angst as a result of the uncertain political climate, as well as other more personal changes in their lives. I get it. I'm right there with you. I decided to write this post mostly to clarify my own thinking. I suspect I'll be dusting off these strategies with greater frequency in the coming weeks, months, and years. But while I may have been motivated by my concern about the political situation here in the United States, they apply to many sorts of problems. I don't claim they are universal, but I offer them in the hope that you might find some part of them useful or encouraging in your own lives.

And remember: when the ship is sinking, put on your life jacket first. If you don’t care for yourself, you won’t be able to care for others. Self care is part of caring for others.

Best wishes, and be kind to yourself and others.

Practical uses for 3D printers…

November 9, 2024 | 3D printing | By: Mark VandeWettering

I've been an on-and-off enthusiast for 3D printing for quite some time, but in the early days, it wasn't what I would call "practical". Printers used to be fairly unreliable. In particular, my aging Creality CR-10 had difficulties with bed leveling, and while I kept modifying it, adding sensors like the BL-Touch to automate that process, at some point I simply got fed up with it and let it sit. But the technology kept improving, and there are new consumer-level printers made by companies like Bambu Lab which employ lidar and other fancy bits of tech to print faster than ever before.

As yet, I don't have one of those. But rather than continue to tweak my CR-10, I decided last year to buy an Elegoo Neptune 3 Pro, which I got on a Black Friday sale for around $250. It is much better than my Creality CR-10: its bed leveling just works, and I've done dozens of prints with the only real failures being stupidities of my own. And the quality is quite good.

And it's reliable enough that I can design and print parts without it taking an entire day. For instance, this week I wanted to fix a window that the previous owner had literally screwed shut (presumably as a security measure). I needed to open that window for maintenance, but he had driven self-tapping screws in very close to the edge, and I couldn't easily get a wrench or even a nut driver in to take them out. I didn't want to replace them the same way, so instead I designed and printed some little clips using OnShape, a free, web-based parametric CAD package that I recommend to anyone. It may not be quite as capable as Fusion 360, but the ability to design parts from any web browser (I use Chrome on both Windows and Linux machines) and have your models always available is pretty handy.

Anyway, after taking a few measurements I designed the part with chamfers and proper clearances, and printed the clips in some ASA filament I had lying around, which should be more UV resistant than most other filaments. These look tidier and were less annoying than just screwing through the window frame, and should I ever need to remove them, there is enough clearance to back them out with a nut driver. I printed four of them in just 25 minutes, using almost no filament. Problem solved.

Another thing I like to use 3D printing for is to make things like lens caps. I have this old pair of German WWII aircraft spotting binoculars.

They are beasts, but very comfortable to use for astronomy, with generous eye relief, adjustable interocular distance, and a sturdy tripod. I often use them to view lunar eclipses and the like. But what they lack is dust caps. So, the other day I dusted off my OnShape skills, took some measurements, and quickly generated this lens cap model:

I had some white TPU filament, which is quite flexible and which I previously used to make a dust cap for the 6″ f/4 Newtonian that I made decades ago. The model is a very simple capped cylinder, with a chamfered rim around the edge to add some thickness for sturdiness. The chamfer also makes the caps slide on very easily, even though the fit is fairly tight, which means that they don't fall off very easily either. The TPU is flexible, and even though the wall thickness is only 1mm, they are incredibly sturdy: I think I would actually have to work fairly hard to tear them apart. I should note that TPU was hard to print on my old CR-10, but I've had literally no failed prints with it on the Neptune, despite it being a very soft and flexible filament.
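For anyone who wants to try the same thing, the basic shape is simple enough to express in a few lines of OpenSCAD. This is just an illustrative sketch of the "capped cylinder with a chamfered rim" idea described above; all of the dimensions here are placeholders you would replace with measurements from your own binoculars, not the numbers from my model.

```openscad
// Simple slip-on lens cap: a capped cylinder with a chamfered rim.
// All dimensions are illustrative assumptions, not measured values.
$fn = 120;        // smooth circles

barrel_d = 60;    // assumed outer diameter of the objective barrel (mm)
wall     = 1;     // wall thickness (the 1mm mentioned above works well in TPU)
depth    = 10;    // how far the cap slides onto the barrel (mm)
chamfer  = 1.5;   // chamfer at the open rim, to help the cap slide on (mm)

difference() {
    union() {
        // main body: open at z=0, closed at the top
        cylinder(d = barrel_d + 2*wall, h = depth + wall);
        // thickened, chamfered rim at the open end
        cylinder(d1 = barrel_d + 2*wall + 2*chamfer,
                 d2 = barrel_d + 2*wall,
                 h  = chamfer);
    }
    // hollow out the interior, leaving the closed top face
    translate([0, 0, -0.01])
        cylinder(d = barrel_d, h = depth + 0.01);
}
```

In a flexible filament like TPU you can make `barrel_d` an exact or even slightly undersized fit and the cap will still stretch on; in a rigid filament you'd want to add a few tenths of a millimeter of clearance.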

I mentioned this model to a friend of mine, who said that he had long ago lost the caps for his pair of Nikon binoculars. I told him to send me the dimensions and I'd print him some. He lives up in Oregon, so I mailed them to him, and he mailed me back this photo:

They apparently work perfectly. I didn’t have TPU in black, which would have looked nicer, but hey, they work and will be good at protecting his optics.

3D printing can be really valuable for creating custom items, even if (or maybe especially if) they are low-value objects. Think about it this way: how much would you pay for a new set of lens caps for a pair of binoculars? Even $5 seems excessive, but you might pay it if you knew they would fit your very particular binoculars. And such a thing may not even be available if your set is old or rare. Being able to create a version that actually fits, for just pennies, seems really cool to me.

None of this is very exciting, but I do feel oddly happy having done this.

I still need to take some measurements to make caps for the German binoculars' eyepieces, which have a tapered shape that is a bit more complex. I'll probably get to that today.

Hope you all are having a good weekend.