Early Memories

My body was born 50 years ago. I know this not only because my parents told me, but also because the government decided its business includes tracking and certifying exactly when, where, and of whom people are born.

My body bears the scars and wear of 50 years of life. For the first half of that period it was growing. Then it stopped growing and started down a predictable path of degeneration. (Presbyopia appeared within the last decade, followed by the beginnings of detectable osteoarthritis.)

My mind also bears the scars and wear of 50 years of life. I can trace paths backward in time, but that ability will only degrade as my mind ages. My memory (like that of most people) is not continuous. I have snapshots of varying length, clarity, and detail. Looking back in time is like turning around and finding most of the path you walked submerged in a dark body of water. The memory snapshots are like stepping stones that have not yet sunk from view.

How far back can I see?

My earliest memory is from just before I reached my second birthday. What I remember is looking out a window where we lived and seeing a snowplow a few houses up the street that was stuck. I was upset and wanted the truck to get help. I was placated when my father said he would go help it. (How can I be confident in this memory? The visual details in my mind are limited, but I am certain I was looking downwards and to the left and the plow was 3-4 houses up the street heading away. Cross-checking with my parents: at the time we lived on the second story of a house that had windows tall enough for a 2-year-old to see out. My father remembers the incident and agrees with the orientation and distance of the plow. Public records note two extreme blizzards in the month before my second birthday.)

Memories that I would call visually “complete” begin by age 4, which I can tell because they occurred around a house we left in my fifth year. Here are some of the more vivid ones:

Rattlesnake in the street: I was playing outside and my Mom ushered me into the house saying there was a rattlesnake and it was dangerous. I watched from a window as my Dad and five other men from the neighborhood formed a loose circle around it, right in the middle of the street. They were armed with shovels and I could see a few take jabs at the snake, but otherwise they were just hanging out, talking. I was a little confused because my Mom had made it seem like a serious threat, but it looked like the men were treating it as more of a social gathering. I finally got bored of watching, so I don’t know how that ended.

Want to learn violin? My Mom was lying on a bed reading a book. Out of the blue, she looked up and asked me if I wanted to take violin lessons. (What is a 4-year-old in the pre-computer-game era going to say: “No, I have too many other commitments?” Of course I’d like to try a new activity!)

You only get soda if you can do it right: A local kid had a 2-liter bottle of soda. He offered to share it with anyone who could drink without backwashing. I don’t remember the words he used to explain it, but basically if you could pour or sip without putting the whole opening in your mouth, you could drink; if not you only got to watch. He demonstrated, and then invited others to drink. I was so relieved when my lips proved coordinated enough to get soda. I felt bad for another boy who failed and was denied.

Coding Agents Grow Up

For years, programming something new followed a predictable, exhausting rhythm: write some code, hit a wall, and then disappear into a forest of documentation and StackOverflow tabs to find the trick to get it working. In 2025, that era ended for me.

Today, AI code assistants take so much drudgery out of development and debugging that the work has become mostly gratifying and rarely frustrating — the opposite of how programming was in the Before Times.

Between my work and my side interests, I often feel like I live in Visual Studio Code (VSCode — a popular open-source development environment). Early in 2025 I subscribed to GitHub Copilot, which integrates AI coding assistants into VSCode. At $10/month it’s a phenomenal bargain that offers software developers an easy way to loop in the latest models from OpenAI, Google, and Anthropic. Now when I’m trying something new (like this little project I did for fun) I can mostly stay in VSCode and work with an AI assistant that has mastered all of the documentation.

As mentioned elsewhere, my most significant side project in 2025 was helping Ukrainians develop an open-source ballistic calculator (pyballistic) in Python and Cython. I polished that off at the end of September. By that point I had begun to spend more time with GitHub Copilot, and the capabilities of its latest models gave me enough confidence and support to tackle what would have previously been an absurdly ambitious project for my day job: an Excel Real-Time Data (RTD) server for the Interactive Brokers API. (This RTD server feeds live market data, positions, and orders directly into Excel using native Excel formulas.) After two months of working seven days a week on that, I had a beautiful piece of software so solid (validated on every build by over 800 unit tests) that I had begun to use it in live trading operations.

Early Childhood Development

Watching these models mature over the last year has been like watching a child grow up.

Tell a child, “Clean your room.” First they’ll spend more time arguing than it would take to just do it. When they finally declare the task “done,” you might find a few toys picked up but most of the mess still there. Emphasize that “clean your room” means everything and you might find the floor clean but everything shoved under the bed.

Claude v3 was notorious for hacking shortcuts. Ask it to fix a failing test and it might just replace the test logic with a “return true;” statement. Claude v3.5 wouldn’t be so brazen, but it was still prone to hack the example rather than the task. GPT-4 and Gemini v2 would enthusiastically announce completion without checking their work. Like the child who picks up two toys and concludes that his room must be clean, even though the mess is visible from outside the door.

The teens came quickly: Claude v3.7 and its contemporaries would often spend more effort arguing that a failure was actually a success than it would have taken to do the work correctly.

More recent models have become more likely to keep checking and working until they succeed. Performance of the latest models is still wildly variable: a model that astonishes me with its apparent skill one day may choke on something relatively simple the next. But they are getting more consistent. And they are definitely getting more intelligent.

What is intelligence?  It becomes easy to see when you’re doing hard work with different models. One of the neat things about Copilot is that you can choose to watch the model at work. They all think “out loud,” meaning you can read their chain of thought to understand how and why they do things. When it’s not having an off day, Claude Opus is intelligent.  Given a problem:

  • It can more reliably identify what matters.
  • It has a better sense of what to look at and what to ignore.
  • It produces better assessments of what’s possible and makes better plans to get there.
  • It knows when to persist and when to change directions.

These are some of the things that separate a junior developer from a more experienced one. They are also qualities that characterize more intelligent people.

Let me show you. Have you ever wondered what it’s like debugging software? Well, debugging is one thing the newer models can usually do as well as a good human programmer. In fact, they can do it better because they can run the process faster and interact with the code more directly. Below I pasted a transcript of Claude working to find and fix a tricky bug in my RTD server. This could just as well have been a transcript of my thoughts if I had to debug it. But whereas this would have been a draining hour+ distraction for me, the Claude instance cranked this out in minutes.


2025: The End of the Human Polymath

Born in 1976, I was just early enough to taste pre-internet life. In grade school, on a dial-up 1200-baud modem and an IBM PC, I was ahead of the curve in actually connecting to primordial pieces of the internet, though they didn’t have a lot of utility outside of academic collaboration. I grew up with a physical encyclopedia at home – the 22 volumes of the 1987 World Book Encyclopedia took up more than 3 feet of shelf space. I wondered how they decided what to include in those books, because I was mostly frustrated to find my subjects of interest barely grazed, if covered at all. In the mid-1990s Microsoft published a CD-ROM to make the printed encyclopedia obsolete: Encarta, which somehow offered even less information but more data because it had “multimedia” – the buzzword for sound, video, and primitively interactive content.1

So what did a curious young mind do back then? There were so many more questions than answers. For a typical How or Why question, your local library might have a book containing an answer, but you’d have to physically visit the library, search their card catalog for books covering the subject, and then physically find potential matches on the shelves and thumb through each to see if it actually provided the details sought. You couldn’t reach out to experts because even if you could find their names you couldn’t easily find contact information. So you were stuck with whatever local adults happened to know. My Dad was very smart, and he had smart work colleagues who could go quite far in some areas of math and physics. What about teachers? Public school teachers were – despite their avowed profession – astonishingly underinformed. (That realization led me to despise them: I did not get along with public school teachers after 3rd or 4th grade when I discovered that, to any random question, I was more likely to have the right answer than they.)

It was a struggle to build a deeper-than-average understanding of the world. I put in a lot of work seeking answers to practical questions, and that gave me a lot of practical knowledge. I certainly had gaps: Pop culture, sports – many obsessions of the average person did not interest me, so I was never going to be a Jeopardy champion. But in the realm of practical and technical knowledge I was exceptional. I read slowly, but have an insatiable thirst for understanding how and why things work. Plenty of people idly wonder. I don’t just wonder: I search. When I had a random question and couldn’t quickly find the answer I would write it down, and eventually I would find an answer and absorb everything around it. Maybe the hunt is why the answers stick in my head.

Now, what search engines started, LLMs have so thoroughly finished that future generations are bound to forget that there was a time when knowing things was not only difficult but also useful.

“Know-it-all” was often thrown around as a pejorative. But, back in the dark ages of the late 20th century, extensive practical knowledge had real utility. It could make the difference between staring blankly at a problem (or not even recognizing the presence of a solvable problem) and jump-starting solutions by drawing on a deep well of understanding how other things work and how they could relate. A know-it-all2 is more likely to:

  • Recognize the absence or presence of a significant problem. (What is that sound, and should I get it looked at?)
  • Flag misleading or false assertions. (Could competitive chess players really burn thousands of calories thinking during a match?)
  • Explain what matters, when, and why. (When do you really need to change engine oil, and should you pay extra for synthetic?)

Even when search engines came along, the human polymath still had value. Answers were more accessible, but you still had to know the right questions. You had to know if a thing was a thing, what terms might apply, and what a correct answer should look like.

Today, it’s over. We have reached the singularity of convenience. This year, as they ironed out the chatbot propensity to hallucinate, the value of the human know-it-all evaporated. Yes, I still catch the bots making factual errors, but if you keep them talking they eventually notice the errors themselves.

I took pride in being the guy to ask, the guy with the notoriously uncanny breadth and depth of knowledge, the guy who – even if he didn’t have the answer off the top of his head – would likely find it faster than anyone else. “Have a practical question? Just ask me. Worst case: I don’t know. More likely: I’ll point you in the right direction.” Now? I tell people to ask the bots. There is no way I can give as quick and thorough an answer on as broad a set of topics as they can.


  1. What was I looking for? Something like a cross between Wikipedia and The Way Things Work. Here’s how I described it in a 1998 journal entry: The Practical Encyclopedia of Technology.  It would contain in applicable form all of mankind’s technological achievements—information I haven’t been able to find elsewhere, like how transmission mechanisms are actually implemented on vehicles, the composition and construction of TFTs, how ball bearings are manufactured.  Every article on a specific piece of technology would be of the following form:
    – Brief theory;
    – References to components (e.g., transmissions would reference ball bearings, metal casting, gears, lubricants);
    – Problems encountered in implementation;
    – Canonical solutions to problems, in sufficient detail to actually implement on that information alone;
    – Other solutions that have been tried, and why they haven’t caught on;
    – References to sources for theory on the subject;
    – Patent Office classification fields of the technology, etc.
    ↩︎
  2. The age of the literal know-it-all – someone who knows everything that is known in a society – ended centuries ago. At least in the developed Western world, that has been impossible since the early 1800s. The title may be hyperbole, but The Last Man Who Knew Everything describes a plausible contender for the title: Thomas Young, who died in 1829. ↩︎

Adobe’s Protection Racket

I just burned more than a day migrating my primary work computer from a machine running Windows 10 to a newer one running Windows 11. Not because I wanted to. Not because Win11 offers me anything I actually want (so far I hate every UI change from Win10). But because Microsoft has decided to end support for Win10 while preventing Win11 from running on older CPUs. And like everyone else whose work requires a secure operating system I’m being shoved along whether I like it or not.

This isn’t a trivial inconvenience. Over the last decade I’ve accumulated a small arsenal of development tools, libraries, and utilities — each with its own quirks, dependencies, and fragile installation paths. Migrating them is not a matter of clicking “Next” on a wizard. It’s a slog of registry tweaks, PATH surgery, license re‑entries, and the occasional ritual sacrifice to the gods of backward compatibility.

And just when I thought I had wrestled Windows 11 into grudging submission, Adobe decided to remind me that they can be even worse.


Adobe’s Perpetual License That Isn’t

I own a perpetual license for Lightroom 6. “Perpetual” is supposed to mean I can use it forever. The software runs fine on Windows 11 … except that Adobe has disabled it.

Adobe built one of those tedious “activation” steps into the Lightroom installation process: the software depends on Adobe’s servers confirming that my license is legitimate. They have quietly shut down the activation servers, so when I launch Lightroom 6 on Win11 I hit an endless loop of signing in, accepting the license agreement, and then watching the software crash. To add insult to injury: Adobe makes no note on their website’s activation page that this process has been disabled for Lightroom 6. I only learned that it would not work after trying repeatedly and then asking Copilot what was happening.

This isn’t a bug. It’s a business model. Adobe has effectively disabled software that would otherwise continue to work. They’ve taken something I paid for outright and retroactively converted it into a hostage situation: either I cough up for their recurring subscription, or I lose access to the tools I already bought and the work I invested in using them to catalog and post-process more than 60,000 photos.

That’s not “end of support.” That’s a protection racket.


Why This Matters

This isn’t just about photography software. It’s about the erosion of implied contracts. We’re told we’re buying licenses, only to discover too late that those licenses can be revoked, crippled, or held hostage at the whim of the vendor. The “perpetual” in perpetual license turns out to mean “until we decide otherwise.”

For engineers, photographers, musicians – anyone whose work lives in specific software – this can be catastrophic.


Imagine you buy a plot of land from a real estate developer and build a house on it. Then one day you come home to find a gaping hole where your house used to sit. Eventually you find the developer and get the following explanation:

Sorry for the confusion: You bought the land, not the location. We moved your house and the land (i.e., the dirt) under its foundation to a new location.

Oh, and that new location is only available for rent. The monthly price? Well, if you have to ask, you’re not going to like it….

Hot and cold running water? Not in Phoenix!

During summer in Phoenix we don’t have luxuries like hot and cold running water. Instead we have hot and hotter water. This photo shows me measuring the “cold” tap’s water emerging at 102°F:

“Cold” tap water is 102°F

If you haven’t run water recently you might enjoy a few moments of water as cold as the indoor air. But during summer the water supplied by the city routinely breaks 100°F.

Is this because it spends its time baking in water towers? Surprisingly no: Phoenix stores potable water underground and uses variable-speed pumps to deliver it on demand. But the ground gets really hot: The next photo shows me measuring the temperature of pavement in early afternoon sun at 173°F. (This was with the temperature in the shade running 115-120°F.)

Pavement in summer Phoenix sun measured 173°F

Light Interaction App

Check out this nifty little touch-screen-compatible, WebGL-powered application.

To test out the latest AI, I added GitHub Copilot to VSCode and asked it to build a simple web application that lets the user move three radiant lights (red, green, and blue) around a screen to see how adding colors works. (For example, if the three colors are right on top of each other it looks like a single white light.) Here’s a screenshot of that first app:

By default Copilot uses GPT-4o, but on a few examples I have found that Claude 3.7 Sonnet (another Copilot option) is capable of more sophisticated computer engineering, so with that selected as my Copilot “Agent” I began enhancing this app. The most significant change – and something I’ve wanted to try for a while – was to use WebGL to take advantage of the graphics processing features built into most modern electronics. Thanks to that hardware acceleration, this enhanced app supports lots of light sources, dithering to avoid color banding, and dragging lights around the screen in real time without noticeable lag. Then I added touch-screen support so that the app can be used on mobile devices.
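The additive color mixing the app demonstrates can be sketched in a few lines of Python. This is an illustrative model only: the falloff function and coordinates are assumptions for the sketch, not the app’s actual WebGL shader.

```python
# Sketch of additive (RGB) light mixing: each light contributes color
# scaled by a distance falloff, and the contributions simply add,
# clamped to the displayable maximum of 1.0 per channel.

def mix_lights(px, py, lights):
    """Sum each light's RGB contribution at pixel (px, py).

    lights is a list of (x, y, (r, g, b)) tuples with channels in [0, 1].
    """
    r = g = b = 0.0
    for lx, ly, (lr, lg, lb) in lights:
        d2 = (px - lx) ** 2 + (py - ly) ** 2
        falloff = 1.0 / (1.0 + d2)   # softened inverse-square: 1.0 at the light itself
        r += lr * falloff
        g += lg * falloff
        b += lb * falloff
    return tuple(min(1.0, c) for c in (r, g, b))

# Three full-strength lights stacked on the same point add to white:
stacked = [(0, 0, (1, 0, 0)), (0, 0, (0, 1, 0)), (0, 0, (0, 0, 1))]
print(mix_lights(0, 0, stacked))   # -> (1.0, 1.0, 1.0)
```

This is why coincident red, green, and blue lights look like a single white light: the channels sum independently, so at the shared center every channel saturates.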

It took some coaching from me to get this working: At several points I observed bugs and Copilot would essentially get stuck in a loop saying, “Oh, I see what’s wrong; this should fix it,” without successfully fixing it. I had to guide the Agent through more intentional debugging methods to resolve several confusing problems. But by the end I hadn’t written or even touched much of the code. I was the designer and tester, and Copilot saved me the trouble of:

  • Scouring API documentation and sites like StackOverflow for code samples needed to make it work.
  • Learning or remembering the exact syntax of the languages involved (WebGL, JavaScript, CSS, HTML).
  • Recreating common GUI tricks, like adding code to make sure that everything is visible on a screen regardless of its size or orientation.
  • Finding and fixing minor bugs.
  • Writing debug code to understand and resolve major problems.

Here’s a screenshot from the final app (shown here with all lights inverted – one of the fun features accessible by right-clicking/long-tapping):

SpaceX Falcon 9 Flyby

I glanced out a window last night and saw this brilliant spectacle unfurl as the second stage of a Falcon 9 traversed the sky west of Phoenix at an altitude of 90 miles and a ground speed reaching more than 10,000 mph:

Since this was shortly after sunset, the exhaust plume was high enough to be illuminated by the sun from over the horizon. (Here’s SpaceX’s video and mission summary.)

Venus with Crescent Moon

Now in the first week of February 2025, Venus is approaching its peak brightness. Here is a picture of it near the waxing crescent moon while at a brightness magnitude of -4.8.

This makes it 23 times as bright as Mars, which had a magnitude of -1.4 in the photos following its occultation by the Moon three weeks ago:

Mars emerges from lunar occultation, photo by David Bookstaber 20250113

Venus has almost twice the diameter of Mars (which itself has twice the diameter of the moon), and presently it is also only 75% as far from Earth as is Mars.
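The 23× figure follows from the astronomical magnitude scale, in which a difference of 5 magnitudes corresponds to a factor of 100 in brightness. A quick check in Python:

```python
# Brightness ratio between two objects from their apparent magnitudes.
# The scale is logarithmic: 5 magnitudes = a factor of 100 in brightness,
# so each magnitude is a factor of 100 ** (1/5) ≈ 2.512.

def brightness_ratio(m_bright, m_faint):
    """How many times brighter the object at m_bright is than the one at m_faint."""
    return 10 ** ((m_faint - m_bright) / 2.5)

# Venus at magnitude -4.8 vs. Mars at magnitude -1.4:
print(brightness_ratio(-4.8, -1.4))   # ≈ 22.9, i.e. about 23 times as bright
```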

Martian Eclipse (Lunar Occultation)

Last night (13 January 2025), North American observers could see the nearly full Moon pass in front of Mars, hiding it for as long as an hour. I got some photos of Mars emerging on the other side:

As noted last week, right now Mars is relatively close to Earth and in nearly full phase, just like the Moon in these pictures, so we are seeing the entire “day” side of Mars. Mars is twice the diameter of the Moon, but presently it is more than 200 times as far from Earth.