Dreamscape

tl;dr:

  • The bizarre otherworldliness of dreams makes them seem foreign, almost as though they came from some weird place "out there", not of our own making.
  • This in turn makes recurring dreams puzzling, especially when they are separated by long intervals (sometimes years); if dreams come from "out there", where the heck is that and why can it store these recurring patterns and places stably over time?
  • These false intuitions are dispelled once we realize that our brains are all about storing patterns, and the same mechanisms that allow us to form memories and mental models are the ones that provide us with a stable pool of patterns from which we build recurring dreams.
  • This simple fact is obscured by the apparent forgetting that happens on waking.
  • This all seems bleedingly obvious in retrospect, and it makes one wonder why I even needed to write it down.

Over a period of many years now I've had a series of recurring dreams, or at least, recurring themes within dreams. When you're in the dream world, it seems detailed and real, yet at the same time unreal or surreal because of the way in which improbable or impossible things occur. You find places and people morphing from one into another in a way that seems to simultaneously escape your notice while also registering in a way that causes you to remark on it later on. The laws of physics are defied. Rules of causality are suspended. Events are reordered and incompatible facts are juxtaposed. These bizarre ensembles of characters, interactions, and locations are so unexpected, so novel — even if they are stitched out of a patchwork of people and things you know or can imagine — that it is almost like they're being delivered to you from the outside, by a Christopher Nolan-esque cinematic auteur of unbounded and inimitable creativity.

The thing that has struck me about these recurrences is just how complicated the dream world seems to be. Your brain appears to synthesize these fantastical locations, not just randomly — like a procedural generator would create terrain in an open-world video game — but in a way that has permanence, because you can find yourself back in those same places months or years later.
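As an aside, the "permanence" in a procedurally generated game world usually comes not from stored detail but from a fixed seed: the world is regenerated, identically, on demand. A toy sketch (illustrative only, not how any particular game engine does it):

```python
import random

def terrain(seed, width=8):
    """Generate a toy 1-D "terrain" (a list of heights) from a seed.

    Nothing about the landscape is stored anywhere; the same seed
    simply replays the same sequence of pseudo-random choices.
    """
    rng = random.Random(seed)
    heights = [0]
    for _ in range(width - 1):
        # Each step drifts up or down by a small random amount.
        heights.append(heights[-1] + rng.randint(-2, 2))
    return heights

# Revisiting the "same place" is just re-running the generator:
assert terrain(42) == terrain(42)
```

A brain has no seed to replay, so a dream location that comes back months later must be coming back from some kind of storage instead.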

At first this ability to recall these impossible, intricate places puzzled me, but I have a theory now about what makes them come back. I'm not going to get into how dreams get put together, because I honestly have no idea, but I want to explain how it is that these recurring patterns are more or less durably stored and made available for reuse over long periods.

So, let's start from the basics. How do brains work? Let me give you my horribly imprecise understanding of this, hopefully one that is just vague enough to be compatible with how things actually work in there. Brains are massive networks of interconnected neurons. Neurons fire. Signals are transmitted across synapses. There are activation thresholds that dictate whether signals get through. Importantly for the purposes of this discussion, brains are not merely deciding machines that control the systems and actions in the organism that they inhabit; they are dynamic, evolving, self-modifying receptors and recorders of information and patterns. That's how memories are made: the act of "recording" a memory is in some sense a rewiring of the brain in such a way as to capture a pattern and make it available for later retrieval.

There is a curious difference between memory and memorization. We form memories effortlessly all the time as experiences wash over us. In contrast, memorization is an intentional, effortful act aimed at creating a memory for later recall. Perhaps frustratingly, this latter activity can be quite hard. But the automatic formation of memories is as easy as breathing; in fact, it's something we can't help but do. This is not to say that our memory is infallible. Details may escape us, memories may fade (become harder or impossible to access), facts may be switched. But in the absence of dementia or other pathologies, the brain is a marvelously flexible, capacious, and impressively reliable reservoir of information.

A particular kind of memory is the "mental model". We typically use that term to describe things that are a little more abstract than a vanilla memory. For example, we might refer to a mental model of "how the economy works", or "what Alice thinks about Bob". But we also have spatial models of the environment around us — the neighborhood we live in, for example — and these are much more akin to memories, in the sense that we acquire them and build them up automatically, without even intending to, by the mere fact of experiencing the environment around us.

And this memory formation, this rewiring of the brain, it becomes more and more accurate, detailed and durable as we repeatedly move through that environment, in a sense "wearing in" the connections in a way that makes them stronger and more complete. (As an aside, this is the power of positive thinking too — and the harm of negative thought patterns — because when we repeatedly activate the same neural pathways we "burn in" in a way that makes it all the more likely that we'll fall into following the same pathways again in the future.) After a while, your mental model of the area you live in becomes so detailed and comprehensive that you can name and visualize countless details about it, large and small, involving distances, textures, smells, relationships, and all manner of patterns and symbols.
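This "wearing in" is essentially Hebbian learning ("neurons that fire together, wire together"). A minimal sketch, with made-up numbers, just to show the shape of the idea:

```python
def hebbian_update(weight, pre_active, post_active, rate=0.1):
    """Strengthen a connection slightly whenever the two neurons it
    joins are active at the same time (a toy form of Hebb's rule)."""
    if pre_active and post_active:
        weight += rate * (1.0 - weight)  # saturating growth toward 1.0
    return weight

w = 0.1  # a weak initial connection
for _ in range(20):  # the same pathway activated over and over...
    w = hebbian_update(w, pre_active=True, post_active=True)
# ...ends up much stronger, and so more likely to be followed again.
assert w > 0.8
```

The saturating form is one of many ways to model this; the point is only that repetition monotonically strengthens the pathway.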

I think that it's this that's happening when you visit places in dreams. Your brain is conjuring up these fantastic places, making a pastiche of experiences, patterns, places, and symbols — whether lived, or experienced in the fictional worlds of movies, TV shows, video games, comics, and magazines — by some mechanism that I haven't even begun to understand. But regardless of the source for this material and its combinations, it winds up leaving an impression on your brain. It leaves a mark, not quite the same as a waking memory, but a nearby cousin to it. Something is recorded in your brain somewhere. Neurons are rewired. Patterns are inscribed. Symbols are persisted.

The fact that we usually have trouble remembering our dreams when we wake up obscures all the pattern-recording that's actually going on behind the scenes. I don't know the mechanism for this amnesia-on-waking either, but I feel sure that it is only a surface-level illusion. The reason why we revisit places and relive patterns in recurring dreams is because we accumulate a corpus of stored material, a set of "mental models", in our brains that's very close to the kinds of structures we use to store other memories. The dreamscape only seems otherworldly; actually, it's made of the same stuff from which we build our internal representations of the real world. And now that I've written all that out, I'm rather embarrassed that it took me this long to figure out.

Thoughts on AI — 2026 edition

It's a couple years since I wrote about AI1 and things have been changing fast. In 2024, AI had started to impinge on my day-to-day work, but 2025 felt like the year in which it changed things dramatically. While I still would characterize a lot of my work with AI as consisting of "arguing with LLMs", there were definitely times when they produced results of acceptable quality after a few rounds of revision, feedback, and adjustment, and the net time and effort required felt like a relative win compared to doing it all myself.

New tools

The biggest change came in the form of the arrival of Claude Code. Instead2 of just chatting with an LLM (mostly using my Shellbot fork), I could now delegate to it as an agent without having to abandon my beloved $EDITOR3. What began as experimentation (figuring out what this thing can do) has since turned into an integral part of my workflow: even for changes I could very quickly carry out myself, I will instead turn to Claude and ask it to make the change, unless it is utterly trivial (ie. the threshold for cutting over to Claude is the point at which I can manipulate the text by hand faster than I could describe the change to it).

New capabilities

2025 brought customization mechanisms, Model Context Protocol4, subagents, skills, and custom slash commands among other things. From my point of view, these all have the same goal, namely, equipping agents with:

  1. Specialized knowledge that enables them to obtain the information they need; and:
  2. The means to carry out necessary actions in service of an objective; while simultaneously:
  3. Not overflowing the context window with garbage which obscures things and prevents the agent from producing a correct result.

Collectively, these are probably more important and useful than improvements to the models themselves. Speaking of which...
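In spirit (this is a schematic sketch of my own, not MCP's actual schema or Claude's real API), items 1-3 amount to something like a tool registry, where only terse descriptions occupy the context until a tool is actually invoked:

```python
from typing import Callable

# Hypothetical registry: each tool pairs a one-line description
# (cheap in context-window terms) with a callable (does the work).
TOOLS: dict[str, tuple[str, Callable[..., str]]] = {}

def register(name: str, description: str):
    """Register a tool under a terse description; the agent only
    "pays" context for the description until it invokes the tool."""
    def wrap(fn: Callable[..., str]):
        TOOLS[name] = (description, fn)
        return fn
    return wrap

@register("read_file", "Read a file and return its contents.")
def read_file(path: str) -> str:
    with open(path) as f:
        return f.read()

def tool_listing() -> str:
    """What lands in the context window: names and one-liners,
    not implementations or their output."""
    return "\n".join(f"{name}: {desc}" for name, (desc, _) in TOOLS.items())
```

Goal 3 is the whole point of the indirection: the agent sees only `read_file: Read a file and return its contents.` until it actually needs the file.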

New models

2025 brought model sycophancy to the forefront, and Claude was no exception. Around mid-year, Claude's "You're absolutely right!" was ringing in the ears of users across the world in an almost continuous chorus. Thankfully, it seems to have subsided a bit now.

I didn't follow the whole model benchmarking question very closely, and am in general only interested in how well the models improve my experience in my daily work. Overall, subjectively, I'd say that the models improved significantly over the last year, but as I said before, I believe that it's the tooling around the models that had the greater impact.

Use cases

In my last post, I said that LLMs were good for "low-stakes stuff like Bash and Zsh scripts for local development", "React components", "Dream interpretation", and "Writing tests". In 2025, I used them for a lot more than that. I used them for fixing bugs, adding features, working across multiple languages and services, and for explaining foreign codebases to me.

Where they shine:

  • Places where there are a lot of guard rails in place to provide them with clear feedback about success (eg. working in a strongly typed language like Rust, or in an environment where you can perform automated verification in the form of linting or tests).
  • Places where you may not even know where to start, but where their ability to quickly search large corpuses and repos can rapidly locate leads for you to follow.

Places where they still leave much to be desired:

  • Things where the non-determinism of their output means that you can't trust the quality of their results. For example, say you have a change that you want to make uniformly across a few hundred call sites in a repo. Your first instinct might be to say, "This is a repetitious change, one that should be amenable to automation, and if the LLM can be given clear instructions that allow it to do it correctly in one place, then it should be able to do it quickly and correctly at every other call site". Sadly, this could not be further from the truth. LLMs are inherently non-deterministic, and that means there's always a random chance that they'll do something different on the 19th, 77th, or 82nd time. You will have to check every single modification they make, and you may be far better off getting the LLM to create a separate, deterministic tool to carry out the work. And if you want to throw caution to the wind and have the LLM make all the changes for you anyway, you're probably better off firing off the agent in a loop, with a clean context for every iteration and a clearly stated mechanism for verifying the correctness of the change, than expecting a single agent to carry out any significant amount of work serially.
  • Anything that can't be trivially described with a minimum of context. This is a conclusion that I've recently come to. In the past, I thought that bigger context windows would endow the models with the ability to solve fuzzier problems, the kinds that humans are particularly good at (with their ability to take into account disparate sources of information scattered across time and place). But my experience is that even with relatively small amounts of material in their context (ie. far less than 200K tokens), models can easily "overlook" salient information that's "buried" in it. Failure modes include telling the model to look at a series of commits and then observing how it "forgets" something critical from the first in the series: it proposes a change that looks like it only attended to the most recent tokens in its context window, and often ends up contradicting itself, or reimplementing a decision that it previously reverted. My suspicion is that when we have models with 10-million-token context windows, we'll still get the best results when we distill everything we want them to "know" into the first few thousand tokens.
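To make the first point concrete, the "separate, deterministic tool" can be as small as a script. A sketch under assumptions (the function and the `fetch_user`/`get_user` names are invented for illustration):

```python
import re

def rename_call_sites(source: str, old: str, new: str) -> str:
    """Deterministically rewrite every call of `old` to a call of `new`.

    Unlike an LLM, this does exactly the same thing at call site 1
    and at call site 300, so you only need to review the pattern,
    not every individual edit.
    """
    # The \b guard keeps e.g. `refetch_user(` from matching, and the
    # trailing `(` keeps `fetch_user_list(` from matching `fetch_user(`.
    return re.sub(rf"\b{re.escape(old)}\s*\(", f"{new}(", source)

before = "fetch_user(1); fetch_user(2); fetch_user_list()"
after = rename_call_sites(before, "fetch_user", "get_user")
# → "get_user(1); get_user(2); fetch_user_list()"
```

A real codemod would likely want a syntax-aware tool rather than a regex, but even this crude version has the property that matters: run it twice on the same input and you get the same output.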

Job security

In 2024 I said that I wasn't worried about AI taking my job in the near term, but that things could change quickly, and I advised to "judiciously use AI to get your job done faster". In 2026, AI has clearly gotten to the point where it is making real waves in tech workplaces. Not only is AI making it possible for people to ship more code faster than before, there is also considerable business pressure to make use of it in the name of maximizing productivity. Unfortunately, the signal here is very noisy: our corporate overlords can mandate the use of these tools and monitor their use, but I don't think we have reliable evidence yet on how much of this is unalloyed value, and how much of it is technical debt, latent regressions, and noise masquerading as productivity.

Now more than ever it seems important to not only use the machines to deliver useful work, but also to focus on the places where I as a human can still deliver value where a mere next-token-predictor cannot. The pressure on both of those fronts is only going to increase. I'd say that my feeling of "precariousness" is quite a bit stronger now than it was two years ago, and I'm not looking forward to seeing that trend continue, although I feel that it surely must.

In terms of job satisfaction, I've observed an inverse correlation: the more my job consists of me prompting the AI to do things for me, the less intrinsically satisfied I feel. This was one of the reasons why I had so looked forward to Advent of Code in December; I was itching to do some significant work with my own two hands. I look now towards the future with some dread, but also with a determination to not go gently into that good night: no matter what happens, I want to commit to finding things to take authentic pride in beyond "how I got a swarm of agents running in parallel to implement some set of inscrutable artifacts".

The impact on the world more generally

So far, I've been talking about how AI has affected my job. But "Gen AI", in particular, is having the expected effects on the wider world. Deep fakes, AI slop, and bot activity more generally are flooding YouTube, Twitter, and anywhere content is shared5. It seems that we're already well on our way into a "post-truth" world, where our ability to distinguish fact from falsehood has been devastatingly damaged, with no prospect of putting the genie back in the bottle, given the inevitably increasing capabilities of AI systems to produce this stuff at ever higher levels of quality.

One can hold out, clinging to reliable information sources, but in the end it seems unavoidable that these will be islands of truth surrounded by oceans of worthless, endlessly self-referential fabrication. I shudder to imagine what this looks like when you fast-forward a hundred years, or even ten...

  1. I wrote that piece in March 2024, so just a couple months shy of two years ago, to be precise.

  2. Maybe I should say "as well as" rather than "instead" because I still do chat with the LLM a fair bit when I want to ask it general questions about something in the world; but when doing almost anything related to coding, I almost exclusively do that via Claude Code.

  3. Technically, I am "abandoning" it in the sense of switching focus to another tmux pane, but Neovim continues running and I can dip in and out of it whenever I want.

  4. MCP nominally arrived in 2024, but as it required folks to actually build MCP servers, I think it's fair to say that it "arrived" in a tangible way in 2025.

  5. I almost wrote "where humans share content", but that's already appallingly misleading.

Lockout horror stories

Amazon

My account was banned recently because, years ago, I ordered two paper books that Amazon said would be split into two shipments. Both books arrived without any issues, but later Amazon refunded me for one of them, claiming that one package never arrived. This happened 4–5 years ago. Apparently, during a recent review, they decided this counted as fraud and banned my account. As a result, I can no longer log in and lost access to all my Kindle e-books. They also remotely wiped my Kindle, so my entire library is gone. I appealed the decision, but I've been waiting for over six months with no resolution.

icqFDR on Hacker News, 2025-12-19

A friend of mine received a double shipment for a $300 order. Being honest, he contacted customer service to arrange a return. Everything seemed fine until a few days later when he noticed they had also refunded his original payment. He reached out again to let them know, and they said they'd just recharge his card. Apparently, that transaction failed (no clear reason why), and without any warning, they banned his account, wiping out his entire Kindle library in the process.

egeozcan on Hacker News, 2025-12-19

Apple

My Apple ID, which I have held for around 25 years (it was originally a username, before they had to be email addresses; it's from the iTools era), has been permanently disabled. This isn't just an email address; it is my core digital identity. It holds terabytes of family photos, my entire message history, and is the key to syncing my work across the ecosystem.

The only recent activity on my account was a recent attempt to redeem a $500 Apple Gift Card to pay for my 6TB iCloud+ storage plan. The code failed. The vendor suggested that the card number was likely compromised and agreed to reissue it. Shortly after, my account was locked.

I effectively have over $30,000 worth of previously-active "bricked" hardware. My iPhone, iPad, Watch, and Macs cannot sync, update, or function properly. I have lost access to thousands of dollars in purchased software and media.

Paris Buttfield-Addison, 2025-12-13

(For additional context, see Daring Fireball and TidBITS.)

Google

Google strategically avoids the crush of users by offering little in the way of direct customer service. My calls to Mountain View HQ landed me in a labyrinth of recorded messages that inevitably led to a man, sounding only slightly less exasperated than I felt, shutting me down with a "Thankyougoodbye."

A few minutes into my Google-less existence, I realized how dependent I had become. I couldn't finish my work or my taxes, because my notes and expenses were stored in Google Drive, and I didn't know what else I should work on because my Google calendar had disappeared. I couldn't publicly gripe about what I was going through, because my Blogger no longer existed. My Picasa albums were gone. I'd lost my contacts and calling plan through Google Voice; otherwise I would have called friends to cry.

Living in the Bay Area, I have a fair number of Googler-friends, but the Googleplex has apparently grown so vast that none of them had any idea where to start.

In case you're wondering, in the end, I was fortunate. By Monday, a Googler filed the right internal escalation paperwork on my behalf and on Tuesday morning, six days after I lost access to my account, relayed that it had been restored.

My data was intact save for the last thing I'd worked on–a spreadsheet containing a client's account numbers and passwords. It seems that Google's engineers determined this single document violated policy and locked down my entire account. My request to get that document back is still pending.

I returned to the Google fold with eyes wide open to my responsibilities as a user. In relationship terms, I am no longer monogamous. I store my data on other servers maintained by providers like Evernote, Dropbox, and WordPress, and the cloud is my standby, not my steady. I've swapped convenience for control: I back up my email and what I care about most on physical hard drives.

Tienlon Ho, 2013-04-22
