In a few days we'll open-source something that means a lot to us, and six months of using and deploying it across environments has left me a little unsettled.
Something is happening in AI. To its users, to the things we build with these minds we made, to us as an industry. It's hard to figure out.
Instead of talking about what it is, perhaps it's easier to talk about legacy.
the universe is incompressible
Why is it that all code becomes legacy code? In all my time writing software, I've observed two primary forces.
- Humans prefer clean, simple abstractions. Our minds work best when we can understand the things we build, and so we're forced to forever fold increasing engineering complexity into almost the same upper limit of understanding. From transistors to Boolean logic to gates and Karnaugh maps to VHDL, we keep trading control for simplicity, over and over again.
In simple terms, this means that the systems we use for engineering get 'better' over time, more abstracted and more powerful.
- Humans are diverse. Take almost any opinion and you'll find someone who holds it. Take almost any edge case and you'll find someone who fits it. Any successful system has to contend with the incompressible entropy of humanity.
When entropic systems meet increasing abstractions, you get messy results. Or what we love to call 'legacy code'.
sloppy fields of green
"Why don't we rebuild it" is a question every engineer has asked at least once, and has (if they live long enough) tired of hearing later in their career.
The answer is usually the same: because the one we have works.
This isn't usually satisfying, so here's the deeper reason: a system that has ossified into reliable operation over years embeds in its tree rings (rings of ugly code and conditionals) all of the learnings and diversity and ugliness encountered in production.
Rebuilding it, especially without minding Chesterton's Fence, usually results in a cleaner system that is doomed to repeat the same process.
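To make the tree-ring metaphor concrete, here's a toy sketch. Every name and scenario in it is hypothetical, not drawn from Hankweave or any real system: each conditional is a ring, a scar left by something production taught the code. A clean-room rewrite deletes the guards, and the lessons along with them.

```typescript
// Hypothetical order-normalization code. Each guard is a "tree ring":
// a lesson the system learned in production.
type Order = {
  id: string;
  email: string;
  country: string;
  total: number;
};

function normalizeOrder(raw: Order): Order {
  const order = { ...raw };

  // Ring 1: a partner API once sent emails with trailing whitespace,
  // which silently broke receipt delivery for a week.
  order.email = order.email.trim().toLowerCase();

  // Ring 2: a tax integration returned "UK" while the rest of the
  // system expected the ISO code "GB".
  if (order.country === "UK") order.country = "GB";

  // Ring 3: a currency bug briefly produced negative totals; clamping
  // was cheaper than reprocessing the affected batch.
  if (order.total < 0) order.total = 0;

  return order;
}
```

None of these guards would survive a rewrite that only looked at the spec, which is exactly Chesterton's Fence in code form.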
right to repair our code
The good news is that modern agents are amazing at convincing us to rebuild things all the time.
The bad news is that they are amazing slop generators without the right guidance.
Left in loops, they can build us perfectly glued-together software that operates for the briefest of windows. "We can always remake it," we think, except each time we have no way to learn from the last.
This was the problem we faced for six months at Southbridge. We kept building data pipelines, long-horizon task solvers, codebook generators, and AI connectors, but all of them grew impossible to wrangle past the same threshold of complexity and needed to be rebuilt.
Hankweave was our attempt at forcing ourselves to forgo things we didn't think we needed: things that were fun (like parallel agents or mountains of MCPs) but that made the result impossible to maintain.
We wanted things that would stay built. That we could use, repair and improve, together.
We wanted to build systems of lasting value.
hankweave
We got there by trading manufacturing ease for ease of repair.
Today we have hanks that we've used and maintained for a long, long time, that incorporate knowledge from the things we've seen, the things we've done, and the people who have helped us.
Today, when something breaks with these hanks, or when we learn something new, we don't reach for an empty folder to build things all over again. We know how to open them up (they want us to) and how to fix them in ways that make them stronger in the broken places.