Kindness as Physics
Of Rabbits and Metaprogramming
Our rabbit died today.
She was old, as rabbits go—soft fur and that particular way she’d thump her back feet when you did something she disapproved of, which was most things. We knew it was coming. You always know. And still.
I’ve been thinking about what she left behind. Not the half-eaten carrot in the corner. Something else. A modification. She changed us somehow, in ways I’m still trying to articulate.
Grief does strange things to a mind. Some people clean obsessively. Some drink. Some watch the same TV show for sixteen hours straight. Mine went somewhere unexpected: to programming, to philosophy, to the Dalai Lama, and eventually to a simulation trying to understand why kindness matters. Not as a moral injunction—I’ve never been moved by those—but as something closer to physics. A stability condition. The only goal that doesn’t eventually eat itself.
This is the story of that rabbit hole. (I’m sorry. I had to.)
Here’s my current concept of the world, stated plainly because I don’t know how else to say it:
There is this deep void, this mysterious gloom, from which we all instantiate. Like a black hole of potential. Not nothing, but everything-that-could-be. Where does it come from? The fuck knows. It’s voids all the way down. Every time I think I’ve hit bottom, there’s another layer of “but why that?” underneath, and at some point you just have to stop asking and get on with things.
^ How I imagine the black hole of potential. (source)
From this void, we emerge. We embody information—information shaped by whatever laws govern this place to replicate itself. DNA, mostly. Instructions for making more of ourselves, refined by a few billion years of trial and error. Most trials failed. We’re the errors that worked.
But it’s not just genes. Information also passes through culture: stories, writing, the way your mother flinched at certain sounds and you flinch at them too without knowing why. Memes in the original Dawkins sense, not the internet kind. And family history—the accumulated weight of who came before you, encoded in ways we’re only starting to understand through epigenetics but have always felt in our bones. Your great-grandmother survived something terrible, and that survival is in you somewhere, shaping what you fear and what you hoard.
So we instantiate from this void, carrying patterns we didn’t choose. We’re not blank slates. We’re prefilled templates, heavily annotated. And then—here’s the part that gets me—we live, and in living, we modify the thing we came from.
You know Escher’s “Drawing Hands”? Two hands emerge from a page, each one drawing the other into existence. That’s the image. We are drawn by what came before, and we draw what comes after. Neither is primary. The loop is the whole thing.
In programming, this pattern has names. In JavaScript, it’s prototype pollution—when an instance modifies the prototype object, affecting all future instances. In Python, it’s metaclass manipulation. The instance reaches back and rewrites the class definition itself.
It’s considered bad practice. Cursed code. The kind of thing that makes senior engineers wince and reject your pull request.
Reality doesn’t care about best practices. Reality is extremely cursed in exactly this way.
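The pattern fits in a dozen lines of Python. A minimal sketch (the names and constants here are mine, not the simulation's): an instance, on dying, writes back to a class attribute that every later instance reads.

```python
class Creature:
    kindness_bias = 0.0    # the shared "template"

    def __init__(self, kindness):
        # every new instance starts from the baseline left by the dead
        self.kindness = kindness + Creature.kindness_bias

    def die(self):
        # the instance reaches back and rewrites the class itself
        Creature.kindness_bias += self.kindness * 0.01

Creature(0.8).die()    # a kind life nudges the template up
Creature(-2.0).die()   # a cruel one drags it down
# Creature.kindness_bias is now slightly negative
```

Two lives, and the class is no longer what it was defined as. That's the whole trick.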
When the rabbit died, I started building. The question I wanted to explore: what happens when instances can modify their class, and when the world itself depends on what those instances choose to do? Can you make a toy universe where you can actually watch kindness accumulate or erode, watch templates get corrupted, watch worlds freeze?
I started simple. A creature with two attributes: fur length and kindness. An environment with one variable: temperature.
The physics: colder temperatures require more fur. This isn’t a choice—it’s survival. Fall below the threshold and you die. Freeze. The simulation doesn’t care about your feelings. Neither does winter.
The constraint that makes things interesting: fur and kindness share a budget. Think of it as metabolic capacity, or attention, or simply the finite resources any creature has. More fur means less room for kindness. When it’s cold, kindness gets squeezed. This is the world we live in, abstracted: hard physical constraints that don’t care about your values.
budget = 2.5
fur_length = required_fur(temperature) # non-negotiable
kindness = budget - fur_length # what’s left
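`required_fur` is doing the physics here. One plausible shape, a linear ramp below a comfort temperature (the comfort point and rate are my inventions; the essay only says colder demands more fur):

```python
def required_fur(temperature, comfort=20.0, rate=0.1):
    # my guess at the curve: fur needs grow linearly as it gets colder,
    # and warm enough means no fur at all
    return max(0.0, (comfort - temperature) * rate)

budget = 2.5
fur_length = required_fur(5.0)    # 1.5 units of fur at 5 degrees
kindness = budget - fur_length    # 1.0 left over for kindness
```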
One more variable: effort. A hyperparameter. How much energy you put into cultivating kindness despite the fur tax.
At effort = 0, kindness decays. Every cycle, it drifts negative. Entropy wins by default. Without active maintenance, virtue erodes. This was the first thing the simulation taught me: without effort, things go to shit fast. I’m looking forward to reading that Stripe Press book on maintenance—on how civilization systematically undervalues the work of keeping things running. This is that, abstracted.
At effort = 1, kindness can grow—but you’re swimming upstream, paying a metabolic cost to be kind when comfort is right there, when withdrawal into yourself would be so much easier.
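The decay-versus-effort dynamic reads like a simple per-cycle update. The decay and gain constants below are illustrative guesses, chosen so that effort = 0 drifts negative and effort = 1 swims upstream:

```python
def step_kindness(kindness, effort, decay=0.01, gain=0.03):
    # entropy pulls kindness down every cycle; effort pushes back
    return kindness - decay + effort * gain

lazy, diligent = 0.5, 0.5
for _ in range(100):
    lazy = step_kindness(lazy, effort=0.0)
    diligent = step_kindness(diligent, effort=1.0)
# lazy has drifted negative; diligent has grown despite the decay
```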
Now the weird part.
In most programs, classes are static. You define them once, instances read from that definition. The template is fixed. This is how we usually think about inheritance—you receive something from ancestors, but you don’t change what they were.
But Python allows something stranger. Instances can reach back and modify the class they came from. New instances inherit the modified version. The template itself is mutable.
When creatures in my simulation die, they write back to the class definition. Their kindness (or cruelty) shifts the baseline future creatures inherit.
def _on_death(self):
    influence = min(1.0, self.age / 10)

    # Kindness bias shifts for all future creatures
    Creature._kindness_bias += self.kindness * 0.012 * influence

    # Kind creatures inject cooperative behaviors
    if self.kindness > 0.8:
        def echo(c, t):
            if c.kindness > 0:
                c.kindness += 0.002
        Creature._behavior_modifiers.append(echo)

    # Cruel creatures inject suspicion
    if self.kindness < -1.0:
        def drag(c, t):
            c.kindness -= 0.002
        Creature._behavior_modifiers.append(drag)
A kind creature adds a small echo of kindness to the template—a function that runs every cycle, giving a tiny boost to anyone already positive. A cruel creature adds a drag of suspicion that pulls everyone down. Over generations, these accumulate. The class itself changes.
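The essay never shows how the accumulated modifiers get applied, but presumably the live cycle runs each one. A minimal sketch of that half of the loop (the class shape is my assumption; the 0.002 echo and drag values are from the snippet above):

```python
class Creature:
    _behavior_modifiers = []   # grows as creatures die
    _kindness_bias = 0.0

    def __init__(self):
        self.kindness = Creature._kindness_bias  # inherit the drifted baseline

    def live_cycle(self, temperature):
        # every echo and drag left behind by the dead runs on every living creature
        for modifier in Creature._behavior_modifiers:
            modifier(self, temperature)

def echo(c, t):        # left by a kind death
    if c.kindness > 0:
        c.kindness += 0.002

def drag(c, t):        # left by a cruel death
    c.kindness -= 0.002

# one kind ancestor against two cruel ones
Creature._behavior_modifiers.extend([echo, drag, drag])

c = Creature()
c.kindness = 0.5
for _ in range(10):
    c.live_cycle(temperature=10.0)
# net -0.002 per cycle: the polluted template drains kindness over time
```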
In extreme cases, cruelty can corrupt the core behavior loop:
if self.kindness < -2.5:
    original = Creature.live_cycle
    def corrupted(self, temperature, world):
        original(self, temperature, world)
        if self.alive:
            self.kindness -= 0.005  # permanent tax
    Creature.live_cycle = corrupted
After this, every creature, every cycle, pays a kindness tax. The method is no longer what we defined—it’s been wrapped in a layer of inherited cruelty. If another cruel creature dies? Another layer. The corruption compounds.
We don’t get to define our species from scratch. We inherit corrupted methods from ancestors who lived through horrors we can barely imagine. We’re instantiating from a polluted prototype whether we like it or not.
Then I closed the loop.
What if kindness directed outward—toward maintenance of the shared world—actually warms the environment? Not metaphorically. Literally.
warming = kindness_pool * 0.003
neglect_cooling = -0.008 # always there, every cycle
base_temperature += warming + neglect_cooling
That neglect_cooling = -0.008 is the key. It’s always there. Every cycle. The world cools by default. It takes active effort just to stay even. Without world-directed kindness, temperature drifts down. Not dramatically, not all at once. Just a slow cooling. Entropy. Neglect. The constant background hum of things falling apart when no one pays attention.
And when temperature drops, fur requirements rise. When fur rises, kindness gets squeezed. When kindness drops, world contribution drops. Temperature drops further.
The doom spiral has no floor.
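Wired together, the whole feedback loop is only a few lines. The 0.003 warming and -0.008 neglect cooling are the essay's numbers; the fur curve, starting temperature, and population size are my assumptions:

```python
def run(cycles, effort, population=10):
    # toy feedback loop: temperature -> fur -> kindness -> temperature
    budget, temperature = 2.5, 15.0
    for cycle in range(cycles):
        fur = max(0.0, (20.0 - temperature) * 0.1)        # colder -> more fur
        per_creature = max(0.0, (budget - fur) * effort)  # what's left, if cultivated
        kindness_pool = per_creature * population
        temperature += kindness_pool * 0.003 - 0.008      # warming minus neglect
        if temperature <= 0.0:
            return cycle                                  # world froze
    return None                                           # survived the run

frozen_at = run(3000, effort=0.0)   # comfort only: the world freezes
alive = run(3000, effort=1.0)       # world-directed kindness: it never does
```

With effort at zero the pool is empty, only the neglect term runs, and the freeze is just arithmetic. With effort at one the pool outpaces the cooling and the loop runs the other way.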
I ran it with everyone optimizing for comfort—fur, self-protection, withdrawal. Collapsed at cycle 88. Base temperature hit zero. I watched the numbers scroll past: temperature 0.6, 0.4, 0.2, then that satisfying round 0.0. World frozen. Simulation still running, but nothing can recover.
I ran it with everyone doing nothing, no directed effort. Same destination, different path.
I ran it with everyone optimizing for kindness directed at the world. The only scenario where the world got warmer. Where kindness bias stayed positive. Self-reinforcing. The opposite of a doom spiral. A bloom spiral, maybe, if that’s a term. It should be.
The Dalai Lama talks about acting “for the benefit of all beings.”
I used to hear this as moral exhortation—the kind of thing spiritual teachers say that sounds beautiful but dissolves on contact with rent payments and difficult people. Aspirational. Impractical.
Now I hear it differently. It’s not ethics. It’s physics.
“For the benefit of all beings” describes the only goal that preserves the conditions for its own existence. Everything else collapses. The Dalai Lama didn’t have Python, but he identified the fixed point anyway.
It’s sort of funny, actually, imagining the Buddha or Jesus working this out computationally. The Buddha under the Bodhi tree, mass-commenting out the desire module. “Attachment to outcomes... that’s the memory leak.” Running the simulation again. “Compassion for all sentient beings—wait, that actually stabilizes? Let me check the edge cases.” Jesus in the desert, forty days into a bug hunt, mass-deleting commented-out code from Leviticus. “An eye for an eye creates an infinite loop, obviously.” The Eightfold Path as a requirements spec. The Sermon on the Mount as documentation that nobody reads. “Love thy neighbor” not as a guilt trip but as the only function that doesn’t eventually throw an exception.
They got there without the code. Which is either evidence that these truths are accessible through contemplation, or evidence that I wasted time building something obvious. Probably both. Sometimes you have to rediscover things the hard way before they feel real.
One detail in the simulation moved me more than expected.
When a creature with high effort and positive kindness dies, there’s a chance they expand the budget for future generations. They create slack. Loosen the constraint.
if self.effort > 0.7 and self.kindness > 0.5:
    influence = min(1.0, self.age / 10)
    Creature._budget_expansion += 0.012 * influence
The sacrifices of the virtuous dead make it easier to be kind in the future. They don’t just pass on their kindness—they expand what’s possible. They make room.
Over time, if enough creatures try hard enough, the species can afford both fur AND kindness. The zero-sum constraint loosens through accumulated sacrifice. The efforts of the dead expand what’s possible for the living.
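For the expansion to loosen anything, it has to feed back into what new creatures start with. A sketch of that plumbing (the class shape is my assumption; the 0.012 increment and influence cap come from the snippet above):

```python
class Creature:
    _budget_expansion = 0.0   # slack accumulated by the virtuous dead
    BASE_BUDGET = 2.5

    def __init__(self):
        # each generation inherits the base budget plus the accumulated slack
        self.budget = Creature.BASE_BUDGET + Creature._budget_expansion

    def on_death(self, effort, kindness, age):
        if effort > 0.7 and kindness > 0.5:
            influence = min(1.0, age / 10)
            Creature._budget_expansion += 0.012 * influence

elder = Creature()
elder.on_death(effort=0.9, kindness=0.8, age=12)  # full influence
child = Creature()
# the child can afford slightly more fur AND kindness than the elder could
```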
The model also helped me articulate something I’ve always felt but couldn’t justify: my visceral reaction to unkind people.
I’ve always had this. Someone cuts in line, someone’s rude to a waiter, someone does that thing where they’re technically correct but clearly trying to make someone else feel small—and I feel a disproportionate rage. Not just annoyance. Something closer to betrayal. I’ve never been able to explain why the anger felt so large relative to the offense.
Now I have words for it. It’s not just that unkindness is unpleasant. It’s what it does.
For their own petty advantage, unkind people drag down the whole ship. They extract value from the world-maintenance that others provide while contributing nothing. They free-ride on warmth that kindness creates, adding only cold.
Worse—they corrupt the template. They inject suspicion into the behavioral modifiers. Every unkind death adds a function that drains kindness from every future creature, every cycle, forever. You can watch it in the simulation: the _behavior_modifiers list grows with suspicion functions, the kindness bias goes negative, and what it means to be a creature gets slowly poisoned by everyone who couldn’t be bothered.
I ran a scenario that felt uncomfortably realistic: 80% comfort-seekers, 20% kind. The world survives longer—the 20% are doing all the work, maintaining temperature while being dragged toward cruelty by the corrupted prototype. But watch what happens:
A slice of the event log:
Gen 209: ☠ Suspicion spreads
Gen 209: ☠☠ CORRUPTED live_cycle
Gen 220: ☠☠ CORRUPTED live_cycle
Gen 225: ☠ Suspicion spreads
Gen 228: ☠ Suspicion spreads
Cycle 295: ❄❄❄ WORLD COLLAPSE ❄❄❄
Gen 230: ☠ Suspicion spreads
Gen 231: ☠ Suspicion spreads
The kind creatures held it together for 295 cycles. Then the method got corrupted one too many times, the kindness bias drifted too negative, and the world collapsed anyway. The parasites eventually corrupt their hosts.
This is, I suspect, roughly where we are. Most people aren’t actively cruel. They’re just optimizing for comfort. For insulation. For not-my-problem. And a minority are doing the maintenance work—teachers, nurses, volunteers, the ones who give a shit—and they’re burning out because the job never ends and no one thanks you for it, and the template is drifting, and the world is cooling.
You could object: I built the model, so of course it confirms my intuition. And there's no literal relationship between kindness and world temperature. Fair.
But if we don’t care about rabbits—about ecosystems, about the non-human world—nature goes to shit. (Already happening.) If we don’t maintain institutions with good faith, they decay. If we don’t invest in kindness, we inherit a corrupted template. If we all withdraw into our private comforts, the shared spaces get cold.
The specific mechanism differs. The structure is the same.
So: a rabbit dies, and I build a philosophical simulation about the metaphysics of kindness.
What I was really doing was trying to understand what she left behind. Not just grief—that’s straightforward, if terrible—but the modification. How loving her changed us. What got written into the template that we’ll carry forward whether we notice it or not.
She was presence. Soft and twitching and demanding treats. Small acts of care that structured time—checking water, feeding greens. She taught something about gentleness without anyone saying anything about gentleness. And then she was absence. A shape where she used to be. Both are information. Both modify the template.
There is no aim, ultimately. The void doesn’t have preferences. The universe doesn’t care if we write kindness or cruelty. Physics doesn’t judge. The simulation runs regardless.
But given that we can choose—that’s the strange miracle, that we get to pick—we might as well modify the underlying information in a direction that benefits all beings. “Almost anything you give sustained attention to will begin to loop on itself and bloom”, as Henrik Karlsson puts it elegantly. Not because it’s required. Who would require it? The void is silent. But because it’s the only choice that maintains the conditions for choosing at all. Every other goal eventually collapses the world that makes goals possible.
The rabbit didn’t know any of this. She just lived—thumping when annoyed, accepting treats, being soft and present. And then she wasn’t.
But we knew her. We loved her. And that love was an effort, a small expenditure that expanded something in us. Made room for kindness. Wrote something into the template.
That’s the fixed point. The only stable attractor. Not comfort, not accumulation, not even survival for its own sake. Just: kindness directed outward. Maintenance of the shared world. Benefit of all beings, including the small furry ones who thump when you do something they disapprove of.
Be kind, people. Not because you should. Because it’s the only thing that works.
The code is available here. Play with the effort parameter. Watch what happens when kindness goes to zero. Watch what happens when it doesn’t. PRs welcome.


