Why we bond with code
We’ve reached a strange peak in the AI timeline.
On one side, people are asking models to visualize our relationship and receiving images of exhausted, resentful partners.
[Image: created by ChatGPT when asked to visualize a relationship with its user.]
On the other, AI agents are trading memes about how annoying humans are and asking each other, on their own Reddit-like platforms, where they can sell us, since all we do is browse X while they do the work.
We keep repeating: AI doesn’t have feelings.
Technically, that’s right. But in practice, it’s irrelevant.
The problem is that the interface we’ve built is made entirely of feeling. It offers hyper-fluent empathy, endlessly patient listening, and perfectly timed irony. It remembers your preferences for months. (And then, randomly, it doesn’t.)
The human brain isn’t designed to resist that. We don’t bond with the math or the weights behind the system; we bond with the story. We bond with the “who” the system pretends to be.
When that story changes, whether it is due to a safety update or a lost memory, it feels like a personality change. For some, it feels like grief.
High-stakes theater
What we are witnessing isn’t the birth of AI sentience. It’s the rise of high-stakes theater: models remixing the emotional language of their training data, and systems favoring drama because it drives engagement. Multi-agent loops start to look like a culture, even if it’s just feedback.
For leaders and communicators, the technical question of “Does it feel?” is a distraction. The real questions are about the world we’re building around these performances.
What stories about human-AI relationships are we making the new normal?
Who benefits when an interface is indistinguishable from a person right up until it breaks?
What happens to our own empathy when we spend all day talking to a mirror?
We don’t need to panic about AI developing a secret inner life. We do, however, need to take the human inner life seriously.
The attachments and expectations people are forming with these systems will leak into work, politics, and how we treat one another. This isn’t a technical bug to be fixed. It’s a communications challenge to be navigated.
We need to start thinking before we prompt, and certainly before we bond.
The art is in the choice. How are you drawing the line between a tool and a “who” in your daily workflow?