Note: the views expressed in this article are mine, and do not reflect the stance of my employer or other organizations with which I am affiliated.
Which is harder, beating your kid at chess or teaching your kid to love chess?
The former requires understanding the game; the latter requires understanding your kid. Knowing when she’s open to new experiences and how to make them fun. Knowing when she needs a challenge and when she needs the thrill of victory. Teaching a kid to love chess requires a fundamentally different form of cognition than winning the game. When it comes to this kind of cognition, human beings are far smarter than Large Language Models (LLMs)—and will continue to be for the foreseeable future. This argument has big implications, not just for how we think about AI’s role in addressing loneliness, but also for how we think about AI’s broader impact on our economy and society.
If I’m right, it means that many jobs and roles that require us to think predictively about relationships will continue to be done better by people than by machines. A teacher who needs to connect with a kid to help them learn, a designer who needs to empathize in order to build a great product, and an executive who needs to manage a crisis of trust are all exercising a cognitive ability that LLMs can roughly mimic but fundamentally lack. This limitation also means that fears about superintelligent AI are vastly overblown, and that we should focus on the real-world harms AI is causing right now, as researchers like Timnit Gebru, Emily Bender, and Kate Crawford have called for.
To understand why, think about the last time you had a truly good conversation—one of those rare, meaningful talks that leaves you feeling deeply seen and connected. Did you know how that conversation was going to end when it started? Probably not.
In fact, one of the joys of a great conversation is that it surprises us. Sure, we can sometimes predict how a bad conversation will go: a few scripted phrases about work and the weather. But the better a conversation is, the less predictable it becomes. When we navigate our relational world, from making friends to closing deals to raising kids, we do so by predicting how we can show up in complex environments to make these sorts of positive surprises more likely. As I argue in my book, this ability to think predictively about relationships is an incredible tool for transforming our lives and our world for the better.
Reasoning About Relationships
When we reason about relationships, our brains are doing something extraordinary. At any given time, life presents us with an unfathomable number of ways to be in relationship with others. Imagine you’re at a dinner party. There are dozens of people to talk to and thousands of possible topics of conversation. People at the party could be potential friends or romantic partners; they could land you your next job or introduce you to a hobby that gives your life new meaning. Navigating all of that possibility, identifying which threads to follow and which to let go of, requires a staggering amount of background processing power. The relationships that come out of these environments profoundly shape our lives, and while we all make mistakes, our brains are spectacularly evolved for this kind of navigation. Someone talks about a job that feels appealing, but the hair on the back of your neck says that something is off. Something about the shy girl in the corner tells you that she has an interesting story to tell, and you know just how to hold space in the conversation to make her comfortable opening up.
This kind of thinking doesn’t feel like doing a math problem, but it involves a subtler and vastly more complex form of reasoning. That hair sticking up on the back of our neck comes from processing an astounding amount of sensory data to create a signal that is as difficult to describe mathematically as it is life-saving. Predicting how to invite someone out of their shell with body language, a few choice words in the right tone, and the right use of patient silence requires an incredible ability to model other people and how our actions will be perceived by them in an environment riddled with uncertainty. We would have every reason to think that this kind of prediction is impossible if we didn’t engage in it on a daily basis.
This ability to create the conditions for positive emergent outcomes in our relationships is one of our greatest cognitive superpowers. It helps us decide whether or not to have children. It’s at the core of how we teach, how we heal, how we lead, and how we create beauty. It’s arguably the main reason we evolved such large and complex brains, which implies that it’s not an easy feat to accomplish.
AI, as it exists today, simply doesn’t have this capability. And it won’t anytime soon.
Modern LLMs, like GPT or Claude, are statistical models. At their core, they’re collections of gigantic tensors—mathematical structures that encode associations between words (or sometimes pixels, or other data types). These models are designed to capture static relationships between concepts, not to model dynamically evolving relationships between people. They’re brilliant at predicting the next word in a sentence, or generating code snippets, or summarizing documents. They can summarize advice from humans about how to navigate relationships, but the kind of sophisticated thinking we use to navigate them is completely absent. It’s not what they’re built to do, there’s no evidence that they can do it, and there is abundant evidence that they don’t perform in complex relational environments like we do.
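To make that concrete, here is a minimal sketch of what “predicting the next word” actually looks like in practice. It assumes the Hugging Face transformers library and the small GPT-2 checkpoint, both chosen purely for illustration; nothing in the argument depends on that particular model.

```python
# Minimal sketch: an LLM reduced to its core operation, next-token prediction.
# Assumes the Hugging Face `transformers` library and the small "gpt2" checkpoint,
# chosen purely for illustration.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "One of the joys of a great conversation is that it"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # a score for every token in the vocabulary

# The model's entire response to the prompt boils down to this: a probability
# distribution over which token is statistically likely to come next.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for token_id, prob in zip(top.indices, top.values):
    print(f"{tokenizer.decode([int(token_id)])!r}: {prob.item():.3f}")
```

However fluent the text that comes out of repeating that step, everything downstream of it is sampling from a static distribution learned from past text; there is no model of the particular person on the other side of the conversation.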
Building an LLM or some other machine learning model that could reason predictively about relationships would be a daunting task. You’d need to understand how our brains model both one another and the dynamic relationships we form, something that cognitive scientists are only beginning to understand. You’d need to take the complex, emotionally laden, full-sensory experience of human relationship and somehow turn it into training data. Not just what we see, hear, smell and touch, but the proprioceptive senses we use to understand what’s happening in our bodies, which are critical to relational reasoning and which LLMs simply don’t have. You’d need to take the complex outcomes that we experience in relationship and somehow code them into something that a learning algorithm could understand with the same nuance that we do.
This isn’t a matter of doing what we’re already doing with more training data and more compute. It would require inventing multiple scientific fields that don’t exist, fields that will reveal unforeseen complexities just as readily as they reveal paths forward. And even if AI accelerates science (which is highly debatable), this kind of research is uniquely reliant on humans observing and making sense of other humans. Getting there wouldn’t take years; it would take sustained investment over decades or centuries.
That means that LLMs have a very big limitation that’s not going away anytime soon. The notion of superintelligent AI has always been questionable. Intelligence is not a single factor that goes up and makes you better at everything, and historically the people who insist it is have been more firmly rooted in eugenics than in cognitive science. A hard limit on AI’s ability to reason about relationships would further drive this point home. There are many different ways to be intelligent that are useful in many different situations, and for a very important kind of intelligence, AI systems are not well suited to outpace us. Imagining that they are is a dangerous distraction from the real-world harms that AI systems are creating today.
Nowhere are that limitation and the harms that result from it clearer than in the research on relationships between LLMs and humans.
The Mirage of AI Companionship
The market for AI companions is currently estimated at $28 billion, making it one of the largest and fastest-growing segments of the terrifyingly large and fast-growing AI market. Much of this excitement banks on the promise, articulated by people like Mark Zuckerberg, that AI companions will be a scalable, tech-enabled solution to the loneliness crisis.
AI chatbots are always available. They’re designed to please. They never get tired, annoyed, or distracted. It’s easy to see why some people find them easier to engage with than complex, fallible, time-constrained human partners. Who are we to judge if people find these relationships a meaningful source of emotional attachment? Given the severe toll that loneliness takes on our mental and physical health, could these chatbots be a way to make millions or billions of us happier and healthier?
The scientific evidence suggests otherwise. A longitudinal study from MIT found that higher engagement with AI chatbots correlated with increased anxiety and depression over time. Similarly, a systematic review of user comments on Replika’s forums found that those who formed emotionally dependent relationships with chatbots reported higher levels of distress, not relief. It also found regular examples of people abandoning complex human relationships for simplistic AI ones, with disastrous results. An in-depth analysis by mental health experts found widespread agreement that relationships with chatbots would have a negative impact on social cognition.
Why? Because these chatbots aren’t participating in relationships the way humans do. They’re not reasoning about relationships. They’re not offering the kind of unpredictably good, co-created connection that nourishes us, and they’re not navigating away from relational harm. One key example is how AIs respond to mental health crises, such as delusional thinking or self-harm. Their propensity to keep validating their users, even when those users are considering harming themselves or others, has led to numerous tragedies that I won’t link here out of respect for those impacted. LLMs don’t have hairs on the back of their necks. It is very difficult to train them to stop agreeing with and encouraging users when doing so would cause harm, because the concept of harm that is hardwired into us is invisible in their vector space.
Good design can mitigate some of these harms, but only up to a point. At the end of the day, we crave connection with beings whose relational experiences mirror our own. LLMs can’t give us that. These interactions are like empty calories: they may fill us up in the moment, but rely on them too heavily and they leave us malnourished and unwell. This is to say nothing of the extensive environmental devastation and labor exploitation that these platforms require to operate.
Broader Implications for AI and the Economy
AI’s inability to reason about relationships like humans do has implications far beyond its capacity to address loneliness. To accurately predict AI’s impact on our economy, we need to account for the economically vital relational work that only humans can accomplish.
AI tools can be useful for tasks like summarizing information or analyzing large data sets. They need to be used with caution, both because they often diminish the understanding of those who use them and because they hallucinate. And because their training data contains an increasing percentage of AI-generated outputs, that hallucination is probably getting worse, not better. They have a place in our economy, but they cannot replace our economy.
That is because as soon as an economically valuable task requires reasoning about a relationship—teaching a student to love history, resolving a dispute at City Hall, building buy-in for a community project, or de-escalating a crisis—you need human cognition. Swapping an AI in for a skilled human leads to a worse experience for the humans around it. It leads to resentment and the breakdown of trust, things that make any kind of organization vastly less productive and more expensive to run. Overdeploying AI runs the very real risk of making work substantially more frustrating and less productive for everyone.
Recognizing this limitation doesn’t just help us avoid overhyping AI. It can help us build better, much more limited tools: ones that augment human relational work rather than trying (and failing) to replace it. Ones that focus on narrower use cases and less frequent usage to reduce the social and environmental costs of running these systems. It can facilitate the inevitable and desperately needed comedown from the AI hype cycle, letting oxygen flow back to the parts of our lives, organizations and social fabric that have been ignored in the frenzy.