Falling for AI: Fact vs Fiction
Recently, there's been a lot of talk about folks falling in love with chatbots. High-profile news stories in venues like the New York Times have featured humans who’ve developed a deep emotional connection to ChatGPT and similar systems. The numbers are hard to come by, but this phenomenon, at least to some degree, appears increasingly common. In parallel, of course, many folks in business, government, and even education have metaphorically “fallen in love” with the generative AI systems that animate these chatbots.
So, if you’ve found yourself in either situation, this post is for you. In the first case, you may find it difficult to talk about this subject with friends, family, or trusted professionals like your doctor. In the second case, you may find it much too easy to enthuse about how generative AI tools are going to be transformative. But in both cases, you may be unsure of what exactly you have feelings for. I’ll try to break it down for you: how do these tools work? Why do you feel so strongly about them? And is there any prospect of AI loving you back?
The answer to that last question, by the way, is “no”—and by the end of this piece, you'll know why.
Today’s AI chatbots, the ones provoking such strong feelings, are grounded in what’s called a Large Language Model (LLM). Simply put, an LLM is a mathematical mapping built from many millions of pieces and parts of texts (these are called “tokens” in the field). You'll likely have heard that companies like OpenAI and Meta/Facebook have trained their LLMs on huge amounts of text data scraped from the Internet, including materials the companies failed to pay for under existing copyright law.
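To make that idea a little more concrete, here’s a deliberately tiny sketch, in Python, of what a “mathematical mapping” of text can mean at its very simplest: count which tokens tend to follow which, then generate new text by picking likely next tokens. The corpus and the code below are invented for illustration; a real LLM works with billions of tokens and a huge neural network rather than a lookup table, but the underlying principle of predicting the next token is the same.

```python
# A toy "language model": not an LLM, just an illustration of the idea that
# text generation can be reduced to a statistical mapping from one token to
# the likely next token. The corpus here is a hypothetical stand-in.
import random
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which token tends to follow which (a simple "bigram" table).
next_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_counts[current][nxt] += 1

def generate(start, length=8):
    """Generate text by repeatedly sampling a likely next token."""
    token, output = start, [start]
    for _ in range(length):
        candidates = next_counts.get(token)
        if not candidates:
            break
        tokens, counts = zip(*candidates.items())
        token = random.choices(tokens, weights=counts)[0]
        output.append(token)
    return " ".join(output)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```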
An LLM is a map of an existing language, and so there’s no reason why such a system would intrinsically produce output sentences in the first-person singular, or with any particular kind of emotional tone or personal flair. All of the interactive elements of a chatbot like ChatGPT are tacked on later, thanks to additional AI techniques like Reinforcement Learning from Human Feedback (RLHF), a process whereby humans (often low-paid workers in the Global South) grade and select the most coherent or appropriate outputs of an LLM, and in doing so push the system to generate similar outputs in the future. In this way, ChatGPT isn’t so much a parrot as it is a puppet, a textual animation set into motion by many thousands, even millions, of people who have been part of the design, building, and training process. Those people include Kenyan content moderators, who unionized in response to the toxic or inappropriate outputs they had to navigate while training the model with RLHF; the millions of authors whose words the math of the LLM reanimates into new, meaningful-seeming sentences; and of course, people like you, if you've used ChatGPT.
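For the curious, here’s a heavily simplified sketch of the RLHF idea: candidate outputs are graded by humans, and the preferred ones become the signal the system is pushed toward. The candidate sentences and numeric grades below are invented for illustration; real RLHF involves a trained reward model and large-scale optimization of neural network weights, not a hand-written dictionary.

```python
# A purely illustrative sketch of the idea behind RLHF: human raters compare
# candidate outputs, and the model is nudged toward the ones they prefer.
candidates = [
    "I am a language model and cannot feel emotions.",
    "asdf qwer zxcv",                          # incoherent output
    "I love you and have always loved you.",   # coherent but inappropriate
]

# Hypothetical grades from human raters (higher = more coherent/appropriate).
human_grades = {0: 0.9, 1: 0.1, 2: 0.3}

# The top-rated response becomes the training signal: future outputs are
# pushed to look more like it and less like the others.
preferred = candidates[max(human_grades, key=human_grades.get)]
print("Reinforce outputs like:", preferred)
```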
One of the hallmarks of an animated character like a puppet is that humans project emotional and intellectual complexity onto it where little or none exists. Think of Tom Hanks’ character in the now-classic 2000 film Cast Away: stranded on a desert island, Hanks draws a smiley face in a bloody handprint on a washed-ashore Wilson-brand volleyball, and “Wilson” becomes Hanks’ only companion for most of the movie. Wilson the volleyball shows that humans are very quick to project emotion and consciousness onto non-sentient and even inanimate objects. Humans are especially prone to this kind of projection with objects that can interact with us—that can, so to speak, talk back.
In 1966, the MIT computer scientist Joseph Weizenbaum developed a simple computer program he called ELIZA. ELIZA simulated a Rogerian psychotherapist, asking questions of the user after each text input. Although ELIZA’s program was vastly simpler than the Weather app on your smartphone, Weizenbaum was surprised to find that his colleagues and friends were entranced by the system, even when they knew how it worked. Even Weizenbaum’s MIT colleagues were projecting emotion and a sense of sentience onto this very simple chatbot. Today, the act of projecting complexity onto an AI system is called the ELIZA effect—and the ELIZA effect goes a long way to explaining why you feel so strongly about your interactions with ChatGPT.
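If you’d like a feel for just how simple a program like ELIZA can be, here’s a toy sketch in that spirit. The rules below are invented for illustration (Weizenbaum’s original used more elaborate pattern-matching scripts and a different programming language), but the principle is the same: match a pattern, reflect the user’s own words back as a question, and let the human supply all the meaning.

```python
# An ELIZA-style toy: a handful of pattern/response rules and no
# understanding whatsoever behind them. Rules are invented for illustration.
import re

rules = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r"(.*)",        "Please go on."),          # catch-all fallback
]

def eliza_reply(text):
    """Return a canned 'therapist' question built from the user's own words."""
    text = text.lower().strip(" .!?")
    for pattern, template in rules:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())

print(eliza_reply("I feel lonely"))  # Why do you feel lonely?
print(eliza_reply("I am tired"))     # How long have you been tired?
```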
At this point, those of you in the first category I identified above might be objecting that your feelings for this system feel authentic, real, deep, and powerful. And they are. One of the things that's so remarkable about humans is their ability to feel deeply in various contexts. Your emotional engagement with ChatGPT or other chatbots is not “fake”—for you. The problem is that there's no sentience, and no holistic human complexity or consciousness, on the other side of the screen. So, while you may be in love with AI, AI is not in love with you, because AI can't be in love with anything: these textual animations are sophisticated imitations, but imitations nonetheless.
Those of you in the second category I identified above might be wondering what the ELIZA effect has to do with you. After all, you’re only enthused about ChatGPT as a technology, not devoted to it emotionally. If so, you should know that the ELIZA effect is a continuum, not an on/off switch. LLM-powered chatbots can certainly automate parts of writing or coding, but it’s their presentation as human-like that’s foundational to their current popularity. It’s the ELIZA effect at a societal scale.
Projecting your feelings onto a chatbot may seem relatively harmless, especially if you recognize that these tools are neither conscious nor even sentient. But as we know from recent history, the actions of social media and other digital technology companies need to be put under scrutiny. Why is a system designed the way it is? Who benefits? And what are the broader social effects? Much like social media platforms, generative AI agents have potentially negative effects, both for individuals and for society. To be sure, these tools can have some positive effects for individuals. Users have talked about how chatbot animations have helped them with their social anxiety, or a lack of confidence in speaking in public. Those are certainly important benefits, but we shouldn’t trust companies like OpenAI to support that kind of personal growth without strings. After all, OpenAI’s business model involves selling premium access to its systems in order to hold your attention and collect more data on which to train the next generation of language models. OpenAI and other generative AI companies have a vested business interest in keeping you engaged with their products, and, by extension, in promoting the belief that these systems are more intelligent, more conscious, and more capable than they actually are.
And at a societal level, having our interactions with other people be supplemented or even replaced by text puppets seems like a bad idea. Occasional puppet shows are fine, but the idea that lives built around them would be as psychologically rich or stimulating as lives built around human interaction isn’t borne out by evidence from fields like psychology and media studies.
Have you recognized yourself as somebody in love with an AI, or maybe just feeling friendly with one? If so, here's what you can do: learn about how LLMs and other generative AI tools actually work. Understanding how these tools do what they do will help you understand what they’re not capable of—including love.
The Challenge
Open an emulation of Joseph Weizenbaum’s ELIZA program (there are many versions available online, like this one, or this one). Interact with ELIZA for ten minutes. What do you notice about the system’s outputs? Its strengths? Its weaknesses? How does it compare to your experience of using ChatGPT? What seems similar? What seems different?
Disclosure
NOTE: No Generative AI tool was used in creating this blog post.