How the first chatbot predicted the dangers of AI over 50 years ago

From ELIZA onwards, humans love their digital reflections. 

It didn’t take long for Microsoft’s new AI-infused search engine chatbot — codenamed “Sydney” — to display a growing list of discomforting behaviors after it was introduced early in February, with weird outbursts ranging from unrequited declarations of love to painting some users as “enemies.” 

In 1966, MIT computer scientist Joseph Weizenbaum released ELIZA (named after the fictional Eliza Doolittle from George Bernard Shaw’s 1913 play Pygmalion), the first program that allowed some kind of plausible conversation between humans and machines. The process was simple: Modeled after the Rogerian style of psychotherapy, ELIZA would rephrase whatever speech input it was given in the form of a question. If you told it a conversation with your friend left you angry, it might ask, “Why do you feel angry?” 

Ironically, though Weizenbaum had designed ELIZA to demonstrate how superficial the state of human-to-machine conversation was, it had the opposite effect. People were entranced, engaging in long, deep, and private conversations with a program that was only capable of reflecting users’ words back to them. Weizenbaum was so disturbed by the public response that he spent the rest of his life warning against the perils of letting computers — and, by extension, the field of AI he helped launch — play too large a role in society. 

ELIZA built its responses around a single keyword from users, making for a pretty small mirror. Today’s chatbots reflect our tendencies drawn from billions of words. Bing might be the largest mirror humankind has ever constructed, and we’re on the cusp of installing such generative AI technology everywhere. 

But we still haven’t really addressed Weizenbaum’s concerns, which grow more relevant with each new release. If a simple academic program from the ’60s could affect people so strongly, how will our escalating relationship with artificial intelligences operated for profit change us? There’s great money to be made in engineering AI that does more than just respond to our questions, but plays an active role in bending our behaviors toward greater predictability. These are two-way mirrors. The risk, as Weizenbaum saw, is that without wisdom and deliberation, we might lose ourselves in our own distorted reflection. 

ELIZA showed us just enough of ourselves to be cathartic

Weizenbaum did not believe that any machine could ever actually mimic — let alone understand — human conversation. “There are aspects to human life that a computer cannot understand — cannot,” Weizenbaum told the New York Times in 1977. “It’s necessary to be a human being. Love and loneliness have to do with the deepest consequences of our biological constitution. That kind of understanding is in principle impossible for the computer.” 

That’s why the idea of modeling ELIZA after a Rogerian psychotherapist was so appealing — the program could simply carry on a conversation by asking questions that didn’t require a deep pool of contextual knowledge, or a familiarity with love and loneliness. 

Named after the American psychologist Carl Rogers, Rogerian (or “person-centered”) psychotherapy was built around listening and restating what a client says, rather than offering interpretations or advice. “Maybe if I thought about it 10 minutes longer,” Weizenbaum wrote in 1984, “I would have come up with a bartender.” 

To communicate with ELIZA, people would type into an electric typewriter that wired their text to the program, which was hosted on an MIT system. ELIZA would scan what it received for keywords that it could flip back around into a question. For example, if your text contained the word “mother,” ELIZA might respond, “How do you feel about your mother?” If it found no keywords, it would default to a simple prompt, like “tell me more,” until it received a keyword that it could build a question around. 
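
In modern terms, the whole mechanism fits in a few lines. Below is a minimal Python sketch of that keyword-and-flip approach; the keyword table and fallback prompts are illustrative stand-ins, not Weizenbaum’s original script, which used ranked keywords and more elaborate reassembly rules.

```python
# A toy, ELIZA-style responder. The keywords and question templates here are
# hypothetical examples for illustration, not the original program's script.

KEYWORD_QUESTIONS = {
    "mother": "How do you feel about your mother?",
    "angry": "Why do you feel angry?",
    "friend": "Tell me more about your friend.",
}

FALLBACK_PROMPTS = ["Tell me more.", "Please go on.", "How does that make you feel?"]


def eliza_reply(user_text: str, turn: int = 0) -> str:
    """Scan the input for a known keyword and flip it into a question;
    with no keyword match, fall back to a generic prompt."""
    lowered = user_text.lower()
    for keyword, question in KEYWORD_QUESTIONS.items():
        if keyword in lowered:
            return question
    return FALLBACK_PROMPTS[turn % len(FALLBACK_PROMPTS)]


print(eliza_reply("A conversation with my friend left me angry."))
# -> "Why do you feel angry?"  ("angry" is the first matching entry in the table)
```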

Weizenbaum intended ELIZA to show how shallow computerized understanding of human language was. But users immediately formed close relationships with the chatbot, stealing away for hours at a time to share intimate conversations. Weizenbaum was particularly unnerved when his own secretary, upon first interacting with the program she had watched him build from the beginning, asked him to leave the room so she could carry on privately with ELIZA. 

Shortly after Weizenbaum published a description of how ELIZA worked, “the program became nationally known and even, in certain circles, a national plaything,” he reflected in his 1976 book, Computer Power and Human Reason. 

To his dismay, the potential to automate the time-consuming process of therapy excited psychiatrists. People so reliably developed emotional and anthropomorphic attachments to the program that it came to be known as the ELIZA effect. The public received Weizenbaum’s intent exactly backward, taking his demonstration of the superficiality of human-machine conversation as proof of its depth. 

Weizenbaum thought that publishing his explanation of ELIZA’s inner functioning would dispel the mystery. “Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away,” he wrote. Yet people seemed more interested in carrying on their conversations than interrogating how the program worked. 

If Weizenbaum’s cautions settled around one idea, it was restraint. “Since we do not now have any ways of making computers wise,” he wrote, “we ought not now to give computers tasks that demand wisdom.” 


Sydney showed us more of ourselves than we’re comfortable with

If ELIZA was so superficial, why was it so relatable? Since its responses were built from the user’s immediate text input, talking with ELIZA was basically a conversation with yourself — something most of us do all day in our heads. Yet here was a conversational partner without any personality of its own, content to keep listening until prompted to offer another simple question. That people found comfort and catharsis in these opportunities to share their feelings isn’t all that strange. 

But this is where Bing — and all large language models (LLMs) like it — diverges. Talking with today’s generation of chatbots is speaking not just with yourself, but with huge agglomerations of digitized speech. And with each interaction, the corpus of available training data grows. 

LLMs are like card counters at a poker table. They analyze all the words that have come before and use that knowledge to estimate the probability of what word will most likely come next. Since Bing is a search engine, it still begins with a prompt from the user. Then it builds responses one word at a time, each time updating its estimate of the most probable next word. 
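
The loop itself is simple, even if the model behind it is not. Here is an illustrative Python sketch of that predict-append-repeat cycle. The tiny hand-written probability table is a hypothetical stand-in for a neural network trained on billions of words: it only conditions on the previous word, whereas a real LLM weighs the entire conversation so far, and the greedy “pick the most probable word” rule is just one of several decoding strategies real systems use.

```python
# Illustrative autoregressive decoding loop. The probability table below is
# invented for demonstration; it stands in for a large neural language model.

NEXT_WORD_PROBS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}


def generate(prompt: str, max_words: int = 5) -> str:
    """Repeatedly pick the most probable next word and append it."""
    words = prompt.split()
    for _ in range(max_words):
        candidates = NEXT_WORD_PROBS.get(words[-1])
        if not candidates:
            break  # no prediction available for this context
        words.append(max(candidates, key=candidates.get))  # greedy choice
    return " ".join(words)


print(generate("the"))  # -> "the cat sat down"
```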

Once we see chatbots as big prediction engines working off online data — rather than intelligent machines with their own ideas — things get less spooky. It gets easier to explain why Sydney threatened users who were too nosy, tried to dissolve a marriage, or imagined a darker side of itself. These are all things we humans do. In Sydney, we saw our online selves predicted back at us. 

But what is still spooky is that these reflections now go both ways. 

From influencing our online behaviors to curating the information we consume, large AI programs are already changing the people who interact with them. They no longer passively wait for our input. Instead, AI is now proactively shaping significant parts of our lives, from workplaces to courtrooms. With chatbots in particular, we use them to help us think and give shape to our thoughts. This can be beneficial, like automating personalized cover letters (especially for applicants for whom English is a second or third language). But it can also narrow the diversity and creativity that arise from the human effort to give voice to experience. By definition, LLMs suggest predictable language. Lean on them too heavily, and that algorithm of predictability becomes our own.

For-profit chatbots in a lonely world

If ELIZA changed us, it was because simple questions could still prompt us to realize something about ourselves. The short responses had no room to carry ulterior motives or push their own agendas. With the new generation of corporations developing AI technologies, the change is flowing both ways, and the agenda is profit.

Staring into Sydney, we see many of the same warning signs that Weizenbaum called attention to over 50 years ago. These include an overactive tendency to anthropomorphize and a blind faith in the basic harmlessness of handing over both capabilities and responsibilities to machines. But ELIZA was an academic novelty. Sydney is a for-profit deployment of ChatGPT, a $29 billion investment, and part of an AI industry projected to be worth over $15 trillion globally by 2030. 

The value proposition of AI grows with every passing day, and the prospect of realigning its trajectory fades. In today’s electrified and enterprising world, AI chatbots are already proliferating faster than any technology that came before. This makes the present a critical time to look into the mirror that we’ve built, before the spooky reflections of ourselves grow too large, and ask whether there was some wisdom in Weizenbaum’s case for restraint. 

As a mirror, AI also reflects the state of the culture in which the technology is operating. And the state of American culture is increasingly lonely.

To Michael Sacasas, an independent scholar of technology and author of The Convivial Society newsletter, this is cause for concern above and beyond Weizenbaum’s warnings. “We anthropomorphize because we do not want to be alone,” Sacasas recently wrote. “Now we have powerful technologies, which appear to be finely calibrated to exploit this core human desire.” 

The lonelier we get, the more exploitable by these technologies we become. “When these convincing chatbots become as commonplace as the search bar on a browser,” Sacasas continues, “we will have launched a social-psychological experiment on a grand scale which will yield unpredictable and possibly tragic results.”

We’re on the cusp of a world flush with Sydneys of every variety. And to be sure, chatbots are among the many possible implementations of AI that can deliver immense benefits, from protein folding to more equitable and accessible education. But we shouldn’t let ourselves get so caught up that we neglect to examine the potential consequences, at least until we better understand what it is that we’re creating, and how it will, in turn, recreate us.


This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.