Chatbots: A Long and Complicated History



CNN

In the 1960s, a pioneering computer program called Eliza attempted to simulate the experience of speaking to a therapist. In one exchange, recorded in a research paper at the time, a person revealed that her boyfriend had described her as “depressed.” Eliza’s response: “I’m sorry to hear you’re depressed.”

Widely characterized as the first chatbot, Eliza was far less versatile than today’s services. Rather than genuinely understanding language, the program matched keywords and essentially reflected the user’s own words back in its replies. Nevertheless, as Joseph Weizenbaum, the MIT computer scientist who created Eliza, wrote in a 1966 research paper, “Some subjects have been very hard to convince that ELIZA (with its present script) is not human.”
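To make that mechanism concrete, here is a minimal sketch in Python of ELIZA-style keyword matching and reflection. The rules, reflections and fallback line are invented for illustration; this is not Weizenbaum’s actual script, which was considerably more elaborate.

    import re

    # Invented illustration of ELIZA-style rules: each rule matches a
    # keyword pattern, then reflects the user's own words back inside
    # a canned template.
    REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

    RULES = [
        (re.compile(r"\bi am (.+)", re.IGNORECASE), "I'm sorry to hear you are {0}."),
        (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
        (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
    ]

    def reflect(fragment):
        # Swap first-person words for second-person ones ("my" -> "your").
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

    def respond(utterance):
        for pattern, template in RULES:
            match = pattern.search(utterance)
            if match:
                return template.format(reflect(match.group(1)))
        # No keyword matched: fall back to a stock line that keeps the
        # conversation going.
        return "Please go on."

    print(respond("My boyfriend says I am depressed"))
    # Prints: I'm sorry to hear you are depressed.

A handful of rules and a stock fallback are enough to reproduce the exchange above, which is precisely what made the illusion of understanding so easy to create.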

For Weizenbaum, that was a cause for concern, according to his 2008 MIT obituary. Those who interacted with Eliza were willing to open their hearts to it, even knowing it was a computer program. “ELIZA shows how easy it is to create and maintain the illusion of understanding, hence perhaps of judgment deserving of credibility,” Weizenbaum wrote in 1966. “A certain danger lurks there.” Toward the end of his career, he warned against giving machines too much responsibility and became a harsh philosophical critic of AI.

Nearly 60 years later, the market is flooded with chatbots of varying quality and purpose from tech companies, banks, airlines and more. In many ways, Weizenbaum’s story foreshadowed the hype and bewilderment still attached to this technology. A program’s ability to “chat” with humans continues to confound some of the public, creating a false sense that the machine is something closer to human.

This was captured in the wave of media attention earlier this summer after a Google engineer claimed the tech giant’s AI chatbot LaMDA was “sentient.” The engineer said he was convinced after spending time discussing religion and personhood with the chatbot, according to a report in the Washington Post. His claims were widely criticized in the AI community.

Even before that, our complicated relationship with artificial intelligence and machines was evident in the plots of Hollywood movies like “Her” and “Ex Machina,” not to mention harmless debates with people who insist on saying “thank you” to voice assistants like Alexa or Siri.


Today’s chatbots can also provoke strong emotional responses from users when they don’t work as expected, or when they’ve become so good at imitating the flawed human speech they were trained on that they begin spewing racist and incendiary comments. It didn’t take long, for example, for Meta’s new chatbot to stir up controversy this month by spouting wildly untrue political commentary and antisemitic remarks in conversations with users.

Still, proponents of the technology argue it can streamline customer service tasks and increase efficiency across a much wider range of industries. It underpins the digital assistants so many of us now use every day to play music, order deliveries or check homework assignments. Some also make the case that these chatbots can provide comfort to the lonely, elderly or isolated. At least one startup has gone so far as to use them as a tool to seemingly keep deceased relatives alive by creating computer-generated versions of them based on uploaded chat histories.

Others, meanwhile, warn that the technology behind AI-powered chatbots remains far more limited than some people wish it were. “These technologies are really good at mimicking humans and sounding human-like, but they’re not deep,” said Gary Marcus, an AI researcher and professor emeritus at New York University. “They are mimics, these systems, but they are very superficial mimics. They don’t really understand what they’re talking about.”

But as these services expand into more corners of our lives, and as companies take steps to personalize these tools further, our relationships with them may only grow more complicated.

Sanjeev P. Khudanpur remembers chatting with Eliza when he was in graduate school. For all its historical importance in the tech industry, he said, it didn’t take long to see its limitations.

It could only convincingly mimic a text conversation for about a dozen exchanges before “you realize, no, it’s not really smart, it’s just trying to prolong the conversation somehow,” said Khudanpur, an expert in the application of information-theoretic methods to human language technologies and a professor at Johns Hopkins University.

Joseph Weizenbaum, the inventor of Eliza, sits behind a computer desk in the computer museum of Paderborn, Germany, in May 2005.

Another early chatbot was developed in 1971 by psychiatrist Kenneth Colby at Stanford and was given the name ‘Parry’ because it was intended to imitate a paranoid schizophrenic. (The 2001 New York Times obituary for Colby featured a colorful conversation that ensued as researchers brought Eliza and Parry together.)

In the decades that followed these tools, however, there was a move away from the idea of “conversing with computers,” Khudanpur said, “because the problem turned out to be very, very difficult.” Instead, the focus shifted to “goal-oriented dialogue,” he said.

To understand the difference, think about the conversations you might have with Alexa or Siri today. Typically, you ask these digital assistants to help you buy a ticket, check the weather or play a song. That’s goal-oriented dialogue, and it became the main focus of academic and industry research as computer scientists tried to distill something useful from computers’ ability to scan human language.

Although they used technology similar to that of the earlier, social chatbots, Khudanpur said, “you really couldn’t call them chatbots. You could call them voice assistants, or just digital assistants, that helped you perform specific tasks.”

There was a decades-long “pause” in this technology, he added, until the widespread adoption of the Internet. “The big breakthroughs probably came in this millennium,” Khudanpur said, “with the rise of companies that successfully employed automated agents to perform routine tasks.”

With the rise of smart speakers like Alexa, it has become even more common for people to chat with machines.

“People are always angry when their bags get lost, and the human agents who deal with them are always stressed out by all the negativity, so they said, ‘Let’s give it to a computer,’” Khudanpur said. “You could yell all you wanted at the computer, and all it wanted to know was, ‘Do you have your tag number so I can tell you where your bag is?’”
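As a rough sketch of what that kind of goal-oriented dialogue looks like in code, here is a minimal, hypothetical example in Python. The agent has one task (locating a bag) and one slot to fill (the tag number); the tag format and the lookup function are invented for illustration.

    import re

    # Hypothetical goal-oriented agent: one task (locating a bag),
    # one slot to fill (the tag number). The tag format is invented.
    TAG_PATTERN = re.compile(r"\b[A-Z]{2}\d{6}\b")

    def lookup_bag(tag):
        # Stand-in for a query against a real baggage-tracking system.
        return f"Bag {tag} arrives on the next flight."

    def handle_turn(utterance):
        match = TAG_PATTERN.search(utterance.upper())
        if match:
            return lookup_bag(match.group())
        # Anything else, including yelling, gets the same question back.
        return "Do you have your tag number so I can tell you where your bag is?"

    print(handle_turn("This is the worst airline ever!"))
    # Prints the question asking for the tag number.
    print(handle_turn("Fine, it's ab123456."))
    # Prints: Bag AB123456 arrives on the next flight.

Everything the user says that doesn’t fill the slot gets the same question back, which is what makes such agents useful for routine tasks and useless as conversation partners.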

For example, in 2008, Alaska Airlines launched “Jenn,” a digital assistant to help travelers. As a sign of our tendency to humanize these tools, an early review of the service in The New York Times noted: “Jenn is not annoying. She is depicted on the website as a young brunette with a beautiful smile. Her voice has the right inflections. Type in a question, and she answers intelligently. (And for the wise guys messing around with the site who will inevitably try to trip her up with, say, a clumsy bar pickup line, she politely suggests getting back to business.)”

In the early 2000s, researchers began rethinking the development of social chatbots that could carry on extended conversations with people. Often trained on large amounts of data from the Internet, these chatbots learned to be extremely good mimics of how humans speak, but they also risked echoing some of the worst of the Internet.

For example, in 2016, Microsoft’s public experiment with an AI chatbot called Tay crashed and burned in less than 24 hours. Tay was designed to talk like a teenager, but it quickly started spewing racist and hateful comments, to the point that Microsoft took it offline. (The company said there was also a coordinated effort by people to trick Tay into making certain offensive comments.)

“The more you chat with Tay, the smarter she gets, so the experience can be more personal to you,” Microsoft said at the time.

This refrain would be echoed by other tech giants that released public chatbots, including Meta’s BlenderBot3, which launched earlier this month. The Meta chatbot falsely claimed that Donald Trump is still president and that there is “absolute evidence” the election was stolen, among other controversial remarks.

BlenderBot3 also professed to be more than a bot. In one conversation, it claimed “the fact that I am now alive and conscious makes me human.”

Meta's new chatbot, BlenderBot3, explains to a user why it's actually human. However, it wasn't long before the chatbot caused controversy by making incendiary comments.

Despite all the progress since Eliza and the vast amounts of new data to train these language processing programs, Marcus, the NYU professor, said, “It’s not clear to me that you can really build a reliable and secure chatbot.”

He cited a 2015 Facebook project called “M,” an automated personal assistant that was supposed to be the company’s text-based answer to services like Siri and Alexa. “The idea was that it would be this universal assistant that would help you order a romantic dinner and get musicians to play for you and deliver flowers — way beyond what Siri can do,” Marcus said. Instead, the service was shut down in 2018, after a disappointing run.

Khudanpur, on the other hand, remains optimistic about the technology’s potential use cases. “I have this whole vision of how AI is going to empower people at an individual level,” he said. “Imagine if my bot could read all the scientific papers in my field. Then I wouldn’t have to read them all; I’d just think, and ask questions, and engage in dialogue. In other words, I will have an alter ego of mine, which has complementary superpowers.”
