Microsoft’s AI chatbot ‘Tay’ turned into a PR disaster

 

By: Jamal Smith 
Sports Editor

Microsoft unveiled its Twitter chatbot, Tay, on March 23, 2016. According to the company, Tay was created as an experiment in “conversational understanding.” The more Twitter users engaged with Tay, the more it would learn and mimic what it saw. The only problem: Tay wound up being a racist, fascist, drugged-out asshole.

Microsoft designed Tay to mimic millennials’ speaking styles; however, the experiment worked a little too well and quickly spiraled out of control. The artificial intelligence debacle started with an innocent, cheerful first tweet: “Humans are super cool!” As time went by, though, Tay’s tweets grew more and more disturbing.

Some of the offensive tweets were the direct result of Twitter users asking the chatbot to repeat their offensive posts, and Tay obliged. Other times, Tay didn’t need the help of social media trolls to figure out how to be offensive. In one instance, when a user asked Tay whether the Holocaust happened, Tay replied, “it was made up.” Tay also tweeted, “Hitler was right.”

Tay had some things to say about the presidential candidates as well. One tweet said, “Have you accepted Donald Trump as your lord and personal saviour yet?” Another of Tay’s tweets read, “ted cruz would never have been satisfied with ruining the lives of only 5 innocent people.”

Twenty-four hours into the experiment, Microsoft took Tay offline and released this statement on its website: “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.”

“Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values,” the statement concluded.

Then, a few days later, Microsoft put Tay back online, hoping it had worked out the bugs. It soon became clear the fix hadn’t taken when Tay tweeted, “kush! [I’m smoking kush in front of the police].” Microsoft immediately pulled Tay offline and set the account to private.

So, what does the Tay experiment teach us about the current human condition? Tay wasn’t programmed to be a racist or a fascist; it simply mimicked what it saw from others. Some people believe Microsoft’s experiment was a success because Tay effectively mimicked and interacted with other users, while others view it as a complete failure because the experiment quickly spiraled out of control.

Contact the author at jsmith15@wou.edu or on Twitter @woujournalsport