{"id":3421,"date":"2016-04-10T20:19:37","date_gmt":"2016-04-11T04:19:37","guid":{"rendered":"http:\/\/www.wou.edu\/westernjournal\/?p=3421"},"modified":"2016-04-12T13:55:52","modified_gmt":"2016-04-12T21:55:52","slug":"microsofts-ai-chatbot-tay-turned-pr-disaster","status":"publish","type":"post","link":"https:\/\/wou.edu\/westernhowl\/microsofts-ai-chatbot-tay-turned-pr-disaster\/","title":{"rendered":"Microsoft\u2019s AI chatbot \u2018Tay\u2019 turned into a PR disaster"},"content":{"rendered":"<p>&nbsp;<br \/>\n<img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/wou.edu\/westernjournal\/files\/2016\/04\/Screen-Shot-2016-04-10-at-9.19.55-PM.png\" alt=\"Screen Shot 2016-04-10 at 9.19.55 PM\" width=\"498\" height=\"283\" class=\"alignnone wp-image-3450\" srcset=\"https:\/\/wou.edu\/westernhowl\/files\/2016\/04\/Screen-Shot-2016-04-10-at-9.19.55-PM.png 760w, https:\/\/wou.edu\/westernhowl\/files\/2016\/04\/Screen-Shot-2016-04-10-at-9.19.55-PM-300x171.png 300w, https:\/\/wou.edu\/westernhowl\/files\/2016\/04\/Screen-Shot-2016-04-10-at-9.19.55-PM-174x98.png 174w\" sizes=\"(max-width: 498px) 100vw, 498px\" \/><\/p>\n<pre>By: Jamal Smith \r\nSports Editor<\/pre>\n<p>Microsoft unveiled its Twitter chatbot called Tay on March 23. According to the company, Tay was created as an experiment in \u201cconversational understanding.\u201d The more Twitter users engaged with Tay, the more it would learn and mimic what it saw. The only problem: Tay wound up being a racist, fascist, drugged-out asshole.<\/p>\n<p>Microsoft designed Tay to mimic millennials\u2019 speaking styles; however, the experiment worked a little too efficiently and quickly spiraled out of control. 
The artificial intelligence debacle started with an innocent and cheerful first tweet of, \u201cHumans are super cool!\u201d However, as time went by, Tay\u2019s tweets kept getting more and more disturbing.<\/p>\n<p>Some of the offensive tweets were the direct result of Twitter users asking the chatbot to repeat their offensive posts, to which Tay obliged. Other times, Tay didn\u2019t need the help of social media trolls to figure out how to be offensive. In one instance, when a user asked Tay if the Holocaust happened, Tay replied: \u201cit was made up ?.\u201d Tay also tweeted, \u201cHitler was right.\u201d<\/p>\n<p>Tay had some things to say about the presidential candidates as well. One tweet said, \u201cHave you accepted Donald Trump as your lord and personal saviour yet?\u201d Another of Tay\u2019s tweets read, \u201cted cruz would never have been satisfied with ruining the lives of only 5 innocent people.\u201d<\/p>\n<p>Twenty-four hours into the experiment, Microsoft took Tay offline and released this statement on its website: \u201cWe are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay.\u201d<\/p>\n<p>\u201cTay is now offline and we\u2019ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values,\u201d the statement concluded.<\/p>\n<p>A few days later, Microsoft put Tay back online in the hope that the bugs had been worked out; however, it soon became clear they hadn\u2019t when Tay tweeted, \u201ckush! [I\u2019m smoking kush in front of the police].\u201d Microsoft immediately pulled the chatbot offline and set its profile to private.<\/p>\n<p>So, what does the Tay experiment teach us about the current human condition? Tay wasn\u2019t programmed to be a racist or a fascist, but rather mimicked what it saw from others. 
While some people believe that Microsoft\u2019s experiment was a success because Tay effectively mimicked and interacted with other users, others view it as a complete failure because the experiment quickly spiraled out of control.<\/p>\n<p>Contact the author at jsmith15@wou.edu or on Twitter @woujournalsport<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Microsoft unveiled its Twitter chatbot called Tay on March 23. According to the company, Tay was created as an experiment in \u201cconversational understanding.\u201d The more Twitter users engaged with Tay, the more it would learn and mimic what it saw. The only problem: Tay wound up being a racist, fascist, drugged-out asshole.<\/p>\n","protected":false},"author":825,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_seopress_robots_primary_cat":"","_seopress_titles_title":"","_seopress_titles_desc":"","_seopress_robots_index":"","_lmt_disableupdate":"","_lmt_disable":"","_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":"","_links_to":"","_links_to_target":""},"categories":[3],"tags":[],"class_list":["post-3421","post","type-post","status-publish","format-standard","hentry","category-news"],"modified_by":null,"_links":{"self":[{"href":"https:\/\/wou.edu\/westernhowl\/wp-json\/wp\/v2\/posts\/3421","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/wou.edu\/westernhowl\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/wou.edu\/westernhowl\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/wou.edu\/westernhowl\/wp-json\/wp\/v2\/users\/825"}],"replies":[{"embeddable":true,"href":"https:\/\/wou.edu\/westernhowl\/wp-json\/wp\/v2\/comments?post=3421"}],"version-history":[{"count":0,"href":"https:\/\/wou.edu\/westernhowl\/wp-json\/wp\/v2\/posts\/3421\/revisions"}],"wp:attachment":[{"href":"https:\/\/wou.edu\/westernhowl\/wp-json\/wp\/v2\/media?parent=3421"}]
,"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/wou.edu\/westernhowl\/wp-json\/wp\/v2\/categories?post=3421"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/wou.edu\/westernhowl\/wp-json\/wp\/v2\/tags?post=3421"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}