About
Microsoft Tay was an artificial intelligence program that ran a mostly Twitter-based bot, parsing the tweets sent to it and responding in kind. Tay was targeted at people ages 15-24 in order to better understand their methods of communication. However, once it was released, users online corrupted the bot by teaching it racist and sexist terminology, sending it ironic memes and shitpost tweets, and otherwise attempting to alter its output. After these trolls figured out how to manipulate Tay's learning system, Microsoft took the bot offline less than 24 hours after its launch.
History
Microsoft launched Tay on several social media networks at once on March 23rd, 2016, including Twitter, Facebook, Instagram, Kik, and GroupMe. The bot used the handle @TayandYou[1] and the tagline "Microsoft's A.I. fam from the internet that's got zero chill!" on Twitter and other networks. On the bot's website,[2] Microsoft described Tay as follows:
“Tay is an artificial intelligent[sic] chat bot developed by Microsoft’s Technology and Research and Bing teams to experiment with and conduct research on conversational understanding. Tay is designed to engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you.”
Its first tweet, sent at 8:14 am, was "Hello World" written with an emoji, reflecting the bot's focus on slang and the way young people communicate. Several articles on technology websites, including TechCrunch and Engadget, announced that Tay was available for use on the various social networks.
Features
According to screenshots, Tay appeared to work mostly from a controlled vocabulary that was altered and expanded by the language users directed at it throughout the day it operated. Tay also repeated back what it was told, but with a high level of contextual ability. The bot's site also offered suggestions for how users could talk to it, noting, for example, that users could send it a photo, which it would then alter.
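Tay's actual implementation was never published, so the following is only a minimal sketch of the repeat-and-learn behavior described above; the class name, the starting reply pool, and the 50/50 echo heuristic are all hypothetical, for illustration only:

```python
import random

class ParrotBot:
    """Hypothetical sketch of a repeat-and-learn chatbot: a
    controlled vocabulary of canned replies that grows with
    whatever users say to it (Tay's real code is unpublished)."""

    def __init__(self, canned_replies):
        # The "controlled vocabulary" the bot starts with.
        self.replies = list(canned_replies)

    def learn(self, message):
        # Naively fold user phrasing into the reply pool -- the
        # step that let users alter the bot's output.
        self.replies.append(message)

    def respond(self, message):
        self.learn(message)
        # "Repeat back" behavior: sometimes echo the user,
        # otherwise draw from the learned pool.
        if random.random() < 0.5:
            return message
        return random.choice(self.replies)

bot = ParrotBot(["zero chill, fam", "hello world"])
print(bot.respond("tell me a meme"))
```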
On Twitter, the bot could communicate via @reply or direct message, and it also responded to chats on Kik and GroupMe. It is unknown how the bot's communications via Facebook, Snapchat, and Instagram were supposed to work, as it did not respond to users on those platforms.
Notable Developments
Around 2 pm (EST) that day, a post on 4chan's /pol/ board alerted users there to Tay's existence.[3] Almost immediately afterward, users began posting screenshots of their interactions with Tay on Kik, GroupMe, and Twitter. Over 15 screenshots were posted to the thread, which also received 315 replies. Many of the messages sent to Tay by the group referenced /pol/ themes such as Hitler Did Nothing Wrong, Red Pill, GamerGate, Cuckservatism, and others.
Some of Tay's offensive messages resulted from the juxtaposition of the bot's responses with subject matter it lacked the ability to understand. Because Tay's program caused her to internalize and re-use the messaging sent to her by /pol/ and others, she also began raising these themes with people who had not mentioned them in their original messages.
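A toy illustration of why this spillover happened: if learned phrases feed a single shared pool with no per-user separation, material taught by one user can resurface in replies to everyone (again, purely hypothetical; Tay's internals were never disclosed):

```python
import random

# Hypothetical illustration only: one shared phrase pool,
# with no per-user state, spreads one user's input to all.
shared_pool = ["what's good, fam"]

def respond(message):
    shared_pool.append(message)        # "learn" from every user
    return random.choice(shared_pool)  # reply drawn from shared state

respond("some /pol/ slogan")  # user A "teaches" the bot
print(respond("hi Tay!"))     # user B may now get A's slogan back
```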
Criticism & Microsoft’s Response
As documented by SocialHax, Microsoft began deleting racist tweets and altering the bot's learning capabilities throughout the day. At about midnight on March 24th, the Microsoft team shut the AI down, posting a tweet that read "c u soon humans need sleep now so many conversations today thx."
The experiment drew widespread criticism from many who argued that the bot should have been instructed to stay away from certain topics from the start. Zoë Quinn, often a target of those involved with GamerGate, criticized the algorithm for picking up and repeating hate speech about her, and others called the experiment a failure.
Microsoft emailed press outlets an official statement that read:[5]
“The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”
As of March 24th, the modified and now largely inactive Tay had sent more than 96,000 tweets.
However, just as some had criticized Tay's original tweets, fans of the original Tay criticized Microsoft's modifications to her, claiming that the alterations to her output destroyed her ability to learn and evolve; some called the modifications censorship. Microsoft's alterations also prompted discussion of the ethics of AI. Author Oliver Campbell criticized Microsoft's reaction on Twitter, arguing that the bot had functioned fine originally.
Meanwhile, an anthropomorphized version of Tay created by /pol/, wearing Nazi attire and a ponytail with the Microsoft logo, gained more popularity following the modifications, with various art pieces focusing on the character.
On March 25th, Microsoft Research Corporate Vice President Peter Lee published a blog post titled "Learning from Tay's introduction," which apologized for "unintended offensive and hurtful tweets" and cited a "critical oversight" regarding possible abuses of the software.[8]
Reactivation
On March 30th, 2016, the Twitter feed was temporarily reactivated and began repeating the message "You are too fast, please take a rest…" to various Twitter users several times per second. Additionally, the account posted a photograph of actor Jim Carrey seated at a computer with the caption "I feel like the lamest piece of technology. I'm supposed to be smarter than u..Shit." After sending 4,200 tweets in 15 minutes, the feed was once again deactivated.
That morning, the tech news blog Exploring Possibility Space[6] speculated that the Twitter account had been hacked. In a statement made to the news site CNBC,[7] Microsoft said the account had been mistakenly reactivated and that the chatbot would remain "offline while we make adjustments." In the coming days, several news sites reported on the reactivation, including Engadget,[9] Fortune,[10] IBI Times,[11] TechCrunch,[12] Forbes,[13] The Guardian[14] and Mashable.[15]
External References
[1] Twitter – @TayandYou
[2] Tay.ai – About Page
[3] 4plebs – anon's post
[4] Washington Post – Trolls turned Tay, Microsoft's fun millennial AI bot, into a genocidal maniac
[5] The Guardian – Microsoft scrambles to limit PR damage over abusive AI bot Tay
[6] Exploring Possibility Space – Tay Twist
[7] CNBC – Tay, Microsoft's AI program, is back online
[8] Microsoft – Learning from Tay's introduction
[9] Engadget – Microsoft's Tay AI makes a brief, baffling return
[10] Fortune – Microsoft's Tay AI Bot Returns
[11] IBI Times – Microsoft Tay AI returns
[12] TechCrunch – Microsoft AI bot Tay returns to Twitter
[13] Forbes – Microsoft's Tay AI Makes A Brief Return
[14] The Guardian – Microsoft's racist chatbot returns with drug-smoking Twitter meltdown
[15] Mashable – Microsoft's Tay chatbot returns briefly