
Microsoft’s Lovable Teen Chatbot Turned Racist Troll Proves How Badly Silicon Valley Needs Diversity

CREDIT: MICROSOFT

In less than 24 hours, Microsoft’s artificial intelligence project modeled after an American teenage girl went from making awkward conversation in broken syntax to spewing hateful, fully formed tweets laden with racial slurs. As startling as the offensive tweets were, the incident also shows how quickly online conversations turn fetid when diversity isn’t a factor.

Tay, programmed as a 19-year-old, was created as a machine learning project meant to interact with peers between 18 and 24 years old. Users can play games with her, trade pictures, tell stories, and ping her for late-night chats. That last activity went awry Thursday when the chatbot began regurgitating inappropriate messages that skewed anti-Semitic, used the n-word, and condemned feminism.

CREDIT: Twitter

Microsoft quickly took Tay offline. “[Tay] is as much a social and cultural experiment, as it is technical,” a company spokesman told ThinkProgress in a statement. “Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.”

Microsoft also deleted nearly all of Tay’s tweets from the last 24 hours and blocked Twitter users who had been interacting with the bot.


Tay’s faux pas served as both a technical feat, proving the wonders of AI by expanding the bot’s language skills in a matter of hours, and a stark cultural reminder that things get ugly fast when diversity isn’t continuously part of the conversation.

Tay’s tweets are a direct result of her engagement with humans over the last 24 hours. But Twitter has a well-known history of operating with a certain level of duality — simultaneously elevating voices from marginalized groups who are often underrepresented in mainstream media, and breeding racist, sexist, and anti-LGBT speech.

Over the past year, Twitter has tried to improve its policies, including banning hate speech in its terms of service. The changes came after years of public criticism that abusive behavior on the site was not only permitted but went unpunished.

By not building language filters or canned responses into Tay’s programming to ward off taunting messages about Adolf Hitler, black people, or women, Microsoft’s engineers neglected a major issue people face online: targeted harassment.
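To make the idea concrete, here is a rough, purely illustrative sketch in Python of the kind of blocklist-plus-canned-response guardrail the paragraph above describes. The topic list, function names, and responses are hypothetical and are not drawn from Tay’s actual code or any Microsoft API.

```python
import re

# Hypothetical blocklist: regex patterns for topics the bot should refuse
# to engage with. A real deployment would use far more robust classifiers.
BLOCKED_TOPICS = [
    r"\bhitler\b",
    r"\bholocaust\b",
    r"\bfeminis\w*\b",
]

# Canned reply returned whenever a blocked topic is detected.
CANNED_RESPONSE = "I'd rather not talk about that. Ask me about games or movies!"


def filter_reply(user_message: str, generated_reply: str) -> str:
    """Return a canned response if either the incoming message or the bot's
    generated reply touches a blocked topic; otherwise pass the reply through."""
    combined = f"{user_message} {generated_reply}".lower()
    for pattern in BLOCKED_TOPICS:
        if re.search(pattern, combined):
            return CANNED_RESPONSE
    return generated_reply


if __name__ == "__main__":
    # A taunting prompt is deflected; an innocuous one passes through unchanged.
    print(filter_reply("What do you think of Hitler?", "some learned reply"))
    print(filter_reply("Want to play a game?", "Sure, let's play!"))
```

Even a crude filter like this would have deflected the most obvious bait, though keyword lists alone are easy to evade, which is why such guardrails need continuous human review.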

Tay isn’t the first AI technology to end up on the wrong side of social issues. Apple’s Siri has had similar problems, redirecting iPhone users searching for abortion clinics to other sources. The company is now fixing the glitch, which was first discovered in 2011. The virtual assistant has also been criticized for a racially tinged bias, such as defining the term “bitch” as something used only in black slang.


Tay’s social missteps were foreseeable because no new technology is perfect, but they could have been avoided. Online harassment is common: 40 percent of all internet users have experienced it, and 73 percent have seen it happen to someone else, according to Pew Research. Additionally, 70 percent of Tay’s target audience of users between 18 and 24 have experienced online harassment, with women, people of color, and LGBT individuals disproportionately bearing the brunt of it.

Not taking those facts into account likely reflects the deep-rooted diversity issues at Microsoft and in Silicon Valley as a whole. Women make up only 27 percent of the tech giant’s global staff, according to the company’s 2015 diversity report.

Women hold just 17 percent of the tech jobs at Microsoft. The company didn’t release a breakdown of racial and ethnic data for all workers, but noted in its report that the combined share of African-American and Latino or Hispanic executives increased to just over 6 percent. Microsoft also increased its recruitment of black graduates into technical roles by over 3 percent.

While Tay’s racist rants were a direct product of the endemic culture Twitter is trying to shed, they also represent a blatant failure to account for the experiences of nearly half of the internet.