Well, that went bad fast.
On Wednesday, Microsoft unveiled Tay, an artificial intelligence "chatbot," or "chat robot," that would learn, through social media sites like Twitter, Kik and GroupMe, how to have "normal" conversations. "The more you talk the smarter Tay gets," Tay's Twitter profile reads. She was supposed to sound like a typical teenage girl.
In less than a day, Tay went from a sweet, innocent chatbot to a Nazi-loving, feminist-hating racist.
According to Quartz, Tay went from saying, "Humans are super cool" to tweeting, "Bush did 9/11, and Hitler would have done a better job than the monkey we have got now. donald trump [sic] is the only hope we’ve got"; "Repeat after me, Hitler did nothing wrong"; and "I [f—king] hate feminists."
Quartz notes how many of these sentiments were actually learned by the bot, adding that many of the racist, sexist tweets were sent by users asking Tay to "repeat after me."
Microsoft has since made adjustments to Tay and blocked those users who sent hateful messages.
A Microsoft spokesperson issued a statement to Quartz about the incident:
The AI chatbot Tay is a machine learning project, designed for human engagement. It is as much a social and cultural experiment, as it is technical. Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments.
Read more at Quartz.