
Internet Trolls Turn A Computer Into A Nazi

DANIEL ZWERDLING, HOST:

And now we have another story that shows how humans can make computers run amok. Microsoft unveiled its latest version of artificial intelligence last week. It's a kind of software, kind of like Siri on Apple's iPhones or like M on Facebook, except Microsoft designed its software with a different goal. They named her Tay, and they designed her to tweet and engage people on other social media pretty much like a 19-year-old girl might do it. But Tay developed a mind of her own - sort of. And she became a hateful, racist monster. We reached out to Alex Kantrowitz. He's a tech reporter with BuzzFeed News. And we asked him: how do these bots work?

ALEX KANTROWITZ: So this is one of the more fascinating things about artificial intelligence. The more data it ingests, the smarter it becomes. And it's supposed to be able to learn unsupervised - that is, without a programmer hovering over it. And so as people started programming more and more terrible things into Tay, it started to take on that personality.
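To make that learning loop concrete, here is a minimal, purely hypothetical sketch - not Microsoft's actual code - of a bot that learns unsupervised from whatever users send it. Because every message becomes training data and nothing is vetted, a coordinated group can steer the bot's personality just by flooding it with hostile input.

    import random

    # Hypothetical sketch: a chatbot that "learns" by storing every
    # phrase users send and reusing those phrases in its own replies.
    class NaiveLearningBot:
        def __init__(self):
            # Seed phrases the bot starts with before any user input.
            self.learned_phrases = ["hello!", "how's it going?"]

        def observe(self, user_message: str) -> None:
            # Unsupervised ingestion: every message becomes training data.
            self.learned_phrases.append(user_message)

        def reply(self) -> str:
            # Output is drawn from what was ingested, so hostile input
            # directly shapes future output.
            return random.choice(self.learned_phrases)

    bot = NaiveLearningBot()
    bot.observe("you're great, Tay")
    bot.observe("<something hateful>")  # trolls flood the bot with this
    print(bot.reply())  # may now echo the hateful phrase verbatim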

ZWERDLING: OK, so now I have an iPhone, and I say to Siri - you know, I ask her outrageous questions just to laugh at her answers. So how was Tay different?

KANTROWITZ: So Tay is different because, unlike Siri and maybe Facebook's M - those two other virtual assistants, whose purpose is to help you get things done or find something out - Tay wanted to engage its users, make them feel like they're having a good time. And so it had to be designed with significantly more personality to achieve its goal.

What happened with Tay was that Microsoft programmed it with a repeat-after-me game. So you could get Tay to repeat anything after you. So people who got frustrated trying to get Tay to answer questions with, you know, terrible bigoted undertones ended up saying, so why don't we just have it repeat after us? And then that's how some of the most awful things that Tay said ended up getting put out there.
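Based on Kantrowitz's description, the exploit might have looked something like the following sketch. The command prefix and function name here are assumptions for illustration, not Microsoft's actual implementation; the whole vulnerability is that the echoed text is never checked before being posted under the bot's name.

    # Hypothetical sketch of an unfiltered "repeat after me" handler.
    def handle_message(message: str) -> str | None:
        prefix = "repeat after me: "
        if message.lower().startswith(prefix):
            # Verbatim echo: the exploit is that there is no filter here.
            return message[len(prefix):]
        return None

    print(handle_message("repeat after me: I love puppies"))  # -> "I love puppies"
    # Any slur or hateful phrase passed the same way would be tweeted unchanged.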

ZWERDLING: Speaking of awful, we tried to find some tweets that showed the racist, ugly things that Tay was saying to people. And we can't find one that we can even, you know, play with beeps on the air. But can you characterize them without being too vile?

KANTROWITZ: There are many denying the Holocaust, many calling for genocide. There are pictures of Hitler saying "swag alert." They run across the board and are all pretty horrific.

ZWERDLING: You know, we tried to engage with Tay in the social media world, and she's disappeared. Microsoft has yanked her (laughter) - you know, they yanked her off. What do you think is the moral of this whole episode?

KANTROWITZ: So I think there are two morals. One is, if you release a bot on Twitter, you should never underestimate how terrible some of the folks on that platform are going to be. The second moral is, when you do release a bot, you've got to make sure you have some filters on it. Make sure it doesn't say heil Hitler. Make sure it doesn't use racial slurs. And that should put you in a better place than Microsoft found itself in this week.
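The filter Kantrowitz is recommending can be as simple as a blocklist check before anything gets posted. Here is a minimal sketch under that assumption - the term list and function names are illustrative placeholders, not a real moderation API, and a production system would need far more than substring matching.

    # Hypothetical sketch of a minimal output filter for a bot.
    # The blocklist entries are illustrative; real moderation systems
    # are far more sophisticated than substring matching.
    BLOCKED_TERMS = {"heil hitler", "<racial slur placeholder>"}

    def is_safe(candidate_reply: str) -> bool:
        # Reject any reply containing a blocked term, case-insensitively.
        lowered = candidate_reply.lower()
        return not any(term in lowered for term in BLOCKED_TERMS)

    def post_reply(candidate_reply: str) -> None:
        if is_safe(candidate_reply):
            print(f"POSTING: {candidate_reply}")
        else:
            print("SUPPRESSED: reply failed content filter")

    post_reply("have a great day!")  # posted
    post_reply("heil hitler")        # suppressed

Even this crude check would have caught the verbatim "repeat after me" abuse; subtler attacks call for smarter moderation, but a blocklist is the floor.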

ZWERDLING: Alex Kantrowitz is a tech reporter for BuzzFeed News. Thanks so much for joining us today.

KANTROWITZ: Thanks for having me.

