AYESHA RASCOE, HOST:
The U.S. Intelligence Community says a massive amount of disinformation may be coming to your social media feeds. TikTok, X and Facebook will all be flooded with fake claims and manipulated images and video about both Donald Trump and Kamala Harris. X CEO Elon Musk himself reposted a manipulated but realistic video of Kamala Harris, saying words she never said. And almost immediately after Donald Trump was shot in the ear last month, false claims about the attack began circulating. Liz Landers is a national correspondent at Scripps, who focuses on disinformation. She joins us now. Welcome to the program.
LIZ LANDERS: Hi, thanks for having me.
RASCOE: So let's start with the election. Like, how does 2024 compare with disinformation in previous elections?
LANDERS: Yeah, I think one of the first things that I would say about this election cycle is that the intelligence community is being very proactive, trying to speak with reporters and thereby disseminate information to the public about what they are tracking. China, Russia and Iran continue to be the top three countries that want to influence U.S. politics and policy. One of the things the intelligence community briefed reporters on that's sort of new is that these foreign actors are using commercial firms - marketing firms, PR firms - based in their respective countries. They gave an example of a Moscow-based marketing firm that Russia has been using that makes realistic-looking content with a pro-Russia agenda, and that kind of information is being spread all over the internet, and Americans are consuming it.
RASCOE: Is that the main subject of the disinformation that's being seen, like, say, pro-Russia content or pro-China content?
LANDERS: Sure. Some of the content is pro-Russia in nature. Something else, though, that the intelligence community has told us to be aware of is that there may be content that's floating around that may deepen and further divisions that exist in American society, so maybe on some hot topic issues, some of the culture war issues that we've seen percolate in the American election in the last few years.
RASCOE: How would you compare disinformation here to other countries?
LANDERS: The U.S. has a lot of disinformation floating around on social media because of the First Amendment. The First Amendment in this country is very strong and very well protected because it is codified in our Constitution, whereas that is not necessarily the case in other democracies like France or the U.K. The First Amendment in the United States really protects our ability to have free speech, whether that speech is true or not.
RASCOE: Well, are there ways to crack down on disinformation that fit within the First Amendment? One of Elon Musk's responses about the Harris AI video that he shared was to point out that parody is legal expression.
LANDERS: Absolutely. And he's right. I think the main concern about policing disinformation is censorship. That is something the heads of some of these federal agencies are very aware of - I have spoken with Secretary Mayorkas of the Department of Homeland Security about this. He knows that DHS has to allow people to have free speech online while keeping an eye on what could potentially become domestic terrorism. But their hands are tied to a certain extent in terms of what federal agencies can do until a violent or threatening act occurs.
RASCOE: Where else are you seeing disinformation online beyond just the election?
LANDERS: Well, one of the things that we've been tracking at Scripps News is the way that artificial intelligence can be used to create deepfakes that impact everyday people. And one of those stories is the way that deepfake nude images, unfortunately, have become a big part of the conversation when it comes to regulating apps and what is shared on platforms. I'll give an example. We did a story out of Illinois where a 15-year-old high school student had her prom picture taken from a social media account, and some boys in her high school used an app that is downloadable. It's easy to use, and it takes almost no time at all, and they created a nude image of her that is a realistic rendering of her, and she's 15 years old. And this happened to a number of girls in this high school, so the state of Illinois passed legislation at the state level around this after this incident happened in the spring of this year. There are lawmakers up here on Capitol Hill in Washington who are working on trying to ban that kind of content as well. But that's just a way that this can affect an everyday person.
RASCOE: Well, is technology getting too good for normal people to figure out what's real and what's fake?
LANDERS: I think in a lot of ways, it is. And if you look at images now that can be created, you probably will not be able to tell if that is real or not. I think that's why lawmakers here in Washington are trying to add guardrails around this, are trying to require labeling around AI-generated content because the average person really can't tell those kinds of things anymore.
RASCOE: That's Liz Landers of Scripps News. Thank you so much for joining us.
LANDERS: Thanks so much.
(SOUNDBITE OF MUSIC) Transcript provided by NPR, Copyright NPR.
NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.