Supreme Court showdown for Google, Twitter and the social media world

The U.S. Supreme Court hears arguments this week in two cases that test Section 230, the law that provides tech companies a legal shield over what their users post online. (Celal Gunes / Anadolu Agency via Getty Images)

In November 2015, ISIS terrorists carried out coordinated attacks across Paris, killing 130 people and injuring 400. Among the dead was Nohemi Gonzalez, a 23-year-old American studying abroad who was the first person in her large family to graduate from college. This week, lawyers for her family and others are in the Supreme Court challenging a law enacted more than a quarter century ago, a law that shields internet companies from liability for what the families see as their role in aiding and abetting terrorist attacks.

How the court rules could be a game changer for American law, for society, and for social media platforms that are among the most valuable businesses in the world.

What the law says

At the center of two cases to be argued over two days is Section 230 of the 1996 Communications Decency Act, passed by Congress when internet platforms were just beginning. In just 26 words, Section 230 draws a distinction between interactive computer service providers and other purveyors of information. Whereas newspapers and broadcasters can be sued for defamation and other wrongful conduct, Section 230 says that websites are not publishers or speakers and cannot be sued for material that appears on those sites. Essentially, the law treats web platforms the same way it treats the telephone: just as phone companies cannot be sued for what callers say, websites that host speakers cannot be sued for what those speakers say or do.

At least that is the way the lower courts have uniformly interpreted Section 230. They have said that under the law, social media companies are immune from being sued for civil damages over most material that appears on their platforms. That is so, even though, at the same time, the law has an apparently contrary objective: It encourages social media companies to remove material that is obscene, lewd, excessively violent, harassing or otherwise objectionable.

The attack at the heart of the arguments

This week's cases attempt to thread that needle. The Gonzalez family and the families of other terrorism victims are suing Google, Twitter, Facebook and other social media companies under the federal Anti-Terrorism Act, which specifically allows civil damage claims for aiding and abetting terrorism. The families allege that the companies did more than simply provide platforms for communication. Rather, they contend that by recommending ISIS videos to users who might be interested, the companies were seeking to attract more viewers and increase their ad revenue.

Representing the terrorism victims against Google and Twitter, lawyer Eric Schnapper will tell the Supreme Court this week that when Section 230 was enacted, social media companies wanted people to subscribe to their services, but today the economic model is different.

"Now most of the money is made by advertisements, and social media companies make more money the longer you are online," he says, adding that one way to do that is by algorithms that recommend other related material to keep users online longer.

What's more, he argues, modern social media company executives knew the dangers of what they were doing. In 2016, he says, they met with high government officials who told them of the dangers posed by ISIS videos, and how they were used for recruitment, propaganda, fundraising, and planning.

"The attorney general, the director of the FBI, the director of national intelligence, and the then-White House chief of staff . . . those government officials . . . told them exactly that," he says.

Google general counsel Halimah DeLaine Prado vehemently denies any such wrongdoing.

"We believe that there's no place for extremist content on any of our products or platforms," she says, noting that Google has "heavily invested in human review" and "smart detection technology," to "make sure that happens."

Prado acknowledges that social media companies today are nothing like the social media companies of 1996, when the interactive internet was an infant industry. But, she says, if there is to be a change in the law, that is something that should be done by Congress, not the courts.

The choice before the court

Daniel Weitzner, the founding director of the MIT Internet Policy Research Initiative, helped draft Section 230 and get it passed in 1996.

"Congress had a really clear choice in its mind," he says. "Was the internet going to be like the broadcast media that were pretty highly regulated?" Or, was it going to be like "the town square or the printing press?" Congress, he says, "chose the town square and the printing press." But, he adds, that approach is now at risk: "The Supreme court now really is in a moment where it could dramatically limit the diversity of speech that the internet enables."

There are many "strange bedfellows" among the tech company allies in this week's cases. Groups ranging from the conservative Chamber of Commerce to the libertarian ACLU have filed an astonishing 48 briefs urging the court to leave the status quo in place.

The Biden administration, however, takes a narrower view of Section 230's protections. Columbia law professor Timothy Wu summarizes the administration's position this way: "It is one thing to be more passively presenting, even organizing information, but when you cross the line into really recommending content, you leave behind the protections of 230."

In short, hyperlinks, grouping certain content together, and sorting through billions of pieces of data for search engines are one thing; actually recommending content that shows or urges illegal conduct is another.

If the Supreme Court were to adopt that position, it would be very threatening to the economic model of social media companies today. The tech industry says there is no easy way to distinguish between aggregating and recommending.

And it likely would mean that these companies would constantly be defending their conduct in court. But filing suit and getting over the hurdle of showing enough evidence to justify a trial are two different things. What's more, the Supreme Court has made it much more difficult to clear that hurdle. The second case the court hears this week, on Wednesday, deals with just that problem.

What makes this week's cases so remarkable is that the Supreme Court has never dealt with Section 230. The fact that the justices have agreed to hear the cases shows that they have concerns. Justice Clarence Thomas has been outspoken about his view that the law should be narrowly interpreted, meaning little protection for social media companies. Justice Samuel Alito has indicated he might agree with that. But the views of the other justices are something of a black box.

The cases are Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh.

Jordan Jackson contributed to this story.

Copyright 2023 NPR. To see more, visit https://www.npr.org.

Nina Totenberg is NPR's award-winning legal affairs correspondent. Her reports air regularly on NPR's critically acclaimed newsmagazines All Things Considered, Morning Edition, and Weekend Edition.