
If AI provides false information, who takes the blame?

AYESHA RASCOE, HOST:

If someone went online and falsely said that you, yes, you had tried to join the Taliban shortly after the September 11 attacks, you could probably sue them for defamation. But what if an AI chatbot belonging to a company did that? Could you sue the company? In fact, can corporations even be held liable for what their chatbots say if they can't control every word they say? Michael Karanicolas is the executive director of the UCLA Institute for Technology Law and Policy, and he joins us now. Welcome to the program.

MICHAEL KARANICOLAS: Thanks so much for having me. I'm interested to discuss this topic.

RASCOE: This stuff is already happening. Like a lawsuit was filed against a company because its AI-driven search engine said a man tried to join the Taliban. And there's another lawsuit where a man says he was falsely accused of embezzlement by a chatbot. So the litigious future is here. Is our legal system ready for it?

KARANICOLAS: There is a whole universe of harms that is likely going to start manifesting as these systems become popularized and integrated into our daily lives. The examples that you mentioned around defamation are part of that, but you can also imagine situations where an AI chatbot advised a person to do a medical procedure on themselves that was dangerous, to ingest a particular kind of mushroom that's not safe. There's all kinds of much more severe harms. This is what the legal system essentially has to figure out - do we place the burdens for these harms on the users of these systems, on the companies? Is there a third party that we feel is responsible? And it's going to take some catching up.

RASCOE: If the thing doing the talking or producing the images and videos isn't human, can the business that owns it be held accountable for it?

KARANICOLAS: There's a principle across a lot of digital speech that says that online intermediaries are usually not responsible for the content that they host. So if somebody goes on to Facebook and writes something defamatory about me, I am not able to sue Facebook about that. The difference between that kind of example and these AI chatbots is there's no human individual that you can actually go back to, because what these AI language models are doing is they're kind of amalgamating speech together from, you know, millions and millions of different sources that they're finding online. The companies that own these systems certainly have plenty of money behind them. If the AI is responsible, then ultimately, the company is responsible, and ultimately you can assign responsibility to a corporate individual in that way.

RASCOE: Some states are trying to get on top of this issue. Colorado has a new consumer protection law focusing on AI. And in California, there's a bill in the works requiring some companies to do safety testing and reports to avoid liability in some instances. Do you think these approaches will be enough?

KARANICOLAS: There has been this huge raft of legislation proposed, especially in California, but across a number of different states. And there's also been legislation proposed at the federal level. But because of how the party dynamics work, these things are much more likely to get passed at the state level. So it's too early to tell the actual impact that these different legislative models are going to have. But my hope at the end of the day, and one of the big advantages that state legislation has over federal models, is that it can be iterative. You can get things passed and then revised a couple of years later, and so it allows for better models of regulation to emerge.

RASCOE: With these concerns that people are raising, there's also the inevitable pushback from these companies saying that being held liable could thwart innovation.

KARANICOLAS: Well, that's always the tension that regulators face. I think in a lot of ways, it depends on the harm that we're talking about, and the scale of risk and the scale of damage that we're willing to countenance in order to foster innovation. And it's also a question of the degree to which this just needs to be priced in. So if AI is causing particular harms, it's a question of whether we expect the companies to shoulder the cost of these harms and to pass that on to their shareholders, to their investors, to their consumers through higher prices, or whether we expect the individuals who suffered these harms to just suck it up and deal with the costs of it in order to support the benefits of these technologies. There are all kinds of different economic models that you can develop which allow risk to be balanced and to be placed in a way that maintains the ability of companies to innovate and consumers to access new products.

RASCOE: That's Michael Karanicolas, executive director of the UCLA Institute for Technology Law and Policy. Thanks so much for speaking to us today.

KARANICOLAS: Thanks so much. I really enjoyed the conversation.

Transcript provided by NPR, Copyright NPR.

