
People disagree about the risks and benefits of artificial intelligence

MICHEL MARTIN, HOST:

The company OpenAI, known for creating ChatGPT, has had three CEOs in as many days after the board fired former chief executive Sam Altman, and Microsoft has scooped up Altman in a reshuffling. Now, this might sound like just some internal company drama, but a lot of people who follow the company and the industry say it likely reflects a larger conflict, a culture war over artificial intelligence. Here's Altman testifying at a Senate hearing in May.

(SOUNDBITE OF ARCHIVED RECORDING)

SAM ALTMAN: As this technology advances, we understand that people are anxious about how it could change the way we live. We are, too.

MARTIN: The Economist magazine has terms for the two camps that disagree about the risks and benefits of AI: boomers - and, no, that's not a reference to baby boomers - and doomers. So what's the divide about, and is it having a real effect on the technology and its development? We've called David Kiron for this. He's editorial director at MIT Sloan Management Review. Welcome. Good morning.

DAVID KIRON: Good morning, Michel.

MARTIN: OK, so in the context of AI, who is a boomer, and who is a doomer?

KIRON: The boomers are those who see AI as, like, changing the world for the better. AI's going to be creating better medicine. There's going to be creation of new scientific breakthroughs. Businesses are going to become more efficient, and, like, workers are going to have access to much more opportunity.

MARTIN: OK. And so who's a doomer?

KIRON: So the doomers are those who see AI as having all kinds of technical challenges. People are going to be able to manipulate the AI for sinister purposes, and there are going to be societal implications that are undesirable: diminished human creativity, diminished human learning - you might not need to learn as many things as you once did - and changes to the nature of human relationships.

MARTIN: So do you think this divide is having an effect on the development of AI? I mean, just in the way we talk about this, you can see people using different language to talk about it. Some people say, oh, it's hampering the development of AI. Some people say it's appropriately slowing it down, you know? So those are opinions. But in your view, is this divide actually having an effect on the development of AI?

KIRON: You know, it might have an effect on the pacing of the development of AI, but I don't think it's having an effect on the overall trajectory of AI. AI is out of the bag. This is happening. And with large language models like ChatGPT, I mean, they're putting in all kinds of guardrails, but AI researchers are finding ways around these guardrails, and sinister actors are going to be able to develop their own LLMs that can serve their own purposes.

MARTIN: So the U.S. government - I think many people will have seen this - is one of the entities seeking to put guardrails around AI. Would you put them in the doomer category?

KIRON: They're trying to straddle the line. They see the ill effects that are possible. They see the positive benefits. They certainly see sort of the benefits to the economy. But they're trying to control or manage the pace of development. But again, this is a Sisyphean effort on their part.

MARTIN: So we've talked a little bit about this division, right? And obviously people in the field kind of see this. But is there any consensus within the industry over what responsible AI looks like?

KIRON: We convened a panel of experts, about a dozen people, and they all said that you need to have a centralized team within the organization. And as you operationalize AI throughout the organization, you need responsible AI people, like, where the AI is actually happening. So you need both: a centralized and a decentralized approach to developing responsible AI in organizations.

MARTIN: Obviously, this is still an evolving story, but if Altman stays with Microsoft, does that mean something?

KIRON: It means something, because a lot of other people from OpenAI would probably go with him, so it's not just Altman. It means Microsoft is going to have more direct control over the technology that OpenAI was developing, technology Microsoft was already investing in. So the main benefits are going to accrue to Microsoft.

MARTIN: But overall, in the development of this - I mean, obviously I don't know how significant one person is, but he clearly brings a lot of talent with him. Does that mean something about the direction of the industry overall?

KIRON: I don't think so. I don't think so. He's one player. Some other people left OpenAI to form their own AI business, Anthropic, intending to put more guardrails on the technology. So, you know, there's a cast of characters in the AI field, and they're moving around. And the development of AI is going to be dependent on them and the generation that follows. So I wouldn't put too much emphasis on one individual, however brilliant they are.

MARTIN: That's David Kiron with MIT Sloan Management Review. Mr. Kiron, thanks so much for sharing these insights with us.

KIRON: Thank you, Michel. Appreciate it.

Transcript provided by NPR, Copyright NPR.

