Can politicians catch up with AI?

Robot business man go brrrr. (Isaac Lawrence / AFP via Getty Images)

Experts are warning that artificial intelligence is developing far more rapidly than regulators can keep up with. Is there any chance of picking up that slack?

Who is he? Paul Scharre is an author and vice president at the Center for a New American Security. His work focuses on artificial intelligence and how it intersects with power.

  • Scharre's work has led him to write a book on the subject: Four Battlegrounds: Power in the Age of Artificial Intelligence.
  • On Tuesday, Congress held a hearing devoted to the conversation about how Washington should oversee this new frontier of tech.
  • Technology is at a crucial turning point in history: as models become more sophisticated, technology experts continue to warn that AI without regulation is a danger to society.

What's the big deal? It seems like everyone involved in the conversation wants to curb the tech. The question is how.

  • OpenAI CEO Sam Altman testified at the hearing. Altman oversees the company responsible for creating ChatGPT, the generative AI tool that is already changing the way many of us go about our everyday lives.
  • In his prepared testimony to the Senate subcommittee, Altman wrote: "OpenAI believes that regulation of AI is essential, and we're eager to help policymakers as they determine how to facilitate regulation that balances incentivizing safety while ensuring that people are able to access the technology's benefits."
  • During Tuesday's hearing, various lawmakers proposed different safeguards, such as requiring licensing or even creating a new federal agency.
  • This comes in response to criticism, including from Scharre, that Congress has been too slow to establish regulations for social media.

  • Want more on tech? Listen to the Consider This episode on how social media use impacts teen mental health.


    Paul Scharre says ensuring the AI systems that are being built are safe is essential. (Win McNamee / Getty Images)

    What's he saying? Scharre spoke with NPR's Ari Shapiro about what role legislative regulation can actually play in the development of AI.

    On whether Congress can play a significant role in regulating AI:

    There is definitely a valuable role for Congress, but there's a huge disconnect between the pace of the technology, especially in AI, and the pace of lawmaking. 

    So I think there's a real incentive for Congress to move faster, and that's what we see. I think what members of Congress are trying to do here with these hearings is figure out what's going on with AI and then what is the role that government needs to play to regulate this?

    On what role the government should play:

    There's certainly not a consensus. And I think part of it is that it can mean so many different things. It can be facial recognition, or be used in finance or medicine. And there's going to be a lot of industry-specific regulation. 

    On whether Congress will take meaningful action on the matter:

    A pessimistic answer is we're probably likely to see not very much.

    That's been the story so far with social media, for example. But I think, you know, the place where there's value here would really be if we can get just a couple specific kinds of narrow regulation. There was some talk about a licensing regime for training these very powerful models, which probably makes some sense at this point, given some of their characteristics. And then things like requirements to label AI-generated media. California passed a law like this called the Blade Runner law. I love this term. It basically says that if you're talking to a bot, it has to disclose that it's a bot. That's pretty sensible.

    On the risks of unregulated AI:

    One of the risks is that we see a wide proliferation of very powerful AI systems that are general purpose, that could do lots of good things and lots of bad things.

    And we see some bad actors use them for things like helping to design better chemical or biological weapons or cyber attacks. And it's really hard to defend against that if there aren't guardrails in place and if anyone can access this just as easily as anyone can hop on the internet today. And so thinking about how do we control proliferation, how do we ensure the systems that are being built are safe is really essential.

    So, what now?

  • NPR congressional correspondent Claudia Grisales says lawmakers are at the earliest stages of trying to develop comprehensive AI legislation, adding that "the U.S. is woefully behind other places, such as the European Union."
  • By the time this is published, there will probably be another horrible Wes Anderson-themed AI movie trailer gaining traction on Twitter.
  • Learn more:

  • What if AI could rebuild the middle class?
  • When you realize your favorite new song was written and performed by ... AI
  • AI-generated deepfakes are moving fast. Policymakers can't keep up
Copyright 2023 NPR. To see more, visit https://www.npr.org.

    Manuela López Restrepo
    Manuela López Restrepo is a producer and writer at All Things Considered. She's been at NPR since graduating from The University of Maryland, and has worked at shows like Morning Edition and It's Been A Minute. She lives in Brooklyn with her cat Martin.