
Why AI models hallucinate

Artificial intelligence chatbots will confidently give you an answer for just about anything you ask them. But those answers aren’t always right.

AI companies call these confident, incorrect responses “hallucinations.” Researchers at OpenAI have been digging into why large language models hallucinate, and they say part of the problem is that the leaderboards used to rank AI models reward confident guessing while penalizing models that admit uncertainty.
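To make that incentive concrete, here is a minimal sketch, not from the segment itself, of the scoring assumption the researchers describe: if a benchmark awards one point for a correct answer and zero for either a wrong answer or an “I don't know,” then guessing always has a higher expected score than abstaining, no matter how unsure the model is.

```python
# Toy illustration of accuracy-only scoring (an assumption for
# illustration, not OpenAI's actual benchmark code):
# 1 point for a correct answer, 0 for a wrong answer,
# and 0 for abstaining ("I don't know").

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected benchmark score for a single question.

    p_correct: the model's chance of guessing correctly.
    abstain:   whether the model declines to answer instead.
    """
    if abstain:
        return 0.0       # admitting uncertainty earns nothing
    return p_correct     # a guess earns p_correct on average

# Even a long-shot guess outscores honesty under this rubric,
# so a model optimized for the leaderboard learns to guess.
print(expected_score(0.10, abstain=False))  # 0.1
print(expected_score(0.10, abstain=True))   # 0.0
```

Under this kind of rubric, a model that always answers will climb the rankings past one that flags its own uncertainty, which is the dynamic the OpenAI researchers point to.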

Here & Now’s Scott Tong speaks with Ina Fried, chief technology correspondent for Axios.

This article was originally published on WBUR.org.

Copyright 2025 WBUR

Here & Now Newsroom