How the Media is Helping AI Spread Lies
It’s not just fake images. An even more disturbing change is happening.
Stories of AI fakes are everywhere. “Thousands have swooned over this MAGA dream girl. She’s made with AI,” a Washington Post headline read. “AI-generated Iran war videos surge as creators use new tech to cash in,” the BBC reported. Brady Tkachuk of the U.S. Olympic Men’s Ice Hockey team “was none too pleased with an AI-generated video the White House released … that appeared to show him criticizing Canada,” Politico explained.
These problems are only getting worse. Even as technologists create tools to help people discern whether something is real or AI-generated, others are honing AI platforms to defeat those tools with ever more realistic images.
News agencies understand this. But many of these same agencies are making an even more insidious problem worse. They’re endangering people’s ability to learn the truth about any topic in the news.
To understand why, we need to stop and look at a revolutionary shift taking place in how we receive information.
For years, when people sought out facts on a topic, they turned to Google. What they got in response was a list of search results: names of news agencies and links to individual reports.
That’s changing. Increasingly, AI platforms are providing their own summaries of events in the news. This happens in Google results, where you may see an “AI Overview” or “AI Mode” at the top. It also happens when you type a query into an AI tool, or speak with an AI-powered voice device. In a short time, AI tools have become so popular that a majority of Americans now use them every week.
These results are structured as answers to questions. They tell us about events in the news, as though those things are real. Where do their alleged “facts” come from? Generally, from news agencies. Media giants have teamed up with AI platforms to provide content. Last year, ChatGPT added the Washington Post to its roster. This week, the Associated Press announced it will cut staff as it increases partnerships with AI companies.
Most AI results don’t tell you that the Washington Post or AP “claims that such-and-such happened.” The results treat this material as definitive. Sure, these tools do make links available. An AI user could look closely, pick through the results, and find out what the various sources are for each claim. But vanishingly few people do. Only 1% of people who received an AI summary clicked on any links it offered, according to Pew Research.
The perils are obvious. News agencies get things wrong all the time. In my podcast and newsletter, They Stand Corrected, I fact check the news. In my years on air at NPR and then CNN, where I fact-checked politicians and pundits, I saw how badly the media itself needed fact checking.
Until now, news consumers have at least been aware that they were being told something by a specific news agency. Many have lost trust in those agencies for good reason.
Now, that step in the process is gone. When AI responds to a question with “facts,” there’s no mental trigger to remind you that it’s a claim from a news agency that might have gotten the information wrong.
The fact that AI tools can “hallucinate,” making up their own false answers, is widely recognized. But far too few people recognize the larger problem. These platforms can’t fact check information in the news sources they ingest. They swallow that information whole and spit it back out on request.
A big part of the solution is for people to hold the media accountable for lying. I encourage people to cancel paid subscriptions to failing media giants and let them know why. Money talks. Big Media will only ensure the two ingredients of truth — facts and context — when we demand it.
But there’s also another solution, which I was fascinated to learn about: libraries. It turns out that some libraries are creating their own teams to separate fact from fiction and create permanent records.

An organization called the RDA Steering Committee sets guidelines and standards for handling information at libraries worldwide. I spoke with Ahava Cohen, who represents Europe on the committee and chairs the working group on AI. Working at the National Library of Israel in Jerusalem, she’s part of a team sifting through images and written materials about the October 7, 2023, massacres. The library records are meticulous, showing original sources and metadata. And the library keeps concrete evidence in fireproof stacks.
Libraries everywhere can take similar actions. Unlike the media, they’re not competing for clicks with rage-bait headlines. If they commit to rigorously fact-checking materials and documenting sources, then AI tools should be able to turn to them for information. (Ahava’s team is making its findings publicly available, including to AI.)
The news industry should be run by people with an equal commitment to truth-telling. But for now, news outlets are feeding AI with journalistic failures, paving the way for an even more dangerous era — one that will last long beyond any presidential administration.
Josh Levs is host of They Stand Corrected, the podcast and newsletter fact-checking the media. Find him at joshlevs.com.

Thanks for sharing my latest column for The Contrarian. This issue will last far beyond Trump and any other administration. It's about the very big picture. Will future generations know the truth about what happened in this era? Not with failed legacy media feeding AI. Share thoughts and questions here: https://theystandcorrected.substack.com/